Glossary
@sphinxdirective
.. meta::
   :description: Check the list of acronyms, abbreviations and terms used in Intel® Distribution of OpenVINO™ toolkit.
Acronyms and Abbreviations
#################################################
================== ===========================================================================
Abbreviation       Description
================== ===========================================================================
API                Application Programming Interface
AVX                Advanced Vector Extensions
clDNN              Compute Library for Deep Neural Networks
CLI                Command Line Interface
CNN                Convolutional Neural Network
CPU                Central Processing Unit
CV                 Computer Vision
DL                 Deep Learning
DLL                Dynamic Link Library
DNN                Deep Neural Networks
ELU                Exponential Linear Unit
FCN                Fully Convolutional Network
FP                 Floating Point
GCC                GNU Compiler Collection
GPU                Graphics Processing Unit
HD                 High Definition
IR                 Intermediate Representation
JIT                Just In Time
JTAG               Joint Test Action Group
LPR                License-Plate Recognition
LRN                Local Response Normalization
mAP                Mean Average Precision
Intel® oneDNN      Intel® oneAPI Deep Neural Network Library
mo                 Command-line tool for model conversion, CLI for tools.mo.convert_model (legacy)
MVN                Mean Variance Normalization
NCDHW              Number of images, Channels, Depth, Height, Width
NCHW               Number of images, Channels, Height, Width
NHWC               Number of images, Height, Width, Channels
NMS                Non-Maximum Suppression
NN                 Neural Network
NST                Neural Style Transfer
OD                 Object Detection
OS                 Operating System
ovc                OpenVINO Model Converter, command-line tool for model conversion
PCI                Peripheral Component Interconnect
PReLU              Parametric Rectified Linear Unit
PSROI              Position Sensitive Region Of Interest
RCNN, R-CNN        Region-based Convolutional Neural Network
ReLU               Rectified Linear Unit
ROI                Region Of Interest
SDK                Software Development Kit
SSD                Single Shot MultiBox Detector
SSE                Streaming SIMD Extensions
USB                Universal Serial Bus
VGG                Visual Geometry Group
VOC                Visual Object Classes
WINAPI             Windows Application Programming Interface
================== ===========================================================================
Terms
#################################################
Glossary of terms used in OpenVINO™
| Batch | The number of images processed in one inference call. The maximum batch size is a property of the model, set before its compilation. In the NHWC, NCHW, and NCDHW image data layout representations, 'N' refers to the number of images in the batch.
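As an illustration of where the batch dimension sits in these layouts, here is a minimal NumPy sketch (it shows only the shape convention, not the OpenVINO API itself):

```python
import numpy as np

# A batch of 8 RGB images of size 224x224 in NCHW layout:
# N (batch) = 8, C (channels) = 3, H (height) = 224, W (width) = 224
batch = np.zeros((8, 3, 224, 224), dtype=np.float32)

n, c, h, w = batch.shape
print(n)  # 8 -- the 'N' dimension, i.e. the number of images per inference call
```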
| Device Affinity | A preferred hardware device to run inference (CPU, GPU, GNA, etc.).
| Extensibility mechanism, Custom layers | The mechanism that lets you extend the OpenVINO™ Runtime and model conversion API so that they can work with models containing operations that are not yet supported.
| layer / operation | In OpenVINO, both terms are used synonymously. To avoid confusion, the term "layer" is being phased out and "operation" is the currently accepted term.
| Model conversion API | The API used to import and convert models trained in popular frameworks to a format usable by other OpenVINO components. It is represented by the Python openvino.convert_model() function and the ovc command-line tool.
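For example, both entry points can be used like this (a sketch; ``model.onnx`` is a placeholder file name, not a file shipped with OpenVINO):

```shell
# CLI: convert an ONNX model to OpenVINO IR (produces model.xml + model.bin)
ovc model.onnx --output_model model.xml

# Python API equivalent (run inside a script or interpreter):
#   import openvino as ov
#   ov_model = ov.convert_model("model.onnx")
#   ov.save_model(ov_model, "model.xml")
```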
| OpenVINO™ Core | OpenVINO™ Core is a software component that manages inference on certain Intel® hardware devices: CPU, GPU, GNA, etc.
| OpenVINO™ API | The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite file formats, set input and output formats and execute the model on various devices.
| OpenVINO™ Runtime | A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.
| ov::Model | A class representing a model that OpenVINO™ Runtime reads from IR or converts from the ONNX, PaddlePaddle, TensorFlow, or TensorFlow Lite format. It consists of the model structure, weights, and biases.
| ov::CompiledModel | An instance of a compiled model, from which the OpenVINO™ Runtime can create (several) infer requests and perform inference synchronously or asynchronously.
| ov::InferRequest | A class that represents a single inference request on a model compiled by a device and represented by a compiled model. Inputs are set through this interface, and outputs should be requested from it as well.
| ov::ProfilingInfo | Represents basic inference profiling information per operation.
| ov::Layout | Image data layout refers to the representation of a batch of images. The layout describes the order of 4D or 5D tensor dimensions in memory. The typical NCHW format represents pixels in the horizontal direction, rows in the vertical dimension, planes by channel, and images in the batch. See also the Layout API Overview.
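The difference between layouts can be illustrated with plain NumPy (a sketch of the data ordering only, not the ov::Layout API):

```python
import numpy as np

# One image (N=1) with 3 channels, height 2, width 3, stored NCHW (planar):
nchw = np.arange(1 * 3 * 2 * 3).reshape(1, 3, 2, 3)

# Reordering to NHWC (interleaved) moves the channel axis to the end:
nhwc = nchw.transpose(0, 2, 3, 1)

print(nchw.shape)  # (1, 3, 2, 3)
print(nhwc.shape)  # (1, 2, 3, 3)
```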
| ov::element::Type | Represents data element type. For example, f32 is 32-bit floating point, f16 is 16-bit floating point.
| plugin / Inference Device / Inference Mode | OpenVINO makes hardware available for inference based on several core components. They used to be called "plugins" in earlier versions of the documentation, and you may still find this term in some articles. Because of their role in the software, they are now referred to as Devices and Modes ("virtual" devices). For a detailed description of the concept, refer to :doc:`Inference Modes <openvino_docs_Runtime_Inference_Modes_Overview>` and :doc:`Inference Devices <openvino_docs_OV_UG_Working_with_devices>`.
| Tensor | A memory container used for storing inputs and outputs of the model, as well as weights and biases of the operations.
See Also
#################################################

- :doc:`Available Operations Sets <openvino_docs_ops_opset>`
- :doc:`Terminology <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`
@endsphinxdirective