[DOCS] shift to rst - resources (#16256)

This commit is contained in:
Karol Blaszczak
2023-03-16 12:10:27 +01:00
committed by GitHub
parent 0372ca929a
commit a72b9bac2f
8 changed files with 1081 additions and 960 deletions

View File

@@ -1,5 +1,8 @@
# Legal Information {#openvino_docs_Legal_Information}
@sphinxdirective
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
@@ -12,9 +15,16 @@ OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Kh
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
OpenVINO™ Logo
###########################################################
To build equity around the project, the OpenVINO logo was created for both Intel and community usage. The logo may only be used to represent the OpenVINO toolkit and offerings built using the OpenVINO toolkit.
Logo Usage Guidelines
###########################################################
The OpenVINO logo must be used in connection with truthful, non-misleading references to the OpenVINO toolkit, and for no other purpose.
Modification of the logo or use of any separate element(s) of the logo alone is not allowed.
@endsphinxdirective

File diff suppressed because it is too large

View File

@@ -1,48 +1,90 @@
# Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}
@sphinxdirective
The OpenVINO Runtime can infer models in various input and output formats. Here, you can find the configurations
supported by OpenVINO devices: CPU, GPU, and GNA (Gaussian neural accelerator coprocessor).
.. note::
   With the OpenVINO™ 2023.0 release, support has been discontinued for all VPU accelerators based on Intel® Movidius™.
The OpenVINO Runtime provides unique capabilities to infer deep learning models on the following devices:
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
| OpenVINO Device | Supported Hardware |
+==========================================================================+===============================================================================================================+
|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel&reg; Processor Graphics, including Intel&reg; HD Graphics and Intel&reg; Iris&reg; Graphics |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel&reg; Xeon&reg; with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel&reg; Core&trade; Processors with Intel&reg; |
|| | AVX2, Intel&reg; Atom&reg; Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`GNA plugin <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; |
|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; |
|| | Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; |
|| | Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, |
|| | Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; |
|| | Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; |
|| | i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, |
|| | Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, |
|| | Intel&reg; Core&trade; i3-1000G4 Processor |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Arm® CPU <openvino_docs_OV_UG_supported_plugins_ARM_CPU>` | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
|| (unavailable in the Intel® Distribution of OpenVINO™ toolkit) | |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Multi-Device <openvino_docs_OV_UG_Running_on_multiple_devices>` | Multi-Device execution enables simultaneous inference of the same model on several devices in parallel |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Auto-Device plugin <openvino_docs_OV_UG_supported_plugins_AUTO>` | Auto-Device enables selecting devices for inference automatically |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Heterogeneous plugin <openvino_docs_OV_UG_Hetero_execution>` | Heterogeneous execution enables automatically splitting inference between several devices (for example if |
|| | a device doesn't support certain operations) |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
.. note::
   ARM® CPU plugin is a community-level add-on to OpenVINO™. Intel® welcomes community participation in the OpenVINO™
   ecosystem, technical questions and code contributions on community forums. However, this component has not
   undergone full release validation or qualification from Intel®, hence no official support is offered.
Devices similar to the ones we have used for benchmarking can be accessed using `Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__,
a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution
of OpenVINO™ Toolkit. `Learn more <https://devcloud.intel.com/edge/get_started/devcloud/>`__ or `Register here <https://inteliot.force.com/DevcloudForEdge/s/>`__.
Supported Configurations
###########################################################
The OpenVINO Runtime can infer models in different formats, with various input and output formats.
This page shows the supported and optimal configurations for each plugin.
Terminology
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
=================== =============================================
Acronym/Term Description
=================== =============================================
FP32 format Single-precision floating-point format
BF16 format Brain floating-point format
FP16 format Half-precision floating-point format
I16 format 2-byte signed integer format
I8 format 1-byte signed integer format
U16 format 2-byte unsigned integer format
U8 format 1-byte unsigned integer format
=================== =============================================
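As a quick sanity check on the sizes behind these acronyms, Python's standard ``struct`` module can report the byte width of each format (an illustrative sketch, not OpenVINO API; BF16 has no native ``struct`` format code, so it is noted only in a comment):

```python
import struct

# Byte sizes of the numeric formats from the table above.
# struct format codes: f = FP32, e = FP16, h/H = I16/U16, b/B = I8/U8.
sizes = {
    "FP32": struct.calcsize("f"),   # 4 bytes, single precision
    "FP16": struct.calcsize("e"),   # 2 bytes, half precision
    "I16":  struct.calcsize("h"),   # 2 bytes, signed
    "U16":  struct.calcsize("H"),   # 2 bytes, unsigned
    "I8":   struct.calcsize("b"),   # 1 byte, signed
    "U8":   struct.calcsize("B"),   # 1 byte, unsigned
}
# BF16 is also 2 bytes: it keeps FP32's 8 exponent bits but truncates
# the mantissa to 7 bits, which is why it trades precision for range.
print(sizes)
```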
NHWC, NCHW, and NCDHW refer to the data ordering in batches of images:
* NHWC and NCHW refer to image data layout.
* NCDHW refers to image sequence data layout.
Abbreviations in the support tables are as follows:
* N: Number of images in a batch
* D: Depth. Depending on the model, it can be a spatial or a time dimension
* H: Number of pixels in the vertical dimension
@@ -52,66 +94,91 @@ Abbreviations in the support tables are as follows:
CHW, NC, C - Tensor memory layout.
For example, the CHW value at index (c,h,w) is physically located at index (c\*H+h)\*W+w, for others by analogy.
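The offset rule above can be sketched in plain Python (illustrative only, not OpenVINO API), extending CHW to NCHW and NHWC by analogy:

```python
# Flat memory offsets for dense, row-major tensor layouts.
# For CHW, the value at (c, h, w) lives at (c*H + h)*W + w;
# NCHW and NHWC follow by analogy, with the last-named dimension fastest.

def chw_offset(c, h, w, H, W):
    return (c * H + h) * W + w

def nchw_offset(n, c, h, w, C, H, W):
    return ((n * C + c) * H + h) * W + w

def nhwc_offset(n, h, w, c, H, W, C):
    return ((n * H + h) * W + w) * C + c

# Sanity check: enumerating indices in layout order yields consecutive offsets.
C, H, W = 2, 3, 4
offsets = [chw_offset(c, h, w, H, W)
           for c in range(C) for h in range(H) for w in range(W)]
assert offsets == list(range(C * H * W))
```

Note how the same logical index (n, c, h, w) lands at different physical offsets in NCHW and NHWC, which is exactly why layout matters when feeding data to a plugin.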
Supported Model Formats
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
================== =========================== ============================ ==========================
Plugin FP32 FP16 I8
================== =========================== ============================ ==========================
CPU plugin Supported and preferred Supported Supported
GPU plugin Supported Supported and preferred Supported
GNA plugin Supported Supported Not supported
Arm® CPU plugin Supported and preferred Supported Supported (partially)
================== =========================== ============================ ==========================
### Supported Input Precision
|Plugin |FP32 |FP16 |U8 |U16 |I8 |I16 |
|:------------------|:--------:|:-------------:|:-------------:|:-------------:|:------------:|:-------------:|
|CPU plugin |Supported |Supported |Supported |Supported |Supported |Supported |
|GPU plugin |Supported |Supported\* |Supported\* |Supported\* |Not supported |Supported\* |
|GNA plugin |Supported |Not supported |Supported |Not supported |Supported |Supported |
|Arm® CPU plugin |Supported |Supported |Supported |Supported |Not supported |Not supported |
For :doc:`Multi-Device <openvino_docs_OV_UG_Running_on_multiple_devices>` and
:doc:`Heterogeneous <openvino_docs_OV_UG_Hetero_execution>` executions, the supported model formats depend
on the actual underlying devices. *Generally, FP16 is preferable, as it is the most ubiquitous and performant.*
Supported Input Precision
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
================= =========== =============== ============== =============== ============== =================
Plugin FP32 FP16 U8 U16 I8 I16
================= =========== =============== ============== =============== ============== =================
CPU plugin Supported Supported Supported Supported Supported Supported
GPU plugin Supported Supported\* Supported\* Supported\* Not supported Supported\*
GNA plugin Supported Not supported Supported Not supported Supported Supported
Arm® CPU plugin Supported Supported Supported Supported Not supported Not supported
================= =========== =============== ============== =============== ============== =================
\* - Supported via ``SetBlob`` only, ``GetBlob`` returns FP32
For :doc:`Multi-Device <openvino_docs_OV_UG_Running_on_multiple_devices>` and
:doc:`Heterogeneous <openvino_docs_OV_UG_Hetero_execution>` executions, the supported input precision
depends on the actual underlying devices. *Generally, U8 is preferable as it is most ubiquitous*.
Supported Output Precision
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
================== ========== ================
Plugin FP32 FP16
================== ========== ================
CPU plugin Supported Supported
GPU plugin Supported Supported
GNA plugin Supported Not supported
Arm® CPU plugin Supported Supported
================== ========== ================
For :doc:`Multi-Device <openvino_docs_OV_UG_Running_on_multiple_devices>` and
:doc:`Heterogeneous <openvino_docs_OV_UG_Hetero_execution>` executions, the supported output precision
depends on the actual underlying devices. *Generally, FP32 is preferable as it is most ubiquitous*.
Supported Input Layout
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
================== =============== ============ ============ ============
Plugin NCDHW NCHW NHWC NC
================== =============== ============ ============ ============
CPU plugin Supported Supported Supported Supported
GPU plugin Supported Supported Supported Supported
GNA plugin Not supported Supported Supported Supported
Arm® CPU plugin Not supported Supported Supported Supported
================== =============== ============ ============ ============
Supported Output Layout
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
====================== ======= ====== ===== ==== ====
Number of dimensions 5 4 3 2 1
====================== ======= ====== ===== ==== ====
Layout NCDHW NCHW CHW NC C
====================== ======= ====== ===== ==== ====
For setting relevant configuration, refer to the
:doc:`Integrate with Customer Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
topic (step 3 "Configure input and output").
Supported Layers
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following layers are supported by the plugins:
============================== ============== =============== ============== ==================
Layers GPU CPU GNA Arm® CPU
============================== ============== =============== ============== ==================
@@ -260,13 +327,16 @@ Unpooling Supported Not Supported Not Supported
Unsqueeze Supported Supported\*\* Supported Supported
Upsampling Supported Not Supported Not Supported Not Supported
============================== ============== =============== ============== ==================
\* - support is limited to the specific parameters. Refer to "Known Layer Limitations" section for the device :doc:`from the list of supported <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`.
\*\* - support is implemented via :doc:`Extensibility mechanism <openvino_docs_Extensibility_UG_Intro>`.
\*\*\* - supports NCDHW layout.
\*\*\*\* - support is implemented via runtime reference.
@endsphinxdirective

View File

@@ -1,110 +1,120 @@
# Glossary {#openvino_docs_OV_Glossary}
@sphinxdirective
Acronyms and Abbreviations
#################################################
================== ==================================================
Abbreviation Description
================== ==================================================
API Application Programming Interface
AVX Advanced Vector Extensions
clDNN Compute Library for Deep Neural Networks
CLI Command Line Interface
CNN Convolutional Neural Network
CPU Central Processing Unit
CV Computer Vision
DL Deep Learning
DLL Dynamic Link Library
DNN Deep Neural Networks
ELU Exponential Linear rectification Unit
FCN Fully Convolutional Network
FP Floating Point
GCC GNU Compiler Collection
GPU Graphics Processing Unit
HD High Definition
IR Intermediate Representation
JIT Just In Time
JTAG Joint Test Action Group
LPR License-Plate Recognition
LRN Local Response Normalization
mAP Mean Average Precision
Intel® OneDNN Intel® OneAPI Deep Neural Network Library
MO Model Optimizer
MVN Mean Variance Normalization
NCDHW Number of images, Channels, Depth, Height, Width
NCHW Number of images, Channels, Height, Width
NHWC Number of images, Height, Width, Channels
NMS Non-Maximum Suppression
NN Neural Network
NST Neural Style Transfer
OD Object Detection
OS Operating System
PCI Peripheral Component Interconnect
PReLU Parametric Rectified Linear Unit
PSROI Position Sensitive Region Of Interest
RCNN, R-CNN Region-based Convolutional Neural Network
ReLU Rectified Linear Unit
ROI Region Of Interest
SDK Software Development Kit
SSD Single Shot multibox Detector
SSE Streaming SIMD Extensions
USB Universal Serial Bus
VGG Visual Geometry Group
VOC Visual Object Classes
WINAPI Windows Application Programming Interface
================== ==================================================
Terms
#################################################
Glossary of terms used in OpenVINO™
| *Batch*
| Number of images to analyze during one call of infer. Maximum batch size is a property of the model set before its compilation. In NHWC, NCHW, and NCDHW image data layout representations, the 'N' refers to the number of images in the batch.
| *Device Affinity*
| A preferred hardware device to run inference (CPU, GPU, GNA, etc.).
| *Extensibility mechanism, Custom layers*
| The mechanism that provides you with capabilities to extend the OpenVINO™ Runtime and Model Optimizer so that they can work with models containing operations that are not yet supported.
| *layer / operation*
| In OpenVINO, both terms are treated synonymously. To avoid confusion, the term "layer" is being phased out and "operation" is the currently accepted term.
| *OpenVINO™ Core*
| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.
| *OpenVINO™ API*
| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow file formats, set input and output formats and execute the model on various devices.
| *OpenVINO™ Runtime*
| A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.
| ``ov::Model``
| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow formats. Consists of model structure, weights and biases.
| ``ov::CompiledModel``
| An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously.
| ``ov::InferRequest``
| A class that represents the end point of inference on the model compiled by the device and represented by a compiled model. Inputs are set here, outputs should be requested from this interface as well.
| ``ov::ProfilingInfo``
| Represents basic inference profiling information per operation.
| ``ov::Layout``
| Image data layout refers to the representation of an image batch. Layout shows a sequence of 4D or 5D tensor data in memory. A typical NCHW format represents pixels in the horizontal direction, rows by the vertical dimension, planes by channel, and images into batch. See also :doc:`Layout API Overview <openvino_docs_OV_UG_Layout_Overview>`.
| ``ov::element::Type``
| Represents data element type. For example, f32 is 32-bit floating point, f16 is 16-bit floating point.
| *plugin / Inference Device / Inference Mode*
| OpenVINO makes hardware available for inference based on several core components. They used to be called "plugins" in earlier versions of the documentation and you may still find this term in some articles. Because of their role in the software, they are now referred to as Devices and Modes ("virtual" devices). For a detailed description of the concept, refer to :doc:`Inference Modes <openvino_docs_Runtime_Inference_Modes_Overview>` and :doc:`Inference Devices <openvino_docs_OV_UG_Working_with_devices>`.
| *Tensor*
| A memory container used for storing inputs and outputs of the model, as well as weights and biases of the operations.
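As an illustrative sketch (plain Python, not the actual ``ov::Tensor`` API), a tensor can be modeled as a flat buffer plus a shape:

```python
# A toy tensor: a flat, dense, row-major buffer plus a shape,
# mirroring how runtimes typically store inputs, outputs, and weights.
class ToyTensor:
    def __init__(self, shape, fill=0.0):
        self.shape = tuple(shape)
        size = 1
        for d in self.shape:
            size *= d
        self.data = [fill] * size  # one contiguous buffer

    def offset(self, *index):
        # Row-major ("C-order") flat offset, e.g. NCHW for a 4D shape.
        flat = 0
        for i, d in zip(index, self.shape):
            flat = flat * d + i
        return flat

t = ToyTensor((1, 3, 2, 2))       # N=1, C=3, H=2, W=2
assert len(t.data) == 12
assert t.offset(0, 1, 0, 1) == 5  # (((0*3 + 1)*2 + 0)*2 + 1)
```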
See Also
#################################################
* :doc:`Available Operations Sets <openvino_docs_ops_opset>`
* :doc:`Terminology <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`
@endsphinxdirective

View File

@@ -2,7 +2,7 @@
@sphinxdirective
To ensure you do not have to wait long to test OpenVINO's upcoming features,
OpenVINO developers continue to roll out prerelease versions. On this page, you can find
a general changelog and the schedule for all versions for the current year.
@@ -25,7 +25,7 @@ a general changelog and the schedule for all versions for the current year.
* Enabled PaddlePaddle Framework 2.4
* Preview of the TensorFlow Lite Frontend. Load models directly via ``read_model`` into OpenVINO Runtime and export the OpenVINO IR format using Model Optimizer or ``convert_model``.
* PyTorch Frontend is available as an experimental feature, which allows you to convert PyTorch models using the ``convert_model`` Python API directly from your code, without the need to export to ONNX. Model coverage is continuously increasing. Feel free to start using the option and give us feedback.
* Model Optimizer now uses the TensorFlow Frontend as the default path for conversion to IR. Known limitations compared to the legacy approach are: TF1 Loop, Complex types, models requiring config files, and old Python extensions. The solution detects unsupported functionalities and provides a fallback. To force using the legacy frontend, ``--use_legacy_frontend`` can be specified.
* Model Optimizer now supports out-of-the-box conversion of TF2 Object Detection models. At this point, the same performance experience is guaranteed only on CPU devices. Feel free to start enjoying TF2 Object Detection models without config files!
* Introduced the new option ``ov::auto::enable_startup_fallback`` / ``ENABLE_STARTUP_FALLBACK`` to control whether to use CPU to accelerate first-inference latency for accelerator HW devices like GPU.
* New ``FrontEndManager::register_front_end(name, lib_path)`` interface added, to remove the ``OV_FRONTEND_PATH`` env var (a way to load non-default frontends).

View File

@@ -29,25 +29,28 @@
Case Studies <https://www.intel.com/openvino-success-stories>
This section includes a variety of reference information focusing mostly on describing OpenVINO
and its proprietary model format, OpenVINO IR.
:doc:`Performance Benchmarks <openvino_docs_performance_benchmarks>` contain results from benchmarking models with OpenVINO on Intel hardware.
:doc:`OpenVINO IR format <openvino_ir>` is the proprietary model format of OpenVINO. Read more details on its operations and usage.
:doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>` is compatibility information about supported hardware accelerators.
:doc:`Supported Models <openvino_supported_models>` is a table of models officially supported by OpenVINO.
:doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` are lists of framework layers supported by OpenVINO.
[Glossary](../glossary.md) contains terms used in OpenVINO.
:doc:`Glossary <openvino_docs_OV_Glossary>` contains terms used in OpenVINO.
[Legal Information](../Legal_Information.md) has trademark information and other legal statements.
:doc:`Legal Information <openvino_docs_Legal_Information>` has trademark information and other legal statements.
:doc:`OpenVINO™ Telemetry <openvino_docs_telemetry_information>` has detailed information on the telemetry data collection.
`Case Studies <https://www.intel.com/openvino-success-stories>`__ are articles about real-world examples of OpenVINO™ usage.
@endsphinxdirective
[OpenVINO™ Telemetry](telemetry_information.md) has detailed information on the telemetry data collection.
[Case Studies](https://www.intel.com/openvino-success-stories) are articles about real-world examples of OpenVINO™ usage.
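The link changes above all follow one mechanical rule: a Markdown link to a local page becomes a Sphinx :doc: role pointing at the page's label. A hedged sketch of that rewrite (the label map is a made-up excerpt, not the full mapping used in the migration):

```python
import re

# Example mapping from a Markdown source path to its Sphinx document label
# (illustrative excerpt only).
LABELS = {
    "openvino_ir.md": "openvino_ir",
    "../glossary.md": "openvino_docs_OV_Glossary",
}

def md_link_to_doc_role(line):
    """Rewrite [text](path.md) into :doc:`text <label>` using the label map."""
    def repl(match):
        text, path = match.group(1), match.group(2)
        label = LABELS.get(path)
        # Leave the link untouched if the target is not a known local page.
        return f":doc:`{text} <{label}>`" if label else match.group(0)
    return re.sub(r"\[([^\]]+)\]\(([^)]+\.md)\)", repl, line)

print(md_link_to_doc_role("[Glossary](../glossary.md) contains terms used in OpenVINO."))
```

External links, such as the Case Studies URL, are left as-is, since :doc: roles only resolve documents inside the Sphinx project.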

@@ -24,42 +24,28 @@ before every release. These models are considered officially supported.
The following table summarizes the number of models supported by OpenVINO™ in different categories:
+--------------------------------------------+-------------------+
| Model Categories: | Number of Models: |
+============================================+===================+
| Object Detection | 149 |
+--------------------------------------------+-------------------+
| Instance Segmentation | 3 |
+--------------------------------------------+-------------------+
| Semantic Segmentation | 19 |
+--------------------------------------------+-------------------+
| Image Processing, Enhancement | 16 |
+--------------------------------------------+-------------------+
| Monodepth | 2 |
+--------------------------------------------+-------------------+
| Colorization | 2 |
+--------------------------------------------+-------------------+
| Behavior / Decision Prediction | 1 |
+--------------------------------------------+-------------------+
| Action Recognition | 2 |
+--------------------------------------------+-------------------+
| Time Series Forecasting | 1 |
+--------------------------------------------+-------------------+
| Image Classification | 68 |
+--------------------------------------------+-------------------+
| Image Classification, Dual Path Network | 1 |
+--------------------------------------------+-------------------+
| Image Classification, Emotion | 1 |
+--------------------------------------------+-------------------+
| Image Translation | 1 |
+--------------------------------------------+-------------------+
| Natural Language Processing                | 35                |
+--------------------------------------------+-------------------+
| Text Detection | 18 |
+--------------------------------------------+-------------------+
| Audio Enhancement | 3 |
+--------------------------------------------+-------------------+
| Sound Classification | 2 |
+--------------------------------------------+-------------------+
=========================================== ====================
Model Categories: Number of Models:
=========================================== ====================
Object Detection 149
Instance Segmentation 3
Semantic Segmentation 19
Image Processing, Enhancement 16
Monodepth 2
Colorization 2
Behavior / Decision Prediction 1
Action Recognition 2
Time Series Forecasting 1
Image Classification 68
Image Classification, Dual Path Network 1
Image Classification, Emotion 1
Image Translation 1
Natural Language Processing                 35
Text Detection 18
Audio Enhancement 3
Sound Classification 2
=========================================== ====================
@endsphinxdirective
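As a quick consistency check, the per-category counts in the table can be summed to get the overall number of officially supported models (counts copied from the table above):

```python
# Category counts copied from the supported-models table above.
MODEL_COUNTS = {
    "Object Detection": 149,
    "Instance Segmentation": 3,
    "Semantic Segmentation": 19,
    "Image Processing, Enhancement": 16,
    "Monodepth": 2,
    "Colorization": 2,
    "Behavior / Decision Prediction": 1,
    "Action Recognition": 2,
    "Time Series Forecasting": 1,
    "Image Classification": 68,
    "Image Classification, Dual Path Network": 1,
    "Image Classification, Emotion": 1,
    "Image Translation": 1,
    "Natural Language Processing": 35,
    "Text Detection": 18,
    "Audio Enhancement": 3,
    "Sound Classification": 2,
}

total = sum(MODEL_COUNTS.values())
print(total)  # 324
```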

@@ -11,10 +11,10 @@ Google Analytics is used for telemetry purposes. Refer to
`Google Analytics support <https://support.google.com/analytics/answer/6004245#zippy=%2Cour-privacy-policy%2Cgoogle-analytics-cookies-and-identifiers%2Cdata-collected-by-google-analytics%2Cwhat-is-the-data-used-for%2Cdata-access>`__ to understand how the data is collected and processed.
Enable or disable Telemetry reporting
======================================
###########################################################
First-run consent
--------------------------------------
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On the first run of an application that collects telemetry data, you will be prompted
to opt in or out of telemetry collection with the following telemetry message:
@@ -26,13 +26,13 @@ to opt in or out of telemetry collection with the following telemetry message:
directly by Intel or through the use of Google Analytics. This data will be stored
in countries where Intel or Google operate.
You can opt out at any time in the future by running 'opt_in_out --opt_out'.
You can opt out at any time in the future by running ``opt_in_out --opt_out``.
More information is available at docs.openvino.ai.
Please type 'Y' to give your consent or 'N' to decline.
Please type ``Y`` to give your consent or ``N`` to decline.
Choose your preference by typing 'Y' to enable or 'N' to disable telemetry. Your choice will
Choose your preference by typing ``Y`` to enable or ``N`` to disable telemetry. Your choice will
be confirmed by a corresponding disclaimer. If you do not reply to the telemetry message,
your telemetry data will not be collected.
@@ -42,17 +42,18 @@ if you have explicitly provided consent in another OpenVINO tool.
Changing consent decision
--------------------------------------
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can change your data collection decision with the following commands:
`opt_in_out --opt_in` - enable telemetry
``opt_in_out --opt_in`` - enable telemetry
`opt_in_out --opt_out` - disable telemetry
``opt_in_out --opt_out`` - disable telemetry
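The consent flow described above reduces to parsing a single Y/N reply, with no reply treated as no consent. A minimal sketch of that decision logic (illustrative only, not the actual opt_in_out implementation):

```python
def telemetry_consent(reply):
    """Map the user's reply to a consent decision.

    'Y' enables collection; 'N', an empty reply, or anything else leaves
    collection disabled, matching the documented default of not collecting
    data when the telemetry message goes unanswered.
    """
    return reply.strip().upper() == "Y"


print(telemetry_consent("Y"))  # True
print(telemetry_consent(""))   # False
```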
Telemetry Data Collection Details
======================================
###########################################################
.. tab:: Telemetry Data Collected