DOCS shift to rst - OpenVINO 2.0 Deployment (#16509)
# Installation & Deployment {#openvino_2_0_deployment}

@sphinxdirective

One of the main concepts for OpenVINO™ API 2.0 is being "easy to use", which includes:

* Simplification of migration from different frameworks to OpenVINO.
* Organization of OpenVINO.
* Usage of development tools.
* Development and deployment of OpenVINO-based applications.

To accomplish that, the 2022.1 release of OpenVINO introduced significant changes to the installation and deployment processes. This guide will walk you through these changes.

The Installer Package Contains OpenVINO™ Runtime Only
#####################################################

Since OpenVINO 2022.1, development tools have been distributed only via `PyPI <https://pypi.org/project/openvino-dev/>`__, and are no longer included in the OpenVINO installer package. For a list of these components, refer to the :doc:`installation overview <openvino_docs_install_guides_overview>` guide. Benefits of this approach include:

* simplification of the user experience - in previous versions, installation and usage of OpenVINO Development Tools differed from one distribution type to another (the OpenVINO installer vs. PyPI),
* ensuring that dependencies are handled properly via the PIP package manager, and supporting virtual environments for development tools.

The structure of the OpenVINO 2022.1 installer package has been organized as follows:

* The ``runtime`` folder includes headers, libraries and CMake interfaces.
* The ``tools`` folder contains :doc:`the compile tool <openvino_inference_engine_tools_compile_tool_README>`, :doc:`deployment manager <openvino_docs_install_guides_deployment_manager_tool>`, and a set of ``requirements.txt`` files with links to the corresponding versions of the ``openvino-dev`` package.
* The ``python`` folder contains the Python version for OpenVINO Runtime.

Installing OpenVINO Development Tools via PyPI
##############################################

Since OpenVINO Development Tools is no longer in the installer package, the installation process has also changed. This section describes it through a comparison with previous versions.

For Versions Prior to 2022.1
++++++++++++++++++++++++++++

In previous versions, OpenVINO Development Tools was a part of the main package. After the package was installed, to convert models (for example, TensorFlow), you needed to install additional dependencies by using the requirement files, such as ``requirements_tf.txt``, install the Post-Training Optimization Tool and Accuracy Checker via the ``setup.py`` scripts, and then use the ``setupvars`` scripts to make the tools available to commands such as the following:

.. code-block:: sh

   $ mo.py -h
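
The dependency installation and environment setup themselves were done with commands like the ones below (a sketch of the flow described above, assuming the pre-2022.1 ``<INSTALL_DIR>`` layout; exact paths varied between releases):

.. code-block:: sh

   # install framework-specific Model Optimizer dependencies
   $ python3 -m pip install -r <INSTALL_DIR>/deployment_tools/model_optimizer/requirements_tf.txt

   # make the tools available in the current shell
   $ source <INSTALL_DIR>/bin/setupvars.sh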

For 2022.1 and After
++++++++++++++++++++

In OpenVINO 2022.1 and later, you can install the development tools only from a `PyPI <https://pypi.org/project/openvino-dev/>`__ repository, using the following command (taking TensorFlow as an example):

.. code-block:: sh

   $ python3 -m pip install -r <INSTALL_DIR>/tools/requirements_tf.txt

This will install all the development tools and additional components necessary to work with TensorFlow via the ``openvino-dev`` package (see **Step 4. Install the Package** on the `PyPI page <https://pypi.org/project/openvino-dev/>`__ for parameters of other frameworks).
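
When installing directly from PyPI, the target framework is selected through package extras instead of a requirements file (shown here for TensorFlow 2; the exact extras names are listed on the PyPI page):

.. code-block:: sh

   $ python3 -m pip install openvino-dev[tensorflow2]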

Then, the tools can be used by commands like:

.. code-block:: sh

   $ mo -h
   $ pot -h

Installation of any other dependencies is not required. For more details on the installation steps, see the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` guide.

Interface Changes for Building C/C++ Applications
#################################################

The new OpenVINO Runtime with its API 2.0 has also brought some changes for building C/C++ applications.

CMake Interface
++++++++++++++++++++

The CMake interface has been changed as follows:

**With Inference Engine of previous versions**:

.. code-block:: cmake

   find_package(InferenceEngine REQUIRED)
   find_package(ngraph REQUIRED)
   add_executable(ie_ngraph_app main.cpp)
   target_link_libraries(ie_ngraph_app PRIVATE ${InferenceEngine_LIBRARIES} ${NGRAPH_LIBRARIES})

**With OpenVINO Runtime 2022.1 (API 2.0)**:

.. code-block:: cmake

   find_package(OpenVINO REQUIRED)
   add_executable(ov_app main.cpp)
   target_link_libraries(ov_app PRIVATE openvino::runtime)

   add_executable(ov_c_app main.c)
   target_link_libraries(ov_c_app PRIVATE openvino::runtime::c)

Native Interfaces
++++++++++++++++++++

It is possible to build applications without the CMake interface, using the MSVC IDE, UNIX makefiles, or any other build method. The required paths and libraries have changed as shown here:

**With Inference Engine of previous versions**:

.. tab:: Include dirs

   .. code-block:: sh

      <INSTALL_DIR>/deployment_tools/inference_engine/include
      <INSTALL_DIR>/deployment_tools/ngraph/include

.. tab:: Path to libs

   .. code-block:: sh

      <INSTALL_DIR>/deployment_tools/inference_engine/lib/intel64/Release
      <INSTALL_DIR>/deployment_tools/ngraph/lib/

.. tab:: Shared libs

   .. code-block:: sh

      // UNIX systems
      inference_engine.so ngraph.so

      // Windows
      inference_engine.dll ngraph.dll

.. tab:: (Windows) .lib files

   .. code-block:: sh

      ngraph.lib
      inference_engine.lib

**With OpenVINO Runtime 2022.1 (API 2.0)**:

.. tab:: Include dirs

   .. code-block:: sh

      <INSTALL_DIR>/runtime/include

.. tab:: Path to libs

   .. code-block:: sh

      <INSTALL_DIR>/runtime/lib/intel64/Release

.. tab:: Shared libs

   .. code-block:: sh

      // UNIX systems
      openvino.so

      // Windows
      openvino.dll

.. tab:: (Windows) .lib files

   .. code-block:: sh

      openvino.lib

Clearer Library Structure for Deployment
########################################

OpenVINO 2022.1 introduced a reorganization of the libraries to make deployment easier. In previous versions, it was required to use several libraries to perform deployment steps. Now you can just use ``openvino`` or ``openvino_c``, depending on your programming language, together with the necessary plugins to complete your task. For example, the ``openvino_intel_cpu_plugin`` and ``openvino_ir_frontend`` plugins enable loading OpenVINO IRs and performing inference on the CPU device (for more details, see the :doc:`Local distribution with OpenVINO <openvino_docs_deploy_local_distribution>`).
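
For that example, a local distribution on UNIX could ship as few as the following libraries (an illustrative set following the naming used in this guide; the exact file names and the full list of dependencies per platform are covered in the local distribution guide linked above):

.. code-block:: sh

   openvino.so                    # core runtime
   openvino_intel_cpu_plugin.so   # inference on the CPU device
   openvino_ir_frontend.so        # reading OpenVINO IR files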
Below are detailed comparisons of the library structure between OpenVINO 2022.1 and the previous versions:

* Starting with the 2022.1 release, a single core library with all the functionalities (``openvino`` for C++ Runtime, ``openvino_c`` for Inference Engine API C interface) is used, instead of the previous core libraries which contained ``inference_engine``, ``ngraph``, ``inference_engine_transformations`` and ``inference_engine_lp_transformations``.
* The optional ``inference_engine_preproc`` preprocessing library (if `InferenceEngine::PreProcessInfo::setColorFormat <classInferenceEngine_1_1PreProcessInfo.html#doxid-class-inference-engine-1-1-pre-process-info-1a3a10ba0d562a2268fe584d4d2db94cac>`__ or `InferenceEngine::PreProcessInfo::setResizeAlgorithm <classInferenceEngine_1_1PreProcessInfo.html#doxid-class-inference-engine-1-1-pre-process-info-1a0c083c43d01c53c327f09095e3e3f004>`__ is used) has been renamed to ``openvino_gapi_preproc`` and deprecated in 2022.1. For more details, see the :doc:`Preprocessing capabilities of OpenVINO API 2.0 <openvino_2_0_preprocessing>`.
* The plugin libraries have been renamed as follows:

  * ``openvino_intel_cpu_plugin`` is used for the :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` device instead of ``MKLDNNPlugin``.
  * ``openvino_intel_gpu_plugin`` is used for the :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` device instead of ``clDNNPlugin``.
  * ``openvino_auto_plugin`` is used for the :doc:`Auto-Device Plugin <openvino_docs_OV_UG_supported_plugins_AUTO>`.

* The plugins for reading and converting models have been changed as follows:

  * ``openvino_ir_frontend`` is used to read IRs instead of ``inference_engine_ir_reader``.
  * ``openvino_onnx_frontend`` is used to read ONNX models instead of ``inference_engine_onnx_reader`` (with its dependencies).
  * ``openvino_paddle_frontend`` is added in 2022.1 to read PaddlePaddle models.
@endsphinxdirective

   openvino_2_0_preprocessing
   openvino_2_0_model_creation

This guide introduces the new OpenVINO™ API: API 2.0, as well as the new OpenVINO IR model format: IR v11. Here, you will find comparisons of their "old" and "new" versions.

Introduction of API 2.0
#######################

Versions of OpenVINO prior to 2022.1 required changes in the application logic when migrating an app from other frameworks, such as TensorFlow, ONNX Runtime, PyTorch, PaddlePaddle, etc. The changes were required because:

- Model Optimizer changed input precisions for some inputs. For example, natural language processing models with ``I64`` inputs were changed to include ``I32`` ones.
- Model Optimizer changed layouts for TensorFlow models (see the :doc:`Layouts in OpenVINO <openvino_docs_OV_UG_Layout_Overview>`). This led to the unusual requirement of using input data with a layout different from that of the framework:

  .. image:: _static/images/tf_openvino.svg
     :alt: tf_openvino

- Inference Engine API (`InferenceEngine::CNNNetwork <classInferenceEngine_1_1CNNNetwork.html#doxid-class-inference-engine-1-1-c-n-n-network>`__) applied some conversion rules for input and output precisions due to limitations in device plugins.
- Users needed to specify input shapes during model conversion in Model Optimizer, and work with static shapes in the application.

OpenVINO™ 2022.1 introduced API 2.0 (also called OpenVINO API v2) to align the logic of working with models with how it is done in the original frameworks: no layout and precision changes, and inputs and outputs addressed by tensor names and indices. OpenVINO Runtime combines the Inference Engine API, used for inference, and the nGraph API, targeted at working with models and operations. API 2.0 offers a common structure, consistent naming conventions and namespaces, and removes duplicated structures. For more details, see the :doc:`Changes to Inference Pipeline in OpenVINO API v2 <openvino_2_0_inference_pipeline>`.

.. note::

   Your existing applications will continue to work with OpenVINO Runtime 2022.1 as before. However, migration to API 2.0 is strongly recommended, as it will allow you to use additional features, such as :doc:`Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>` and :doc:`Dynamic shapes support <openvino_docs_OV_UG_DynamicShapes>`.

The New OpenVINO IR v11
#######################

To support these features, OpenVINO has introduced OpenVINO IR v11, which is now the default version for Model Optimizer. The model represented in OpenVINO IR v11 fully matches the original model in the original framework format in terms of inputs and outputs. It is also not required to specify input shapes during conversion, which results in OpenVINO IR v11 containing ``-1`` to denote undefined dimensions. For more details on how to fully utilize this feature, see :doc:`Working with dynamic shapes <openvino_docs_OV_UG_DynamicShapes>`. For information on how to reshape to static shapes in the application, see :doc:`Changing input shapes <openvino_docs_OV_UG_ShapeInference>`.
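
For example, in the conversions below, the first command keeps the original (possibly dynamic) dimensions and stores them as ``-1`` in the IR, while the second pins them to static values (a sketch assuming an ONNX file named ``model.onnx``):

.. code-block:: sh

   # keep the original dynamic dimensions
   $ mo --input_model model.onnx

   # or pin the input to a static shape at conversion time
   $ mo --input_model model.onnx --input_shape [1,3,224,224]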
OpenVINO IR v11 is fully compatible with applications written with the Inference Engine API of older OpenVINO versions. This backward compatibility is possible thanks to additional runtime information included in OpenVINO IR v11. This means that when OpenVINO IR v11 is read by an application based on Inference Engine, it is internally converted to OpenVINO IR v10.

OpenVINO IR v11 is supported by all OpenVINO Development Tools, including the Post-Training Optimization Tool, Benchmark app, etc.

Backward Compatibility for OpenVINO IR v10
##########################################

API 2.0 also supports backward compatibility for models of OpenVINO IR v10. If you have OpenVINO IR v10 files, they can also be fed to OpenVINO Runtime. For more details, see the :doc:`migration steps <openvino_2_0_inference_pipeline>`.

Some of the OpenVINO Development Tools also support both OpenVINO IR v10 and v11 as an input:

- Accuracy Checker uses API 2.0 for model accuracy measurement by default. It also supports switching to the old API by using the ``--use_new_api False`` command-line parameter. Both launchers accept OpenVINO IR v10 and v11, but in some cases configuration files should be updated. For more details, see the `Accuracy Checker documentation <https://github.com/openvinotoolkit/open_model_zoo/blob/master/tools/accuracy_checker/openvino/tools/accuracy_checker/launcher/openvino_launcher_readme.md>`__.
- :doc:`Compile tool <openvino_inference_engine_tools_compile_tool_README>` compiles the model to be used in API 2.0 by default. To use the resulting compiled blob under the Inference Engine API, the additional ``ov_api_1_0`` option should be passed, as shown in the sketch after this list.
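
A minimal sketch of that option (verify the exact spelling with ``compile_tool -h`` for your version):

.. code-block:: sh

   $ compile_tool -m model.xml -d CPU -ov_api_1_0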

However, the Post-Training Optimization Tool of OpenVINO 2022.1 does not support OpenVINO IR v10. It requires the latest version of Model Optimizer to generate OpenVINO IR v11 files.

.. note::

   To quantize your OpenVINO IR v10 models to run with OpenVINO 2022.1, download and use Post-Training Optimization Tool of OpenVINO 2021.4.

.. _differences_api20_ie:

Differences in API 2.0 and Inference Engine API Behaviors
#########################################################

Inference Engine and nGraph APIs are not deprecated with the introduction of the new API, and they can still be used in applications. However, it is highly recommended to migrate to API 2.0, as it offers more features (to be further extended in future releases), such as:

- :doc:`Working with dynamic shapes <openvino_docs_OV_UG_DynamicShapes>`, which increases performance when working with compatible models such as NLP (Natural Language Processing) and super-resolution models.
- :doc:`Preprocessing of the model <openvino_docs_OV_UG_Preprocessing_Overview>`, which adds preprocessing operations to inference models, fully occupying the accelerator and freeing CPU resources.

To understand the differences between Inference Engine API and API 2.0, see the definitions of the two types of behavior first:

- **Old behavior** of OpenVINO assumes that:

  - Model Optimizer can change input element types and the order of dimensions (layouts) for the model from the original framework.
  - Inference Engine can override input and output element types.
  - Inference Engine API uses operation names to address inputs and outputs (e.g. `InferenceEngine::InferRequest::GetBlob <classInferenceEngine_1_1InferRequest.html#doxid-class-inference-engine-1-1-infer-request-1a9601a4cda3f309181af34feedf1b914c>`__).
  - Inference Engine API does not support compiling models with dynamic input shapes.

- **New behavior** implemented in 2022.1 assumes full model alignment with the framework:

  - Model Optimizer preserves input element types and the order of dimensions (layouts), and stores tensor names from the original models.
  - OpenVINO Runtime 2022.1 reads models in any supported format (OpenVINO IR v10, OpenVINO IR v11, TensorFlow (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), ONNX, PaddlePaddle, etc.).
  - API 2.0 uses tensor names for addressing, which is the standard approach among the compatible model frameworks.
  - API 2.0 can also address input and output tensors by index. Some model formats like ONNX are sensitive to the input and output order, which is preserved by OpenVINO 2022.1.

The table below demonstrates which behavior, **old** or **new**, is used for models based on the two APIs.

+--------------------------------+-----------------+-----------------+------------+------------------------+
| API                            | OpenVINO IR v10 | OpenVINO IR v11 | ONNX Files | Models Created in Code |
+================================+=================+=================+============+========================+
| Inference Engine / nGraph APIs | Old             | Old             | Old        | Old                    |
+--------------------------------+-----------------+-----------------+------------+------------------------+
| API 2.0                        | Old             | New             | New        | New                    |
+--------------------------------+-----------------+-----------------+------------+------------------------+

More Information
####################

See the following pages to understand how to migrate Inference Engine-based applications to API 2.0:

- :doc:`Installation & Deployment <openvino_2_0_deployment>`
- :doc:`OpenVINO™ Common Inference pipeline <openvino_2_0_inference_pipeline>`
- :doc:`Preprocess your model <openvino_2_0_preprocessing>`
- :doc:`Configure device <openvino_2_0_configure_devices>`
- :doc:`OpenVINO™ Model Creation <openvino_2_0_model_creation>`

@endsphinxdirective