[DOCS]continue_language_review-transitionguide (#11177)
PR for 22.1 made, now porting to release... some discrepancy between this version and the 22.1 branch seems to exist, so I adjusted the conflicting link to avoid build check errors... the overview has been merged, the remaining articles are reviewed here
@@ -1,10 +1,10 @@
# Inference Pipeline {#openvino_2_0_inference_pipeline}

Usually to infer models with OpenVINO™ Runtime, you need to do the following steps in the application pipeline:
Usually, to infer models with OpenVINO™ Runtime, you need to take the following steps in the application pipeline:
- 1. Create Core object
- 1.1. (Optional) Load extensions
- 2. Read model from the disk
- 2.1. (Optional) Model preprocessing
- 2. Read a model from a drive
- 2.1. (Optional) Perform model preprocessing
- 3. Load the model to the device
- 4. Create an inference request
- 5. Fill input tensors with data
@@ -45,7 +45,7 @@ OpenVINO™ Runtime API 2.0:

### 1.1 (Optional) Load extensions

To load model with custom operation, you need to add extensions for these operations. We highly recommend to use [OpenVINO Extensibility API](../../Extensibility_UG/Intro.md) to write extensions, but if you already have old extensions you can load it to new OpenVINO™ Runtime:
To load a model with custom operations, you need to add extensions for these operations. We highly recommend using [OpenVINO Extensibility API](../../Extensibility_UG/Intro.md) to write extensions, but if you already have old extensions you can also load them to the new OpenVINO™ Runtime:

Inference Engine API:

@@ -75,7 +75,7 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset

## 2. Read model from the disk
## 2. Read a model from a drive

Inference Engine API:

@@ -109,10 +109,10 @@ Read model has the same structure as in the example from [Model Creation](./grap

Note, you can combine read and compile model stages into a single call `ov::Core::compile_model(filename, devicename)`.

### 2.1 (Optional) Model preprocessing
### 2.1 (Optional) Perform model preprocessing

When application's input data doesn't perfectly match with model's input format, preprocessing steps may need to be added.
See detailed guide [how to migrate preprocessing in OpenVINO Runtime API 2.0](./preprocessing.md)
When the application's input data doesn't perfectly match the model's input format, preprocessing steps may be necessary.
See a detailed guide on [how to migrate preprocessing in OpenVINO Runtime API 2.0](./preprocessing.md)

## 3. Load the Model to the Device

@@ -144,7 +144,7 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset

If you need to configure OpenVINO Runtime devices with additional configuration parameters, please, refer to the migration [Configure devices](./configure_devices.md) guide.
If you need to configure OpenVINO Runtime devices with additional configuration parameters, refer to the [Configure devices](./configure_devices.md) guide.

## 4. Create an Inference Request

@@ -178,7 +178,7 @@ OpenVINO™ Runtime API 2.0:

## 5. Fill input tensors

Inference Engine API fills inputs as `I32` precision (**not** aligned with the original model):
The Inference Engine API fills inputs as `I32` precision (**not** aligned with the original model):

@sphinxtabset

@@ -398,7 +398,7 @@ OpenVINO™ Runtime API 2.0:

## 7. Process the Inference Results

Inference Engine API processes outputs as `I32` precision (**not** aligned with the original model):
The Inference Engine API processes outputs as `I32` precision (**not** aligned with the original model):

@sphinxtabset

@@ -469,8 +469,8 @@ Inference Engine API processes outputs as `I32` precision (**not** aligned with
@endsphinxtabset

OpenVINO™ Runtime API 2.0 processes outputs:
- For IR v10 as `I32` precision (**not** aligned with the original model) to match **old** behavior
- For IR v11, ONNX, ov::Model, Paddle as `I64` precision (aligned with the original model) to match **new** behavior
- For IR v10 as `I32` precision (**not** aligned with the original model) to match the **old** behavior.
- For IR v11, ONNX, ov::Model, Paddle as `I64` precision (aligned with the original model) to match the **new** behavior.

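Why the `I32`/`I64` distinction matters can be shown with a small stdlib-only sketch (a conceptual illustration, not the OpenVINO API): values that fit in a 64-bit output wrap around when stored as 32-bit.

```python
import ctypes

def to_i32(value: int) -> int:
    # Reinterpret an integer as a 32-bit signed value,
    # the way an I32 output tensor would store it.
    return ctypes.c_int32(value).value

big = 2**31            # fits in I64, overflows I32
print(to_i32(big))     # wraps around to -2147483648
print(to_i32(1000))    # small values are unaffected: 1000
```

This is why matching the original model's precision (`I64` for IR v11 and newer frontends) is the safer default.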
@sphinxtabset

@@ -2,9 +2,9 @@

### Introduction

Inference Engine API provides an [ability to configure devices](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html) via configuration keys and [get device specific metrics](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html#getmetric). The values taken from `InferenceEngine::Core::GetConfig` are requested by its string name, while return type is `InferenceEngine::Parameter` and users don't know what is the actual type is stored in this parameter.
The Inference Engine API provides the [ability to configure devices](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html) via configuration keys and [get device specific metrics](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html#getmetric). The values taken from `InferenceEngine::Core::GetConfig` are requested by the string name, while the return type is `InferenceEngine::Parameter`, leaving users unaware of the actual type stored in this parameter.

OpenVINO Runtime API 2.0 solves these issues by introducing [properties](../supported_plugins/config_properties.md), which unify metrics and configuration key concepts, but the main advantage of properties - they have C++ type:
The OpenVINO Runtime API 2.0 solves these issues by introducing [properties](../supported_plugins/config_properties.md), which unify metrics and configuration key concepts. Their main advantage is that they have a C++ type:

```
static constexpr Property<std::string> full_name{"FULL_DEVICE_NAME"};
```
@@ -14,7 +14,7 @@ And the property can be requested from an inference device as:

@snippet ov_properties_migration.cpp core_get_ro_property

The snippets below show how to migrate from Inference Engine device configuration to OpenVINO Runtime API 2.0 steps.
The snippets below show how to migrate from an Inference Engine device configuration to OpenVINO Runtime API 2.0 steps.

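The benefit of typed property keys over stringly-typed parameters can be sketched in plain Python (a conceptual analogy only, not the OpenVINO API; the `Property` class and `DeviceProps` store below are hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Property(Generic[T]):
    # A key that carries its value type, like ov::Property<T> in C++.
    name: str
    type: type

FULL_NAME = Property("FULL_DEVICE_NAME", str)

class DeviceProps:
    def __init__(self, raw: dict):
        self._raw = raw  # untyped storage, like InferenceEngine::Parameter

    def get(self, prop: Property) -> Any:
        # The key itself tells us which type to expect back.
        value = self._raw[prop.name]
        if not isinstance(value, prop.type):
            raise TypeError(f"{prop.name} is not a {prop.type.__name__}")
        return value

props = DeviceProps({"FULL_DEVICE_NAME": "Intel CPU"})
print(props.get(FULL_NAME))  # the caller knows the result is a str
```

With the string-keyed `GetConfig`, the caller has to guess the stored type; with typed keys, a mismatch is caught immediately.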
### Set configuration values

@@ -1,29 +1,29 @@
# Installation & Deployment {#openvino_2_0_deployment}

"Easy to use" is one of the main concepts for OpenVINO™ API 2.0. It includes not only simplifying the migration from frameworks to OpenVINO, but also how OpenVINO is organized, how the development tools are used, and how to develop and deploy OpenVINO-based applications.
"Easy to use" is one of the main concepts for OpenVINO™ API 2.0. It is about simplifying migration from different frameworks to OpenVINO, but also touches on how OpenVINO is organized, how its development tools are used, and how OpenVINO-based applications are developed and deployed.

To accomplish that, we have made some changes on the installation and deployment of OpenVINO in the 2022.1 release. This guide will walk you through them.
To accomplish that, we made some changes to the installation and deployment processes of OpenVINO in the 2022.1 release. This guide will walk you through them.

## Installer Package Contains OpenVINO™ Runtime Only
## The Installer Package Contains OpenVINO™ Runtime Only

Starting from OpenVINO 2022.1, Model Optimizer, Post-Training Optimization tool and Python-based Development tools such as Open Model Zoo tools are distributed via [PyPI](https://pypi.org/project/openvino-dev/) only, and are no longer included in the OpenVINO installer package. This change has several benefits as it:
Starting from OpenVINO 2022.1, development tools are distributed via [PyPI](https://pypi.org/project/openvino-dev/) only and are no longer included in the OpenVINO installer package. For a list of these components, refer to the [installation overview](../../../install_guides/installing-openvino-overview.md). This approach has several benefits:

* Simplifies the user experience. In previous versions, the installation and usage of OpenVINO Development Tools differ according to the distribution type (via an OpenVINO installer or PyPI).
* Ensures that dependencies are handled properly via the PIP package manager and support virtual environments of development tools.
* simplifies the user experience - in previous versions, installation and usage of OpenVINO Development Tools differed from one distribution type to another (the OpenVINO installer vs. PyPI),
* ensures that dependencies are handled properly via the PIP package manager and supports virtual environments of development tools.

The structure of OpenVINO 2022.1 installer package has been organized as below:
The structure of the OpenVINO 2022.1 installer package has been organized as follows:

- The `runtime` folder includes headers, libraries and CMake interfaces.
- The `tools` folder contains [the compile tool](../../../tools/compile_tool/README.md), [deployment manager](../../OV_Runtime_UG/deployment/deployment-manager-tool.md) and a set of `requirements.txt` files with links to the corresponding versions of the `openvino-dev` package.
- The `tools` folder contains [the compile tool](../../../tools/compile_tool/README.md), [deployment manager](../../OV_Runtime_UG/deployment/deployment-manager-tool.md), and a set of `requirements.txt` files with links to the corresponding versions of the `openvino-dev` package.
- The `python` folder contains the Python version for OpenVINO Runtime.

## Installing OpenVINO Development Tools via PyPI

Since OpenVINO Development Tools is no longer in the installer package, the installation process has changed too. This section describes it through a comparison with previous versions.
Since OpenVINO Development Tools is no longer in the installer package, the installation process has also changed. This section describes it through a comparison with previous versions.

### For Versions Prior to 2022.1

In previous versions, OpenVINO Development Tools is a part of main package. After the package is installed, to convert models (for example, TensorFlow), you need to install additional dependencies by using the requirements files such as `requirements_tf.txt`, install Post-Training Optimization tool and Accuracy Checker tool via the `setup.py` scripts, and then use the `setupvars` scripts to make the tools available to the following command:
In previous versions, OpenVINO Development Tools was a part of the main package. After the package was installed, to convert models (for example, TensorFlow), you needed to install additional dependencies by using the requirement files, such as `requirements_tf.txt`, install the Post-Training Optimization tool and the Accuracy Checker tool via the `setup.py` scripts, and then use the `setupvars` scripts to make the tools available to the following command:

```sh
$ mo.py -h
```
@@ -31,13 +31,13 @@ $ mo.py -h

### For 2022.1 and After

Starting from OpenVINO 2022.1, you can install the development tools from [PyPI](https://pypi.org/project/openvino-dev/) repository only, using the following command (taking TensorFlow as an example):
In OpenVINO 2022.1 and later, you can install the development tools only from the [PyPI](https://pypi.org/project/openvino-dev/) repository, using the following command (taking TensorFlow as an example):

```sh
$ python3 -m pip install -r <INSTALL_DIR>/tools/requirements_tf.txt
```

This will install all the development tools and additional necessary components to work with TensorFlow via the `openvino-dev` package (see **Step 4. Install the Package** on the [PyPI page](https://pypi.org/project/openvino-dev/) for parameters of other frameworks).
This will install all the development tools and additional components necessary to work with TensorFlow via the `openvino-dev` package (see **Step 4. Install the Package** on the [PyPI page](https://pypi.org/project/openvino-dev/) for parameters of other frameworks).

Then, the tools can be used by commands like:

@@ -50,11 +50,11 @@ You don't have to install any other dependencies. For more details on the instal

## Interface Changes for Building C/C++ Applications

The new OpenVINO Runtime with API 2.0 has also brought some changes for builiding your C/C++ applications.
The new OpenVINO Runtime with its API 2.0 has also brought some changes for building C/C++ applications.

### CMake Interface

The CMake interface has been changed as below:
The CMake interface has been changed as follows:

**With Inference Engine of previous versions**:

@@ -78,7 +78,7 @@ target_link_libraries(ov_c_app PRIVATE openvino::runtime::c)

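The new CMake interface can be sketched as follows (the target names `openvino::runtime` and `openvino::runtime::c` come from the hunk above; the project name `ov_app` and source file are illustrative):

```cmake
cmake_minimum_required(VERSION 3.13)
project(ov_app)

# New interface in 2022.1: a single OpenVINO package with namespaced targets.
find_package(OpenVINO REQUIRED)

add_executable(ov_app main.cpp)
# C++ applications link against openvino::runtime;
# C applications link against openvino::runtime::c instead.
target_link_libraries(ov_app PRIVATE openvino::runtime)
```

Compared to the previous `find_package(InferenceEngine)`, a single `find_package(OpenVINO)` call covers both the runtime and its C interface.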
### Native Interfaces

To build applications without CMake interface, you can also use MSVC IDE, UNIX makefiles and any other interfaces, which have been changed as below:
To build applications without the CMake interface, you can also use MSVC IDE, UNIX makefiles, and any other interface, which has been changed as shown here:

**With Inference Engine of previous versions**:

@@ -153,19 +153,19 @@ To build applications without CMake interface, you can also use MSVC IDE, UNIX m

## Clearer Library Structure for Deployment

OpenVINO 2022.1 has reorganized the libraries to make it easier for deployment. In previous versions, to perform deployment steps, you have to use several libraries. Now you can just use `openvino` or `openvino_c` based on your developing language plus necessary plugins to complete your task. For example, `openvino_intel_cpu_plugin` and `openvino_ir_frontend` plugins will enable you to load OpenVINO IRs and perform inference on CPU device (see [Local distribution with OpenVINO](../deployment/local-distribution.md) for more details).
OpenVINO 2022.1 introduced a reorganization of the libraries to make deployment easier. In previous versions, to perform deployment steps, you had to use several libraries. Now you can just use `openvino` or `openvino_c`, based on your development language, together with the necessary plugins to complete your task. For example, the `openvino_intel_cpu_plugin` and `openvino_ir_frontend` plugins will enable you to load OpenVINO IRs and perform inference on the CPU device (see [Local distribution with OpenVINO](../deployment/local-distribution.md) for more details).

Here you can find some detailed comparisons on library structure between OpenVINO 2022.1 and previous versions:
Here you can find detailed comparisons of the library structure between OpenVINO 2022.1 and the previous versions:

* A single core library with all the functionalities (`openvino` for C++ Runtime, `openvino_c` for Inference Engine API C interface) is used in 2022.1, instead of the previous core libraries which contain `inference_engine`, `ngraph`, `inference_engine_transformations` and `inference_engine_lp_transformations`.
* The optional `inference_engine_preproc` preprocessing library (if `InferenceEngine::PreProcessInfo::setColorFormat` or `InferenceEngine::PreProcessInfo::setResizeAlgorithm` is used) is renamed as `openvino_gapi_preproc` and deprecated in 2022.1. See more details on [Preprocessing capabilities of OpenVINO API 2.0](preprocessing.md).
* The libraries of plugins are renamed as below:
* `openvino_intel_cpu_plugin` is used for [CPU](../supported_plugins/CPU.md) device instead of `MKLDNNPlugin` in previous versions.
* `openvino_intel_gpu_plugin` is used for [GPU](../supported_plugins/GPU.md) device instead of `clDNNPlugin` in previous versions.
* `openvino_auto_plugin` is used for [Auto-Device Plugin](../auto_device_selection.md) in 2022.1.
* The plugins for reading and converting models have been changed as below:
* `openvino_ir_frontend` is used to read IRs instead of `inference_engine_ir_reader` in previous versions.
* `openvino_onnx_frontend` is used to read ONNX models instead of `inference_engine_onnx_reader` (with its dependencies) in previous versions.
* A single core library with all the functionalities (`openvino` for C++ Runtime, `openvino_c` for Inference Engine API C interface) is used in 2022.1, instead of the previous core libraries which contained `inference_engine`, `ngraph`, `inference_engine_transformations` and `inference_engine_lp_transformations`.
* The optional `inference_engine_preproc` preprocessing library (if `InferenceEngine::PreProcessInfo::setColorFormat` or `InferenceEngine::PreProcessInfo::setResizeAlgorithm` is used) has been renamed to `openvino_gapi_preproc` and deprecated in 2022.1. See more details on [Preprocessing capabilities of OpenVINO API 2.0](preprocessing.md).
* The libraries of plugins have been renamed as follows:
* `openvino_intel_cpu_plugin` is used for the [CPU](../supported_plugins/CPU.md) device instead of `MKLDNNPlugin`.
* `openvino_intel_gpu_plugin` is used for the [GPU](../supported_plugins/GPU.md) device instead of `clDNNPlugin`.
* `openvino_auto_plugin` is used for the [Auto-Device Plugin](../auto_device_selection.md).
* The plugins for reading and converting models have been changed as follows:
* `openvino_ir_frontend` is used to read IRs instead of `inference_engine_ir_reader`.
* `openvino_onnx_frontend` is used to read ONNX models instead of `inference_engine_onnx_reader` (with its dependencies).
* `openvino_paddle_frontend` is added in 2022.1 to read PaddlePaddle models.

<!-----

@@ -1,7 +1,7 @@
# Model Creation in Runtime {#openvino_2_0_model_creation}

OpenVINO™ Runtime API 2.0 includes nGraph engine as a common part. The `ngraph` namespace was changed to `ov`, all other ngraph API is preserved as is.
Code snippets below show how application code should be changed for migration to OpenVINO™ Runtime API 2.0.
OpenVINO™ Runtime API 2.0 includes the nGraph engine as a common part. The `ngraph` namespace has been changed to `ov`, but all other parts of the nGraph API have been preserved.
The code snippets below show how to change application code for migration to OpenVINO™ Runtime API 2.0.

### nGraph API

@@ -2,26 +2,26 @@

### Introduction

Inference Engine API has preprocessing capabilities in `InferenceEngine::CNNNetwork` class. Such preprocessing information is not a part of the main inference graph executed by the [OpenVINO devices](../supported_plugins/Device_Plugins.md), so it is stored and executed separately before an inference stage:
- Preprocessing operations are executed on CPU processor for most of the OpenVINO inference plugins. So, instead of occupying of acceleators, CPU processor is also busy with computational tasks.
- Preprocessing information stored in `InferenceEngine::CNNNetwork` is lost during saving back to IR file format.
The Inference Engine API contains preprocessing capabilities in the `InferenceEngine::CNNNetwork` class. Such preprocessing information is not part of the main inference graph executed by [OpenVINO devices](../supported_plugins/Device_Plugins.md), so it is stored and executed separately before the inference stage.
- Preprocessing operations are executed on the CPU for most OpenVINO inference plugins. So, instead of occupying accelerators, they keep the CPU busy with computational tasks.
- Preprocessing information stored in `InferenceEngine::CNNNetwork` is lost when saving back to the IR file format.

OpenVINO Runtime API 2.0 introduces [new way of adding preprocessing operations to the model](../preprocessing_overview.md) - each preprocessing or postprocessing operation is integrated directly to the model and compiled together with inference graph:
OpenVINO Runtime API 2.0 introduces a [new way of adding preprocessing operations to the model](../preprocessing_overview.md) - each preprocessing or postprocessing operation is integrated directly into the model and compiled together with the inference graph.
- Add preprocessing operations first using `ov::preprocess::PrePostProcessor`
- Compile model on the target then using `ov::Core::compile_model`
- Then, compile the model on the target using `ov::Core::compile_model`

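The idea of baking preprocessing into the compiled model can be sketched in plain Python (a conceptual analogy only; the function names below are hypothetical, not the `ov::preprocess` API):

```python
def compile_with_preprocessing(preprocess, model):
    # Conceptually, PrePostProcessor fuses the preprocessing steps
    # into the graph, so the compiled model runs them as one unit.
    def compiled(raw_input):
        return model(preprocess(raw_input))
    return compiled

# Hypothetical stand-ins: scale pixel values, then "infer" by summing.
scale = lambda xs: [x / 255.0 for x in xs]
toy_model = lambda xs: sum(xs)

compiled = compile_with_preprocessing(scale, toy_model)
print(compiled([255, 255]))  # 2.0 - preprocessing ran inside the compiled call
```

The caller no longer invokes preprocessing separately on the CPU; it executes as part of the compiled graph on the target device.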
Having preprocessing operations as a part of OpenVINO opset allows to read and serialize preprocessed model as the IR file format.
Having preprocessing operations as part of an OpenVINO opset makes it possible to read and serialize a preprocessed model in the IR file format.

It's also important to mention that since OpenVINO 2.0, the Runtime API does not assume any default layouts like Inference Engine did, for example both `{ 1, 224, 224, 3 }` and `{ 1, 3, 224, 224 }` shapes are supposed to have `NCHW` layout while only the last shape has `NCHW`. So, some preprocessing capabilities in OpenVINO Runtime API 2.0 requires explicitly set layouts, see [Layout overview](../layout_overview.md) how to do it. For example, to perform image scaling by partial dimensions `H` and `W`, preprocessing needs to know what dimensions are `H` and `W`.
It is also important to mention that the OpenVINO Runtime API 2.0 does not assume any default layouts, like Inference Engine did. For example, both `{ 1, 224, 224, 3 }` and `{ 1, 3, 224, 224 }` shapes are supposed to be in the `NCHW` layout, while only the latter one is. So, some preprocessing capabilities in the API require layouts to be set explicitly. To learn how to do it, refer to [Layout overview](../layout_overview.md). For example, to perform image scaling by partial dimensions `H` and `W`, preprocessing needs to know which dimensions are `H` and `W`.

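Why the layout must be set explicitly can be shown with a small sketch (plain Python with a hypothetical helper, not the OpenVINO layout API): the same shape yields different `H`/`W` values depending on the declared layout.

```python
def find_hw(shape, layout):
    # Pair each layout letter (N, C, H, W) with the dimension at that position.
    dims = dict(zip(layout, shape))
    return dims["H"], dims["W"]

# The same 4-D shape means different things under different layouts:
print(find_hw((1, 3, 224, 224), "NCHW"))  # (224, 224)
print(find_hw((1, 224, 224, 3), "NHWC"))  # (224, 224)
print(find_hw((1, 3, 224, 224), "NHWC"))  # (3, 224) - a wrong layout guess picks the wrong dims
```

Without an explicit layout, the runtime cannot safely guess which axes to scale, which is exactly why the default-layout assumption was dropped.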
> **NOTE**: Use Model Optimizer preprocessing capabilities to insert and optimize preprocessing operations to the model. In this case you don't need to read model in runtime application and set preprocessing, you can use [model caching feature](../Model_caching_overview.md) to improve time to inference stage.
> **NOTE**: Use Model Optimizer preprocessing capabilities to insert preprocessing operations in your model for optimization. This way, the application does not need to read the model and set preprocessing repeatedly, and you can use the [model caching feature](../Model_caching_overview.md) to improve the time to inference.

The steps below demonstrates how to migrate preprocessing scenarios from Inference Engine API to OpenVINO Runtime API 2.0.
The snippets suppose we need to preprocess a model input with tensor name `tensor_name`, in Inferenece Engine API using operation names to address the data, it's called `operation_name`.
The steps below demonstrate how to migrate preprocessing scenarios from the Inference Engine API to the OpenVINO Runtime API 2.0.
The snippets assume we need to preprocess a model input with the tensor name `tensor_name`; in the Inference Engine API, which uses operation names to address the data, it is called `operation_name`.

#### Importing preprocessing in Python

In order to utilize preprocessing following imports must be added.
In order to utilize preprocessing, the following imports must be added.

Inference Engine API:

@@ -31,7 +31,7 @@ OpenVINO Runtime API 2.0:

@snippet docs/snippets/ov_preprocessing_migration.py ov_imports

There are two different namespaces `runtime`, which contains OpenVINO Runtime API classes and `preprocess` which provides Preprocessing API.
There are two different namespaces: `runtime`, which contains the OpenVINO Runtime API classes, and `preprocess`, which provides the Preprocessing API.

### Mean and scale values
