sync the same updates with 22/1 (#11071)

* Add Overview page

* Revert "Add Overview page"

* updates

* update

* updates
commit be9bbb676d
parent 782ef6b42e
Author: Yuan Xu
Date: 2022-03-21 17:11:39 +08:00 (committed by GitHub)

6 changed files with 11 additions and 13 deletions


@@ -1,6 +1,6 @@
 # Inference Pipeline {#openvino_2_0_inference_pipeline}
-Usually to inference model with the OpenVINO™ Runtime an user needs to do the following steps in the application pipeline:
+Usually to infer models with OpenVINO™ Runtime, you need to do the following steps in the application pipeline:
 - 1. Create Core object
 - 2. Read model from the disk
 - 2.1. (Optional) Model preprocessing
@@ -10,7 +10,7 @@ Usually to inference model with the OpenVINO™ Runtime an user needs to do the
 - 6. Start inference
 - 7. Process the inference results
-Code snippets below cover these steps and show how application code should be changed for migration to OpenVINO™ Runtime 2.0.
+The following code shows how to change the application code in each step to migrate to OpenVINO™ Runtime 2.0.
 ## 1. Create Core
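The numbered steps in this hunk map directly onto API 2.0 calls. A minimal sketch of the new pipeline in Python, assuming the `openvino` package is installed and `model_path` points to a real IR model (both are placeholders, not part of the commit):

```python
def run_pipeline(model_path: str, device: str = "CPU"):
    """Sketch of the API 2.0 steps listed above.

    `model_path` and `device` are placeholders; running this requires the
    `openvino` package and a model on disk.
    """
    from openvino.runtime import Core  # API 2.0 entry point

    core = Core()                                 # 1. Create Core object
    model = core.read_model(model_path)           # 2. Read model from the disk
    compiled = core.compile_model(model, device)  # Compile model for a device
    request = compiled.create_infer_request()     # Create inference request
    request.infer()                               # Start inference
    return request.get_output_tensor(0).data      # Process the inference results
```

The sketch defers the `openvino` import into the function body so it can be shown without the package installed.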


@@ -1,4 +1,4 @@
-# Configure devices {#openvino_2_0_configure_devices}
+# Configuring Devices {#openvino_2_0_configure_devices}
 ### Introduction


@@ -1,4 +1,4 @@
-# Model creation in runtime {#openvino_2_0_model_creation}
+# Model Creation in Runtime {#openvino_2_0_model_creation}
 OpenVINO™ Runtime API 2.0 includes nGraph engine as a common part. The `ngraph` namespace was changed to `ov`, all other ngraph API is preserved as is.
 Code snippets below show how application code should be changed for migration to OpenVINO™ Runtime API 2.0.


@@ -1,4 +1,4 @@
-# OpenVINO™ 2.0 Transition Guide {#openvino_2_0_transition_guide}
+# OpenVINO™ Transition Guide for API 2.0 {#openvino_2_0_transition_guide}
 @sphinxdirective
@@ -24,7 +24,7 @@ Older versions of OpenVINO™ (prior to 2022.1) required to change the logic of
 - Inference Engine API (`InferenceEngine::CNNNetwork`) also applied some conversion rules for input and output precisions because of device plugins limitations.
 - Users need to specify input shapes during model conversions in Model Optimizer and work with static shapes in the application.
-OpenVINO™ introduces API 2.0 to align logic of working with model as it is done in the frameworks - no layout and precision changes, operates with tensor names and indices to address inputs and outputs. OpenVINO Runtime is composed of Inference Engine API used for inference and nGraph API targeted to work with models, operations. The API 2.0 has common structure, naming convention styles, namespaces, removes duplicated structures. See [How to migrate to OpenVINO API 2.0](common_inference_pipeline.md) for details.
+OpenVINO™ introduces API 2.0 (also called OpenVINO API v2) to align the logic of working with model as it is done in the frameworks - no layout and precision changes, operates with tensor names and indices to address inputs and outputs. OpenVINO Runtime is composed of Inference Engine API used for inference and nGraph API targeted to work with models and operations. API 2.0 has common structure, naming convention styles, namespaces, and removes duplicated structures. See [Changes to Inference Pipeline in OpenVINO API v2](common_inference_pipeline.md) for details.
 > **NOTE**: Most important is that your existing application can continue working with OpenVINO Runtime 2022.1 as it used to be, but we recommend migration to API 2.0 to unlock additional features like [Preprocessing](../preprocessing_overview.md) and [Dynamic shapes support](../ov_dynamic_shapes.md).
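The changed paragraph above notes that API 2.0 addresses inputs and outputs by tensor names and indices. A hedged sketch of what that looks like in Python (`model_path` is a placeholder, and the `openvino` package is assumed to be installed):

```python
def show_io_addressing(model_path: str):
    """Illustrates addressing model inputs by index or by tensor name in
    API 2.0. `model_path` is a placeholder; requires the `openvino` package.
    """
    from openvino.runtime import Core  # API 2.0 namespace

    model = Core().read_model(model_path)
    first_input = model.input(0)       # address a port by index
    names = first_input.get_names()    # set of tensor names on this port
    # The same port can also be fetched by any of its tensor names:
    same_input = model.input(next(iter(names))) if names else first_input
    return first_input, same_input
```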
@@ -38,7 +38,7 @@ The IR v11 is supported by all OpenVINO Development tools including Post-Trainin
 ### IR v10 Compatibility
-OpenVINO API 2.0 also supports models in IR v10 for backward compatibility. So, if a user has an IR v10, it can be fed to OpenVINO Runtime as well (see [migration steps](common_inference_pipeline.md)).
+API 2.0 also supports models in IR v10 for backward compatibility. So, if a user has an IR v10, it can be fed to OpenVINO Runtime as well (see [migration steps](common_inference_pipeline.md)).
 Some OpenVINO Development Tools also support both IR v10 and IR v11 as an input:
 - Accuracy checker also supports IR v10, but requires an additional option to denote which API is used underneath.
@@ -52,8 +52,6 @@ The following OpenVINO tools don't support IR v10 as an input, and require to ge
-### Differences between Inference Engine and OpenVINO Runtime 2022.1
+### Differences between Inference Engine and OpenVINO Runtime 2.0
 Inference Engine and nGraph APIs are not deprecated, they are fully functional and can be used in applications. However, it's highly recommended to migrate to API 2.0, because it already has additional features and this list will be extended later. The following list of additional features is supported by API 2.0:
 - [Working with dynamic shapes](../ov_dynamic_shapes.md). The feature is quite useful for best performance for NLP (Neural Language Processing) models, super resolution models and other which accepts dynamic input shapes.
 - [Preprocessing of the model](../preprocessing_overview.md) to add preprocessing operations to the inference models and fully occupy the accelerator and free CPU resources.
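Dynamic shapes, the first API 2.0 feature in the list above, can be sketched in a few lines of Python. The 4D NCHW shape is an assumption for illustration, and `model_path` is a placeholder; the `openvino` package is required to actually run it:

```python
def make_batch_dynamic(model_path: str):
    """Sketch: mark the batch dimension as dynamic, one of the API 2.0
    features listed above. Assumes a single 4D NCHW input (illustrative);
    requires the `openvino` package and a real model at `model_path`.
    """
    from openvino.runtime import Core  # API 2.0 namespace

    model = Core().read_model(model_path)
    # -1 marks the batch dimension as dynamic; the remaining dimensions
    # are illustrative values for an image model.
    model.reshape([-1, 3, 224, 224])
    return model
```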


@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:caf1538ca8b64cbc243ab8e4a87b38a7eb071c2f19955fe881cd221807f485b7
-size 312545
+oid sha256:f5795ad0828f75cb660bea786b1aaa604ce442d7de23e461626212dc7c6cb139
+size 254


@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f1f8bc12837e03b1a2c1386c2bac512c21b1fb073d990379079c317488e9ce1c
-size 39015
+oid sha256:9c2d33cebe15397ef651521173832d3fed1733e465ec63f31db27c97329e9464
+size 253
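The last two files in the commit are Git LFS pointers rather than the binary assets themselves: each pointer is a short text stub of `key value` lines (`version`, `oid`, `size`). A minimal parsing sketch (the helper name is hypothetical), using one of the pointers above as the sample:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields.

    Each non-empty line has the form `key value`, e.g. `size 254`.
    """
    fields = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Sample: the new pointer from the first LFS hunk above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f5795ad0828f75cb660bea786b1aaa604ce442d7de23e461626212dc7c6cb139
size 254"""
```

For this pointer, `parse_lfs_pointer(pointer)["size"]` yields `"254"`, the size recorded in the diff.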