[TF FE] Add user guide about TF FE Capabilities and Limitations (#14622)

* [TF FE] Add user guide about TF FE Capabilities and Limitations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update docs/resources/tensorflow_frontend.md

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update docs/OV_Runtime_UG/deployment/local-distribution.md

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Roman Kazantsev 2022-12-14 17:41:00 +04:00 committed by GitHub
parent 2c20b9a111
commit 0cf95d26bf
8 changed files with 24 additions and 6 deletions

@@ -47,7 +47,8 @@ The granularity of OpenVINO packages may vary for different distribution types.
- The main library `openvino` is used by users' C++ applications to link against. The library provides all OpenVINO Runtime public APIs, including both API 2.0 and the previous Inference Engine and nGraph APIs. For C language applications, `openvino_c` is additionally required for distribution.
- The "optional" plugin libraries like `openvino_intel_cpu_plugin` (matching the `openvino_.+_plugin` pattern) are used to provide inference capabilities on specific devices or additional capabilities like [Hetero Execution](../hetero_execution.md) and [Multi-Device Execution](../multi_device.md).
- The "optional" plugin libraries like `openvino_ir_frontend` (matching `openvino_.+_frontend`) are used to provide capabilities to read models of different file formats such as OpenVINO IR, ONNX, and PaddlePaddle.
- The "optional" plugin libraries like `openvino_ir_frontend` (matching `openvino_.+_frontend`) are used to provide capabilities to read models of different file formats such as OpenVINO IR,
TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../../resources/tensorflow_frontend.md)), ONNX, and PaddlePaddle.
Here the term "optional" means that if the application does not use the capability enabled by the plugin, the plugin library or a package with the plugin is not needed in the final distribution.
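
For illustration, a minimal Python sketch (assuming the `openvino.frontend.FrontEndManager` binding available in recent releases) that checks which frontend libraries a given distribution actually provides:

```python
# A sketch: list the frontends discoverable in this OpenVINO installation.
# Frontends whose "optional" libraries were left out of the distribution
# simply do not appear in the result.
from openvino.frontend import FrontEndManager

fem = FrontEndManager()
# Typically prints names such as ['ir', 'onnx', 'paddle', 'tf'].
print(fem.get_available_front_ends())
```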

@@ -120,12 +120,13 @@ The `HETERO`, `MULTI`, `BATCH` and `AUTO` execution modes can also be used explicitly
OpenVINO Runtime uses frontend libraries dynamically to read models in different formats:
- `openvino_ir_frontend` is used to read OpenVINO IR.
- `openvino_tensorflow_frontend` is used to read TensorFlow file format. Check [TensorFlow Frontend Capabilities and Limitations](../../resources/tensorflow_frontend.md).
- `openvino_onnx_frontend` is used to read ONNX file format.
- `openvino_paddle_frontend` is used to read Paddle file format.
Depending on the model formats used by the application in `ov::Core::read_model`, pick the appropriate frontend libraries (see the sketch below).
> **NOTE**: To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using [Model Optimizer](../../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). This way you don't have to keep ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
> **NOTE**: To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using [Model Optimizer](../../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). This way you don't have to keep TensorFlow, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
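
A minimal Python sketch of this format-to-library mapping (placeholder model paths; `openvino.runtime` API 2.0):

```python
# A sketch: Core.read_model dispatches to the matching frontend library,
# so the final distribution only needs the frontends for the formats
# the application actually reads.
from openvino.runtime import Core

core = Core()
ir_model = core.read_model("model.xml")    # requires openvino_ir_frontend
onnx_model = core.read_model("model.onnx") # requires openvino_onnx_frontend
tf_model = core.read_model("model.pb")     # requires openvino_tensorflow_frontend
```
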
### (Legacy) Preprocessing via G-API

@@ -65,7 +65,7 @@ To understand the differences between Inference Engine API and API 2.0, see the
- Inference Engine API does not support compiling models with dynamic input shapes.
- **New behavior** implemented in 2022.1 assumes full model alignment with the framework:
- Model Optimizer preserves input element types and order of dimensions (layouts), and stores tensor names from the original models.
- OpenVINO Runtime 2022.1 reads models in any format (OpenVINO IR v10, OpenVINO IR v11, ONNX, PaddlePaddle, etc.).
- OpenVINO Runtime 2022.1 reads models in any format (OpenVINO IR v10, OpenVINO IR v11, TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../../resources/tensorflow_frontend.md)), ONNX, PaddlePaddle, etc.).
- API 2.0 uses tensor names for addressing, which is the standard approach among the compatible model frameworks.
- API 2.0 can also address input and output tensors by the index. Some model formats like ONNX are sensitive to the input and output order, which is preserved by OpenVINO 2022.1.
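
A minimal Python sketch of the two addressing styles, assuming a model whose first input tensor is named `input` (placeholder path and name):

```python
from openvino.runtime import Core

model = Core().read_model("model.xml")

# API 2.0 addresses tensors either by name or by positional index.
by_name = model.input("input")   # by tensor name, as stored from the original model
by_index = model.input(0)        # by index, preserving the original order
print(by_name.get_partial_shape(), by_index.get_element_type())
```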

@@ -19,7 +19,8 @@
@endsphinxdirective
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), ONNX, or PaddlePaddle model and execute it on preferred devices.
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR),
TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX, or PaddlePaddle model and execute it on preferred devices.
OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and the underlying plugin backend.
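
A minimal end-to-end sketch of this API in Python (placeholder model path; `CPU` chosen as an example device plugin; assumes a static input shape):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # IR, TensorFlow, ONNX, or PaddlePaddle
compiled = core.compile_model(model, "CPU")  # select a device plugin

# Run a single synchronous inference with a zero-filled dummy input.
dummy = np.zeros(compiled.input(0).shape, dtype=np.float32)
result = compiled([dummy])[compiled.output(0)]
```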

@@ -10,7 +10,7 @@ Most available preprocessing steps can also be performed via command-line options
## Code example - Saving Model with Preprocessing to OpenVINO IR
When some preprocessing steps cannot be integrated into the execution graph using Model Optimizer command-line options (for example, `YUV`->`RGB` color space conversion, `Resize`, etc.), it is possible to write simple code (sketched below) that:
- Reads the original model (OpenVINO IR, ONNX, PaddlePaddle).
- Reads the original model (OpenVINO IR, TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX, PaddlePaddle).
- Adds the preprocessing/postprocessing steps.
- Saves the resulting model as IR (`.xml` and `.bin`).
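
A minimal Python sketch of this flow, assuming an original model with a single NV12 image input (placeholder paths; mirrors the `openvino.preprocess` API):

```python
from openvino.preprocess import ColorFormat, PrePostProcessor, ResizeAlgorithm
from openvino.runtime import Core, Layout, Type, serialize

core = Core()
model = core.read_model("model.onnx")  # original model: IR, ONNX, PaddlePaddle...

ppp = PrePostProcessor(model)
# Describe the real input data: dynamically sized NV12 frames of u8.
ppp.input().tensor() \
    .set_element_type(Type.u8) \
    .set_spatial_dynamic_shape() \
    .set_layout(Layout("NHWC")) \
    .set_color_format(ColorFormat.NV12_TWO_PLANES, ["y", "uv"])
# Embed YUV->BGR conversion and resizing into the graph itself.
ppp.input().preprocess() \
    .convert_color(ColorFormat.BGR) \
    .resize(ResizeAlgorithm.RESIZE_LINEAR) \
    .convert_element_type(Type.f32)
ppp.input().model().set_layout(Layout("NCHW"))
model = ppp.build()

# Save the resulting model as IR (.xml and .bin).
serialize(model, "model_with_preproc.xml", "model_with_preproc.bin")
```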

@@ -17,7 +17,8 @@ This guide presents how to use OpenVINO securely with protected models.
After a model is optimized by the OpenVINO Model Optimizer, it is deployed
to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized
model is stored on an edge device and is executed by the OpenVINO Runtime.
ONNX and PDPD models can be read natively by OpenVINO Runtime as well.
TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX
and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
Encrypting and optimizing a model before deploying it to the edge device can be
used to protect deep-learning models. The edge device should keep the stored model
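
A minimal Python sketch of this idea (the `decrypt` helper and the `.enc` paths are hypothetical placeholders; assumes `read_model` accepts in-memory buffers, as API 2.0 does):

```python
import numpy as np
from openvino.runtime import Core, Tensor

def decrypt(path: str) -> bytes:
    # Hypothetical helper: replace with real decryption (e.g. AES-GCM).
    with open(path, "rb") as f:
        return f.read()

# The plain model exists only in memory; nothing decrypted hits the disk.
xml_data = decrypt("model.xml.enc")
weights = Tensor(np.frombuffer(decrypt("model.bin.enc"), dtype=np.uint8))

model = Core().read_model(model=xml_data, weights=weights)
```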

@@ -25,6 +25,7 @@
openvino_docs_OV_Glossary
openvino_docs_Legal_Information
openvino_docs_telemetry_information
openvino_docs_MO_DG_TensorFlow_Frontend
Case Studies <https://www.intel.com/openvino-success-stories>

@@ -0,0 +1,13 @@
# OpenVINO TensorFlow Frontend Capabilities and Limitations {#openvino_docs_MO_DG_TensorFlow_Frontend}
TensorFlow Frontend is a C++-based frontend for the conversion of TensorFlow models, available as a preview feature starting from the 2022.3 release.
This means you can start experimenting with the `--use_new_frontend` option passed to Model Optimizer to enjoy an improved conversion time for a limited scope of models,
or load TensorFlow models directly through the `read_model()` method.
The current limitations:
* IRs generated by the new TensorFlow Frontend are compatible only with OpenVINO API 2.0.
* There is no full parity yet between the legacy Model Optimizer TensorFlow frontend and the new TensorFlow Frontend, so the legacy frontend remains the primary path for model conversion.
* Model coverage and performance are continuously improving, so conversion-phase failures as well as performance and accuracy issues might occur for models that are not yet covered.
Known unsupported models: object detection models and all models with transformation configs, models with TF1/TF2 control flow, models using the Complex type, and training parts of models.
* The `read_model()` method supports only the `*.pb` format, while Model Optimizer (or a `convert_model` call) also accepts the other formats supported by the existing legacy frontend (a minimal loading sketch follows this list).
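
A minimal Python sketch of the direct loading path (placeholder path for a frozen `*.pb` graph):

```python
from openvino.runtime import Core

core = Core()
# Loads a frozen TensorFlow graph through openvino_tensorflow_frontend,
# skipping offline conversion to IR.
model = core.read_model("model.pb")
compiled = core.compile_model(model, "CPU")
```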