TensorFlow Lite FrontEnd: documentation changes (#17187)

* First glance doc changes

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Authored by: Evgenya Stepyreva
Date: 2023-04-25 16:18:24 +04:00
Committed by: GitHub
Parent: 27210b6505
Commit: ee4ccec190
21 changed files with 165 additions and 21 deletions


@@ -112,6 +112,7 @@ OpenVINO Runtime uses frontend libraries dynamically to read models in different
 - ``openvino_ir_frontend`` is used to read OpenVINO IR.
 - ``openvino_tensorflow_frontend`` is used to read TensorFlow file format.
+- ``openvino_tensorflow_lite_frontend`` is used to read TensorFlow Lite file format.
 - ``openvino_onnx_frontend`` is used to read ONNX file format.
 - ``openvino_paddle_frontend`` is used to read Paddle file format.
@@ -119,7 +120,7 @@ Depending on the model format types that are used in the application in `ov::Cor
 .. note::
-   To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
+   To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.

 (Legacy) Preprocessing via G-API
 ++++++++++++++++++++++++++++++++


@@ -16,7 +16,7 @@
 openvino_docs_OV_UG_model_state_intro
-OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
+OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
 OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.


@@ -22,7 +22,7 @@ When some preprocessing steps cannot be integrated into the execution graph usin
 Model Optimizer command-line options (for example, ``YUV``->``RGB`` color space conversion,
 ``Resize``, etc.), it is possible to write a simple code which:
-* Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
+* Reads the original model (OpenVINO IR, TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle).
 * Adds the preprocessing/postprocessing steps.
 * Saves resulting model as IR (``.xml`` and ``.bin``).


@@ -11,7 +11,7 @@ This guide presents how to use OpenVINO securely with protected models.
 Secure Model Deployment
 #######################
-After a model is optimized by the OpenVINO Model Optimizer, it's deployed to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on edge device and is executed by the OpenVINO Runtime. TensorFlow, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
+After a model is optimized by the OpenVINO Model Optimizer, it's deployed to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on edge device and is executed by the OpenVINO Runtime. TensorFlow, TensorFlow Lite, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
 Encrypting and optimizing model before deploying it to the edge device can be used to protect deep-learning models. The edge device should keep the stored model protected all the time and have the model decrypted **in runtime only** for use by the OpenVINO Runtime.