TensorFlow Lite FrontEnd: documentation changes (#17187)
* First glance doc changes
* Apply suggestions from code review
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Commit ee4ccec190 (parent 27210b6505), committed by GitHub.
@@ -18,7 +18,7 @@ Every deep learning workflow begins with obtaining a model. You can choose to pr
 The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
-Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
+Conversion is not required for ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
 This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
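The direct-import path added above can be exercised with a minimal sketch (OpenVINO Python API; the ``model.tflite`` file name is illustrative):

.. code-block:: python

   # Minimal sketch: read a TensorFlow Lite model directly, without converting it to IR.
   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.tflite")            # handled by the TensorFlow Lite frontend
   compiled_model = core.compile_model(model, "CPU")  # ready for inference on the CPU device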
@@ -19,7 +19,7 @@
 OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
 The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
-TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
+TensorFlow, PyTorch, ONNX, TensorFlow Lite, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
 each of the supported frameworks. To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>`.
 Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for custom operation may appear in two cases:

@@ -52,13 +52,13 @@ Mapping from Framework Operation
 Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following:
-1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
+1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
 2. If a model is represented in the Caffe, Kaldi or MXNet formats, then :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
-Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with ``read_model`` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
+Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with ``read_model`` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
-If you are implementing extensions for new ONNX, PaddlePaddle or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
+If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
 1. Implemented in C++ only.
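For the runtime import path from item 1 above (``read_model``), a custom operation can be mapped with the Python Frontend Extension API. The sketch below is a hedged example: the operation name ``MY_RELU6`` and the file name are hypothetical, and extensions intended for the Model Optimizer ``--extensions`` option still have to be implemented in C++, as stated above.

.. code-block:: python

   # Minimal sketch: map a hypothetical custom TensorFlow Lite operation "MY_RELU6"
   # onto a standard OpenVINO operation before importing the model at runtime.
   from openvino.frontend import ConversionExtension
   from openvino.runtime import Core
   from openvino.runtime import opset10 as ops

   def convert_my_relu6(node):
       # 'node' is a NodeContext; rebuild the custom operation as Clamp(x, 0, 6).
       x = node.get_input(0)
       return ops.clamp(x, 0.0, 6.0).outputs()

   core = Core()
   core.add_extension(ConversionExtension("MY_RELU6", convert_my_relu6))
   model = core.read_model("model_with_custom_op.tflite")  # hypothetical file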
@@ -7,7 +7,7 @@ mapping of custom operations from framework model representation to OpenVINO rep
 Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to
 understand the entire flow.
-This API is applicable to new frontends only, which exist for ONNX, PaddlePaddle, and TensorFlow.
+This API is applicable to new frontends only, which exist for ONNX, TensorFlow Lite, PaddlePaddle, and TensorFlow.
 If a different model format is used, follow legacy
 :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
 guide.

@@ -19,7 +19,7 @@
 Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
-To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
+To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, TensorFlow Lite, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
 Note that Model Optimizer does not infer models.
@@ -814,6 +814,120 @@ paddlepaddle >= 2.1
 ========================================== ===============================================================================
+
+TensorFlow Lite Supported Operators
+###########################################################
+
+========================================== ===============================================================================
+Operator Name in TensorFlow Lite           Limitations
+========================================== ===============================================================================
+ABS
+ADD
+ADD_N
+ARG_MAX
+ARG_MIN
+AVERAGE_POOL_2D
+BATCH_MATMUL
+BATCH_TO_SPACE_ND
+BROADCAST_ARGS
+BROADCAST_TO
+CAST
+CEIL
+COMPLEX_ABS                                Supported in a specific pattern with RFFT2D
+CONCATENATION
+CONV_2D
+COS
+DEPTH_TO_SPACE
+DEPTHWISE_CONV_2D
+DEQUANTIZE
+DIV
+ELU
+EQUAL
+EXP
+EXPAND_DIMS
+FILL
+FLOOR
+FLOOR_DIV
+FLOOR_MOD
+FULLY_CONNECTED
+GATHER
+GATHER_ND
+GREATER
+GREATER_EQUAL
+HARD_SWISH
+L2_NORMALIZATION
+LEAKY_RELU
+LESS
+LESS_EQUAL
+LOG
+LOG_SOFTMAX
+LOGICAL_AND
+LOGICAL_NOT
+LOGICAL_OR
+LOGISTIC
+MATRIX_DIAG
+MAX_POOL_2D
+MAXIMUM
+MEAN
+MINIMUM
+MIRROR_PAD
+MUL
+NEG
+NOT_EQUAL
+ONE_HOT
+PACK
+PAD
+PADV2
+POW
+PRELU
+QUANTIZE
+RANGE
+RANK
+REDUCE_ALL
+REDUCE_ANY
+REDUCE_MAX
+REDUCE_MIN
+REDUCE_PROD
+RELU
+RELU6
+RESHAPE
+RESIZE_BILINEAR
+RESIZE_NEAREST_NEIGHBOR
+REVERSE_V2
+RFFT2D                                     Supported in a specific pattern with COMPLEX_ABS
+ROUND
+RSQRT
+SCATTER_ND
+SEGMENT_SUM
+SELECT
+SELECT_V2
+SHAPE
+SIGN
+SIN
+SLICE
+SOFTMAX
+SPACE_TO_BATCH_ND
+SPACE_TO_DEPTH
+SPLIT
+SPLIT_V
+SQRT
+SQUARE
+SQUARED_DIFFERENCE
+SQUEEZE
+STRIDED_SLICE
+SUB
+SUM
+TANH
+TILE
+TOPK_V2
+TRANSPOSE
+TRANSPOSE_CONV
+UNIQUE
+UNPACK
+WHERE
+ZEROS_LIKE
+========================================== ===============================================================================
+
 @endsphinxdirective
@@ -0,0 +1,24 @@
+# Converting a TensorFlow Lite Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}
+
+@sphinxdirective
+
+To convert a TensorFlow Lite model, use the ``mo`` script and specify the path to the input ``.tflite`` model file:
+
+.. code-block:: sh
+
+   mo --input_model <INPUT_MODEL>.tflite
+
+.. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API.
+
+Supported TensorFlow Lite Layers
+###################################
+
+For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
+
+Supported TensorFlow Lite Models
+###################################
+
+More than eighty percent of public TensorFlow Lite models are supported from open sources `TensorFlow Hub <https://tfhub.dev/s?deployment-format=lite&subtype=module,placeholder>`__ and `MediaPipe <https://developers.google.com/mediapipe>`__.
+Unsupported models usually have custom TensorFlow Lite operations.
+
+@endsphinxdirective
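The same conversion can also be driven from Python. A hedged sketch (it assumes the ``openvino.tools.mo.convert_model`` entry point shipped with recent releases; file names are illustrative):

.. code-block:: python

   # Minimal sketch: convert a TensorFlow Lite model to OpenVINO IR from Python.
   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   ov_model = convert_model("model.tflite")       # returns an in-memory openvino Model
   serialize(ov_model, "model.xml", "model.bin")  # save the IR for later inference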
@@ -13,8 +13,8 @@ Intermediate Representation should be specifically formed to be suitable for low
 Such a model is called a Low Precision IR and can be generated in two ways:
 * By :doc:`quantizing regular IR with the Post-Training Optimization tool <pot_introduction>`
-* Using Model Optimizer for a model pre-trained for Low Precision inference: TensorFlow pre-TFLite models (``.pb`` model file with ``FakeQuantize`` operations) and ONNX quantized models.
-  Both TensorFlow and ONNX quantized models can be prepared by `Neural Network Compression Framework <https://github.com/openvinotoolkit/nncf/blob/develop/README.md>`__.
+* Using Model Optimizer for a model pre-trained for Low Precision inference: TensorFlow models (``.pb`` model file with ``FakeQuantize`` operations), quantized TensorFlow Lite models and ONNX quantized models.
+  TensorFlow and ONNX quantized models can be prepared by `Neural Network Compression Framework <https://github.com/openvinotoolkit/nncf/blob/develop/README.md>`__.
 For an operation to be executed in INT8, it must have `FakeQuantize` operations as inputs.
 For more details, see the :doc:`specification of FakeQuantize operation <openvino_docs_ops_quantization_FakeQuantize_1>`.
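One way to obtain a model whose operations have ``FakeQuantize`` inputs is post-training quantization with NNCF. A hedged sketch (it assumes a recent ``nncf`` release providing ``nncf.quantize`` for OpenVINO models and a representative calibration set; all names, shapes, and file names are illustrative):

.. code-block:: python

   # Minimal sketch: quantize a model post-training so that INT8-executable
   # operations receive FakeQuantize inputs, then save the Low Precision IR.
   import numpy as np
   import nncf
   from openvino.runtime import Core, serialize

   core = Core()
   model = core.read_model("model.xml")  # a regular FP32 IR (or any readable format)

   # A few hundred representative inputs shaped like the model input.
   calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
   calibration_dataset = nncf.Dataset(calibration_items)

   quantized_model = nncf.quantize(model, calibration_dataset)
   serialize(quantized_model, "model_int8.xml", "model_int8.bin")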
@@ -9,6 +9,7 @@
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
+openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
 openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe

@@ -18,7 +19,7 @@
 **OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.
-**ONNX, PaddlePaddle, TensorFlow** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
+**ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
 **MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.

@@ -27,6 +28,7 @@ Refer to the following articles for details on conversion for different formats
 * :doc:`How to convert ONNX <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
 * :doc:`How to convert PaddlePaddle <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
 * :doc:`How to convert TensorFlow <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
+* :doc:`How to convert TensorFlow Lite <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`
 * :doc:`How to convert MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
 * :doc:`How to convert Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
 * :doc:`How to convert Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`
@@ -13,7 +13,7 @@
 This article describes Model Optimizer internals. Altering them may result in application instability, and in case of future changes to the API, lack of backward compatibility.
-> **NOTE**: If you want to add support for ONNX, PaddlePaddle or Tensorflow operations, or you are not familiar with other extension alternatives in OpenVINO, read [this guide](../../../Extensibility_UG/Intro.md) instead.
+> **NOTE**: If you want to add support for ONNX, TensorFlow Lite, PaddlePaddle or TensorFlow operations, or you are not familiar with other extension alternatives in OpenVINO, read [this guide](../../../Extensibility_UG/Intro.md) instead.
 <a name="model-optimizer-extensibility"></a>Model Optimizer extensibility mechanism enables support of new operations and custom transformations to generate the optimized intermediate representation (IR) as described in the
 [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md). This
@@ -112,6 +112,7 @@ OpenVINO Runtime uses frontend libraries dynamically to read models in different
 - ``openvino_ir_frontend`` is used to read OpenVINO IR.
 - ``openvino_tensorflow_frontend`` is used to read TensorFlow file format.
+- ``openvino_tensorflow_lite_frontend`` is used to read TensorFlow Lite file format.
 - ``openvino_onnx_frontend`` is used to read ONNX file format.
 - ``openvino_paddle_frontend`` is used to read Paddle file format.

@@ -119,7 +120,7 @@ Depending on the model format types that are used in the application in `ov::Cor
 .. note::
-   To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
+   To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
 (Legacy) Preprocessing via G-API
 ++++++++++++++++++++++++++++++++
@@ -16,7 +16,7 @@
 openvino_docs_OV_UG_model_state_intro
-OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
+OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
 OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.
@@ -22,7 +22,7 @@ When some preprocessing steps cannot be integrated into the execution graph usin
 Model Optimizer command-line options (for example, ``YUV``->``RGB`` color space conversion,
 ``Resize``, etc.), it is possible to write a simple code which:
-* Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
+* Reads the original model (OpenVINO IR, TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle).
 * Adds the preprocessing/postprocessing steps.
 * Saves resulting model as IR (``.xml`` and ``.bin``).
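A hedged sketch of these three steps with the runtime preprocessing API (``openvino.preprocess``); the file names, layouts, and chosen preprocessing steps are illustrative only:

.. code-block:: python

   # Minimal sketch: read a model, embed preprocessing into the graph, save as IR.
   from openvino.preprocess import PrePostProcessor, ColorFormat
   from openvino.runtime import Core, Layout, Type, serialize

   core = Core()
   model = core.read_model("model.tflite")  # OpenVINO IR, TensorFlow, TF Lite, ONNX, PaddlePaddle

   ppp = PrePostProcessor(model)
   ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC")).set_color_format(ColorFormat.BGR)
   ppp.input().model().set_layout(Layout("NHWC"))
   ppp.input().preprocess().convert_color(ColorFormat.RGB).convert_element_type(Type.f32)
   model = ppp.build()  # preprocessing is now part of the model graph

   serialize(model, "model_with_preprocessing.xml", "model_with_preprocessing.bin")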
@@ -11,7 +11,7 @@ This guide presents how to use OpenVINO securely with protected models.
 Secure Model Deployment
 #######################
-After a model is optimized by the OpenVINO Model Optimizer, it's deployednto target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on edge device and is executed by the OpenVINO Runtime. TensorFlow, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
+After a model is optimized by the OpenVINO Model Optimizer, it's deployed to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on edge device and is executed by the OpenVINO Runtime. TensorFlow, TensorFlow Lite, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
 Encrypting and optimizing model before deploying it to the edge device can be used to protect deep-learning models. The edge device should keep the stored model protected all the time and have the model decrypted **in runtime only** for use by the OpenVINO Runtime.
@@ -194,6 +194,6 @@ In this case OpenVINO CMake scripts take `TBBROOT` environment variable into acc
 [PDPD]:https://github.com/PaddlePaddle/Paddle
 [TensorFlow]:https://www.tensorflow.org/
+[TensorFlow Lite]:https://www.tensorflow.org/lite
-[PyTorch]:https://www.tensorflow.org/lite
+[PyTorch]:https://pytorch.org/
 [FlatBuffers]:https://google.github.io/flatbuffers/
 [oneTBB]:https://github.com/oneapi-src/oneTBB
@@ -44,6 +44,7 @@ OpenVINO components define several common groups which allow to run tests for se
 - ONNX_FE - ONNX frontend tests
 - PADDLE_FE - Paddle frontend tests
 - TF_FE - TensorFlow frontend tests
+- TFL_FE - TensorFlow Lite frontend tests
 - CPU - CPU plugin tests
 - GPU - GPU plugin tests
 - GNA - GNA plugin tests
@@ -27,7 +27,7 @@ This section provides reference documents that guide you through the OpenVINO to
 | Apart from the core components, OpenVINO offers tools, plugins, and expansions revolving around it, even if not constituting necessary parts of its workflow. This section gives you an overview of what makes up the OpenVINO toolkit.
 | :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>`
-| The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. Learn how to extend OpenVINO functionality with custom settings.
+| The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including TensorFlow, PyTorch, ONNX, TensorFlow Lite, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. Learn how to extend OpenVINO functionality with custom settings.
 | :doc:`Media Processing and Computer Vision Libraries <media_processing_cv_libraries>`
 | The OpenVINO™ toolkit also works with the following media processing frameworks and libraries:
@@ -80,13 +80,13 @@ Glossary of terms used in OpenVINO™
 | OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.
 | *OpenVINO™ API*
-| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow file formats, set input and output formats and execute the model on various devices.
+| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite file formats, set input and output formats and execute the model on various devices.
 | *OpenVINO™ Runtime*
 | A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.
 | *<code>ov::Model</code>*
-| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow formats. Consists of model structure, weights and biases.
+| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite formats. Consists of model structure, weights and biases.
 | *<code>ov::CompiledModel</code>*
 | An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously.
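A short sketch of these glossary terms in use: a model read from one of the supported formats becomes an ``ov::Model``, compiling it yields an ``ov::CompiledModel``, and an infer request runs synchronous inference (Python API; the file name and input shape are illustrative, and the model is assumed to have a static input shape):

.. code-block:: python

   # Minimal sketch: Model -> CompiledModel -> infer request -> synchronous inference.
   import numpy as np
   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.tflite")            # or .xml, .onnx, .pb, ...
   compiled_model = core.compile_model(model, "CPU")  # CompiledModel for the CPU device

   infer_request = compiled_model.create_infer_request()
   input_data = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)
   results = infer_request.infer({0: input_data})     # maps input index 0 to the data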
@@ -163,6 +163,7 @@ Additional Resources
 To learn more about converting models from specific frameworks, go to:
 * :ref:`Convert Your Caffe Model <convert model caffe>`
 * :ref:`Convert Your TensorFlow Model <convert model tf>`
+* :ref:`Convert Your TensorFlow Lite Model <convert model tfl>`
 * :ref:`Convert Your Apache MXNet Model <convert model mxnet>`
 * :ref:`Convert Your Kaldi Model <convert model kaldi>`
 * :ref:`Convert Your ONNX Model <convert model onnx>`

@@ -218,6 +218,7 @@ Additional Resources
 To learn more about converting models from specific frameworks, go to:
 * :ref:`Convert Your Caffe Model <convert model caffe>`
 * :ref:`Convert Your TensorFlow Model <convert model tf>`
+* :ref:`Convert Your TensorFlow Lite Model <convert model tfl>`
 * :ref:`Convert Your Apache MXNet Model <convert model mxnet>`
 * :ref:`Convert Your Kaldi Model <convert model kaldi>`
 * :ref:`Convert Your ONNX Model <convert model onnx>`
@@ -6,7 +6,7 @@ ov::AnyMap gpu_config = {};
 //! [part5]
 ov::Core core;
-// Read a network in IR, TensorFlow, PaddlePaddle, or ONNX format:
+// Read a network in IR, TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX format:
 std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

 // Configure the CPU and the GPU devices when compiling model
@@ -17,7 +17,7 @@ ov_add_test_target(
         ADD_CLANG_FORMAT
         LABELS
             OV
             TF_FE
+            TFL_FE
 )

 # Test model generating