[DOCS] Add FrontEnd API note (#18154)
* add note
* fix typo
* add advanced-cases note
* tf doc note
* wording change
This commit is contained in:
parent
f306a11b82
commit
c3b7e81722
@ -7,6 +7,8 @@ Introduction to ONNX
`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to transfer models between different frameworks easily. It is widely supported by deep learning tools such as PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.
.. note:: ONNX models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still required in more complex cases, such as specifying new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
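For illustration, a minimal sketch of reading an ONNX model directly, assuming the OpenVINO Python Runtime API; the ``model.onnx`` file name and the ``CPU`` device are placeholders:

.. code-block:: py

   import openvino.runtime as ov

   core = ov.Core()
   # Read the ONNX file as-is; no prior conversion to IR is needed
   model = core.read_model("model.onnx")
   compiled_model = core.compile_model(model, "CPU")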
Converting an ONNX Model
########################
@ -4,6 +4,8 @@
This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions differ depending on the PaddlePaddle model format.
.. note:: PaddlePaddle models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still required in more complex cases, such as specifying new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
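As a hedged sketch, a PaddlePaddle inference model can be loaded in one step; the ``inference.pdmodel`` file name (with its weights file expected alongside) and the ``CPU`` device are assumptions:

.. code-block:: py

   import openvino.runtime as ov

   core = ov.Core()
   # compile_model also accepts a model path directly, skipping read_model
   compiled_model = core.compile_model("inference.pdmodel", "CPU")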
Converting PaddlePaddle Model Inference Format
##############################################
@ -4,6 +4,8 @@
This page provides general instructions on how to run model conversion from the TensorFlow format to the OpenVINO IR format. The instructions differ depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
.. note:: TensorFlow models are supported via the :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still required in more complex cases, such as specifying new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
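A minimal sketch for TensorFlow, assuming a frozen graph file named ``model.pb`` (a placeholder; recent releases can also read SavedModel directories) and the ``CPU`` device:

.. code-block:: py

   import openvino.runtime as ov

   core = ov.Core()
   # Read the frozen TensorFlow graph without converting it to IR first
   model = core.read_model("model.pb")
   compiled_model = core.compile_model(model, "CPU")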
To use the model conversion API, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
Converting TensorFlow 1 Models
@ -8,7 +8,7 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t
mo --input_model <INPUT_MODEL>.tflite
.. note:: TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API.
.. note:: TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still required in more complex cases, such as specifying new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
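For illustration, a hedged sketch of reading a TensorFlow Lite model directly; the ``model.tflite`` file name and the ``CPU`` device are placeholders:

.. code-block:: py

   import openvino.runtime as ov

   core = ov.Core()
   # A .tflite file is self-contained, so a single path is enough
   model = core.read_model("model.tflite")
   compiled_model = core.compile_model(model, "CPU")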
Supported TensorFlow Lite Layers
###################################