[DOCS] Updating Supported Model Formats article (#18495)

* supported_model_formats

* add-method

* apply-commits-18214

Applying commits from:
https://github.com/openvinotoolkit/openvino/pull/18214

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* review-suggestions

* Update supported_model_formats.md
Committed by Sebastian Golebiewski on 2023-08-23 18:26:07 +02:00 (commit 6d3726024d, parent 6a628f7056).


openvino_docs_MO_DG_prepare_model_convert_model_tutorials
.. meta::
:description: Learn about supported model formats and the methods used to convert, read and compile them in OpenVINO™.
**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other model formats presented below will ultimately be converted to :doc:`OpenVINO IR <openvino_ir>`.
**PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle** may be used without any prior conversion to OpenVINO IR and, except for PyTorch, can be read by the OpenVINO Runtime API with ``read_model()`` or ``compile_model()``. Additional adjustments to a model can be made with the ``convert_model()`` method, which allows you to set shapes, types, or the layout of model inputs, cut parts of the model, freeze inputs, and more. Detailed information about the capabilities of ``convert_model()`` can be found in :doc:`this <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` article.
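As a quick illustration of such an adjustment, the sketch below converts a hypothetical ONNX file while overriding its input shape. The import paths and the ``input_shape`` argument are assumptions about the model conversion API; adjust them to your installed version.
.. code-block:: py
:force:
from openvino.runtime import Core
from openvino.tools.mo import convert_model
core = Core()
# Convert a hypothetical ONNX file and force a static 1x3x224x224 input shape during conversion.
ov_model = convert_model("model.onnx", input_shape=[1, 3, 224, 224])
compiled_model = core.compile_model(ov_model, "AUTO")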
Below you will find code examples for each method, for all supported model formats.
.. tab-set::
.. tab-item:: PyTorch
:sync: torch
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
This is the only method applicable to PyTorch models.
.. dropdown:: List of supported formats:
* **Python objects**:
* ``torch.nn.Module``
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
.. code-block:: py
:force:
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
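If a PyTorch module does not carry enough shape information on its own, a sample input can be provided for tracing. The sketch below is a minimal illustration; the ``example_input`` parameter and the import path of ``convert_model`` are assumptions about the conversion API, so verify them against the PyTorch conversion guide.
.. code-block:: py
:force:
import torch
import torchvision
from openvino.runtime import Core
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(pretrained=True)
# A dummy tensor gives the converter a concrete input shape to trace the module with.
ov_model = convert_model(model, example_input=torch.zeros(1, 3, 224, 224))
compiled_model = Core().compile_model(ov_model, "AUTO")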
.. tab-item:: TensorFlow
:sync: tf
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use but do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use (a sketch of the serialization step follows the example below).
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
* **Python objects**:
* ``tf.keras.Model``
* ``tf.keras.layers.Layer``
* ``tf.Module``
* ``tf.compat.v1.Graph``
* ``tf.compat.v1.GraphDef``
* ``tf.function``
* ``tf.compat.v1.Session``
* ``tf.train.Checkpoint``
.. code-block:: py
:force:
ov_model = convert_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/101-tensorflow-to-openvino-with-output.html>`__
on this topic.
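The converted ``ov.Model`` can also be saved as OpenVINO IR so that conversion is performed only once. The following sketch assumes the ``serialize`` helper of the runtime API and hypothetical file names.
.. code-block:: py
:force:
from openvino.runtime import Core, serialize
from openvino.tools.mo import convert_model
core = Core()
ov_model = convert_model("saved_model.pb")
# Save the converted model as IR (.xml plus .bin) for subsequent runs.
serialize(ov_model, "model.xml")
# Later, load the IR directly instead of repeating the conversion.
compiled_model = core.compile_model(core.read_model("model.xml"), "AUTO")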
* The ``read_model()`` and ``compile_model()`` methods:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: py
:force:
ov_model = read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
For the TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read with ``read_model()`` and used for inference, as shown in the sketch after the command below.
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pb
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`.
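Once ``mo`` has produced the IR (an ``.xml`` file with an accompanying ``.bin``), it can be loaded like any other supported format; the file name below is hypothetical.
.. code-block:: py
:force:
from openvino.runtime import Core
core = Core()
ov_model = core.read_model("model.xml")
compiled_model = core.compile_model(ov_model, "AUTO")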
.. tab-item:: TensorFlow Lite
:sync: tflite
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use but do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/119-tflite-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read with ``read_model()`` and used for inference.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.tflite
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`.
.. tab-item:: ONNX
:sync: onnx
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use but do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO")
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO");
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.onnx", "AUTO", 0, &compiled_model);
For details on the conversion, refer to the :doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read with ``read_model()`` and used for inference.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.onnx
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`.
.. tab-item:: PaddlePaddle
:sync: pdpd
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use but do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
* **Python objects**:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/103-paddle-to-openvino-classification-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read with ``read_model()`` and used for inference.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pdmodel
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`.
**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted to OpenVINO IR before running inference. The model conversion may in some cases involve intermediate steps. For more details, refer to the :doc:`MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`, :doc:`Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`, and :doc:`Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>` conversion guides.
OpenVINO is in the process of **deprecating these formats** and will **remove their support entirely in the future**. Converting them to ONNX, or staying on an LTS release, may be a viable option for continued inference with the OpenVINO Toolkit, as sketched below.
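As a rough illustration of the ONNX route, the sketch below exports a trained MXNet model to ONNX with MXNet's own exporter and then compiles the result in OpenVINO. The ``mx.onnx.export_model`` call and its arguments are an assumption about MXNet 1.9; check the MXNet documentation for the exact signature.
.. code-block:: py
:force:
import mxnet as mx
from openvino.runtime import Core
# Hypothetical symbol/params files of a trained MXNet model; exporter arguments are assumed, not verified.
onnx_path = mx.onnx.export_model("model-symbol.json", "model-0000.params",
                                 in_shapes=[(1, 3, 224, 224)], onnx_file_path="model.onnx")
compiled_model = Core().compile_model(onnx_path, "AUTO")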
.. note::
To convert models, :doc:`install OpenVINO™ Development Tools <openvino_docs_install_guides_install_dev_tools>`,
which include the model conversion API.
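For reference, the development tools are distributed on PyPI as ``openvino-dev``; a framework-specific extra (for example, ``openvino-dev[tensorflow2]``) can be added to pull in the corresponding converter dependencies.
.. code-block:: sh
pip install openvino-dev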
Refer to the following articles for details on conversion for different formats and models:
* :doc:`How to convert ONNX <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
* :doc:`How to convert PaddlePaddle <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
* :doc:`How to convert TensorFlow <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
* :doc:`How to convert TensorFlow Lite <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`
* :doc:`How to convert MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
* :doc:`How to convert Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
* :doc:`How to convert Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`
* :doc:`Conversion examples for specific models <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`
* :doc:`Model preparation methods <openvino_docs_model_processing_introduction>`
@endsphinxdirective