[DOCS] Updating MO documentation (#18757)
* restructure-mo-docs
* apply-commits-18214: Applying commits from https://github.com/openvinotoolkit/openvino/pull/18214
* update
* Apply suggestions from code review (Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>)
* Apply suggestions from code review
* Update model_introduction.md
* Update docs/resources/tensorflow_frontend.md
* Create MO_Python_API.md
* Apply suggestions from code review (Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>)
* revert
* Update Cutting_Model.md
* serialize
* serialize-in-image
* Update Deep_Learning_Model_Optimizer_DevGuide.md
* Apply suggestions from code review (Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>)
* Update model_conversion_diagram.svg

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
This commit is contained in:
parent 6d3726024d
commit 22fe12fe9b
@@ -3,7 +3,8 @@
 @sphinxdirective
 
 .. meta::
-   :description: Preparing models for OpenVINO Runtime. Learn how to convert and compile models from different frameworks or read them directly.
+   :description: Preparing models for OpenVINO Runtime. Learn about the methods
+                 used to read, convert and compile models from different frameworks.
 
 
 .. toctree::
@@ -17,39 +18,43 @@
 Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, `Torchvision models <https://pytorch.org/hub/>`__.
 
-:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows converting them to it's own, `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ (`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ ), providing a tool dedicated to this task.
+Import a model using ``read_model()``
+#################################################
 
-There are several options to convert a model from original framework to OpenVINO model format (``ov.Model``).
+Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate step for model conversion, that is, ``mo.convert_model``.
 
-The ``read_model()`` method reads a model from a file and produces ``ov.Model``. If the file is in one of the supported original framework file formats, it is converted automatically to OpenVINO Intermediate Representation. If the file is already in the OpenVINO IR format, it is read "as-is", without any conversion involved. ``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
+The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework file :doc:`formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
 
-Convert a model in Python
-######################################
+You can also convert a model from the original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using the ``convert_model()`` method. More details about ``convert_model()`` are provided in the :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
 
-Model conversion API, specifically, the ``mo.convert_model()`` method converts a model from original framework to ``ov.Model``. ``mo.convert_model()`` returns ``ov.Model`` object in memory so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (python script or Jupiter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application. In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
+``ov.Model`` can be saved to IR using the ``ov.save_model()`` method. The saved IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>`, which applies post-training quantization methods.
 
-.. image:: _static/images/model_conversion_diagram.svg
-   :alt: model conversion diagram
+.. note::
 
-Convert a model with ``mo`` command-line tool
-#############################################
+   ``convert_model()`` also allows you to perform input/output cut, add pre-processing, or add custom Python conversion extensions.
 
-Another option to convert a model is to use ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure, as the ``mo.convert_model`` method.
+Convert a model with Python using ``mo.convert_model()``
+###########################################################
 
-``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
+Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
+
+In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
 
-The figure below illustrates the typical workflow for deploying a trained deep learning model:
+The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:
 
-.. image:: _static/images/BASIC_FLOW_MO_simplified.svg
-
-where IR is a pair of files describing the model:
-
 * ``.xml`` - Describes the network topology.
 * ``.bin`` - Contains the weights and biases binary data.
 
+.. image:: _static/images/model_conversion_diagram.svg
+   :alt: model conversion diagram
 
-Model files (not Python objects) from ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate step for model conversion, that is ``mo.convert_model``. OpenVINO provides C++ and Python APIs for importing the models to OpenVINO Runtime directly by just calling the ``read_model`` method.
+Convert a model using ``mo`` command-line tool
+#################################################
 
+Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure as the ``mo.convert_model()`` method.
+
+``mo`` requires a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation (IR) format, which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
 
 The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
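For illustration of this hunk, a minimal sketch of the routes described above: direct reading, in-memory conversion, and saving to IR. The ``openvino.tools.mo`` import path and the file names are assumptions, not part of the diff:

.. code-block:: py
   :force:

   from openvino.runtime import Core, serialize
   from openvino.tools.mo import convert_model

   core = Core()

   # Route 1: read a file directly; framework formats are converted internally
   model = core.read_model("model.onnx")

   # Route 2: convert explicitly in Python and keep the ov.Model in memory
   ov_model = convert_model("model.onnx")

   # Save the converted model to IR (a .xml/.bin pair) for later use
   serialize(ov_model, "model.xml", "model.bin")

   # Either route yields an ov.Model that can be compiled and inferred
   compiled_model = core.compile_model(ov_model, "AUTO")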
@@ -44,18 +44,21 @@ To convert a model to OpenVINO model format (``ov.Model``), you can use the foll
 
 If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:
 
-- model conversion API provides two parameters to override original input shapes for model conversion: ``input`` and ``input_shape``.
-  For more information about these parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
+- ``input`` and ``input_shape`` - the model conversion API parameters used to override original input shapes for model conversion.
+  For more information about the parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
 
-- To cut off unwanted parts of a model (such as unsupported operations and training sub-graphs),
-  use the ``input`` and ``output`` parameters to define new inputs and outputs of the converted model.
+- ``input`` and ``output`` - the model conversion API parameters used to define new inputs and outputs of the converted model to cut off unwanted parts (such as unsupported operations and training sub-graphs).
   For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` guide.
 
-You can also insert additional input pre-processing sub-graphs into the converted model by using
-the ``mean_values``, ``scales_values``, ``layout``, and other parameters described
-in the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
+- ``mean_values``, ``scale_values``, ``layout`` - the parameters used to insert additional input pre-processing sub-graphs into the converted model.
+  For more details, see the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
 
-The ``compress_to_fp16`` compression parameter in ``mo`` command-line tool allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to ``FP16`` data type. For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
+- ``compress_to_fp16`` - a compression parameter in the ``mo`` command-line tool, which allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to the ``FP16`` data type.
+  For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
 
 To get the full list of conversion parameters, run the following command:
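To ground the parameter list above, a hedged sketch of combining several of these options in a single ``convert_model()`` call (the model path, node name, shape, and mean/scale values are illustrative placeholders):

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model

   # Override the input shape and embed mean/scale pre-processing
   # into the converted model.
   ov_model = convert_model(
       "model.onnx",
       input="input_1",
       input_shape=[1, 3, 224, 224],
       mean_values=[127.5, 127.5, 127.5],
       scale_values=[127.5, 127.5, 127.5],
   )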
@@ -21,12 +21,33 @@ This page provides instructions on model conversion from the ONNX format to the
 
 Model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
 
-To convert an ONNX model, run model conversion with the path to the input model ``.onnx`` file:
+.. tab-set::
+
+   .. tab-item:: Python
+      :sync: py
+
+      To convert an ONNX model, run the ``convert_model()`` method with the path to the ``<INPUT_MODEL>.onnx`` file:
+
+      .. code-block:: py
+         :force:
+
+         ov_model = convert_model("<INPUT_MODEL>.onnx")
+         compiled_model = core.compile_model(ov_model, "AUTO")
+
+      .. important::
+
+         The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
+
+   .. tab-item:: CLI
+      :sync: cli
+
+      You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
 
-.. code-block:: sh
+      .. code-block:: sh
 
-   mo --input_model <INPUT_MODEL>.onnx
+         mo --input_model <INPUT_MODEL>.onnx
 
 There are no ONNX specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
 
 Supported ONNX Layers
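The Python tab above assumes that ``convert_model`` is imported and ``core`` already exists; a fuller, self-contained sketch of the same flow (import paths and model path are assumptions):

.. code-block:: py
   :force:

   from openvino.runtime import Core
   from openvino.tools.mo import convert_model

   core = Core()
   ov_model = convert_model("model.onnx")
   compiled_model = core.compile_model(ov_model, "AUTO")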
@@ -32,17 +32,15 @@ To convert a PaddlePaddle model, use the ``mo`` script and specify the path to t
 Converting PaddlePaddle Model From Memory Using Python API
 ##########################################################
 
-Model conversion API supports passing PaddlePaddle models directly from memory.
-
-Following PaddlePaddle model formats are supported:
+Model conversion API supports passing the following PaddlePaddle models directly from memory:
 
 * ``paddle.hapi.model.Model``
 * ``paddle.fluid.dygraph.layers.Layer``
 * ``paddle.fluid.executor.Executor``
 
-Converting certain PaddlePaddle models may require setting ``example_input`` or ``example_output``. Below examples show how to execute such the conversion.
+When you convert certain PaddlePaddle models, you may need to set the ``example_input`` or ``example_output`` parameters first. The examples below show how to convert the aforementioned model formats using these parameters.
 
-* Example of converting ``paddle.hapi.model.Model`` format model:
+* ``paddle.hapi.model.Model``
 
   .. code-block:: py
      :force:
@@ -64,9 +62,9 @@ Converting certain PaddlePaddle models may require setting ``example_input`` or
      from openvino.runtime import serialize
      serialize(ov_model, "ov_model.xml", "ov_model.bin")
 
-* Example of converting ``paddle.fluid.dygraph.layers.Layer`` format model:
+* ``paddle.fluid.dygraph.layers.Layer``
 
-  ``example_input`` is required while ``example_output`` is optional, which accept the following formats:
+  ``example_input`` is required while ``example_output`` is optional. Both accept the following formats:
 
   ``list`` with tensor(``paddle.Tensor``) or InputSpec(``paddle.static.input.InputSpec``)
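As an aside to this hunk, a hedged sketch of the two accepted ``example_input`` forms for the ``Layer`` case (the layer, shapes, and names are stand-ins, not taken from the diff):

.. code-block:: py
   :force:

   import paddle
   from openvino.tools.mo import convert_model

   layer = paddle.nn.Linear(10, 2)

   # example_input as a list with a paddle.Tensor
   ov_model = convert_model(layer, example_input=[paddle.rand([1, 10])])

   # example_input as a list with an InputSpec
   spec = paddle.static.InputSpec(shape=[1, 10], dtype="float32", name="x")
   ov_model = convert_model(layer, example_input=[spec])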
@@ -83,9 +81,9 @@ Converting certain PaddlePaddle models may require setting ``example_input`` or
      # convert to OpenVINO IR format
      ov_model = convert_model(model, example_input=[x])
 
-* Example of converting ``paddle.fluid.executor.Executor`` format model:
+* ``paddle.fluid.executor.Executor``
 
-  ``example_input`` and ``example_output`` are required, which accept the following formats:
+  ``example_input`` and ``example_output`` are required, and both accept the following formats:
 
   ``list`` or ``tuple`` with variable(``paddle.static.data``)
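For orientation, a sketch of how ``paddle.static.data`` variables of the kind this hunk mentions are declared (names and shapes are illustrative):

.. code-block:: py
   :force:

   import paddle

   paddle.enable_static()

   # placeholder variables that could serve as example_input / example_output
   x = paddle.static.data(name="x", shape=[1, 10], dtype="float32")
   y = paddle.static.data(name="y", shape=[1, 2], dtype="float32")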
@@ -110,70 +108,21 @@ Converting certain PaddlePaddle models may require setting ``example_input`` or
      # convert to OpenVINO IR format
      ov_model = convert_model(exe, example_input=[x], example_output=[y])
 
+.. important::
+
+   The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
 
 Supported PaddlePaddle Layers
 #############################
 
 For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
 
-Officially Supported PaddlePaddle Models
-########################################
-
-The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
-
-.. list-table::
-   :widths: 20 25 55
-   :header-rows: 1
-
-   * - Model Name
-     - Model Type
-     - Description
-   * - ppocr-det
-     - optical character recognition
-     - Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `READ.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
-   * - ppocr-rec
-     - optical character recognition
-     - Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `READ.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
-   * - ResNet-50
-     - classification
-     - Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
-   * - MobileNet v2
-     - classification
-     - Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
-   * - MobileNet v3
-     - classification
-     - Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
-   * - BiSeNet v2
-     - semantic segmentation
-     - Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
-   * - DeepLab v3 plus
-     - semantic segmentation
-     - Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
-   * - Fast-SCNN
-     - semantic segmentation
-     - Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
-   * - OCRNET
-     - semantic segmentation
-     - Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
-   * - Yolo v3
-     - detection
-     - Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
-   * - ppyolo
-     - detection
-     - Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
-   * - MobileNetv3-SSD
-     - detection
-     - Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.2/deploy/EXPORT_MODEL.md#>`_.
-   * - U-Net
-     - semantic segmentation
-     - Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/docs/model_export.md#>`_.
-   * - BERT
-     - language representation
-     - Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.
 
 Frequently Asked Questions (FAQ)
 ################################
 
-When model conversion API is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
+The model conversion API displays explanatory messages for typographical errors, incorrectly used options, or other issues. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
 
 Additional Resources
 ####################
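As a usage note for the ``important`` block added in this hunk: an ``ov.Model`` saved to IR (see the ``serialize`` call earlier in this guide) can be read back later with ``read_model()``. A minimal sketch, reusing the file names from that example:

.. code-block:: py
   :force:

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("ov_model.xml")
   compiled_model = core.compile_model(model, "AUTO")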
@@ -33,8 +33,8 @@ Following PyTorch model formats are supported:
 
 Converting certain PyTorch models may require model tracing, which needs ``input_shape`` or ``example_input`` parameters to be set.
 
-``example_input`` is used as example input for model tracing.
-``input_shape`` is used for constructing a float zero-filled torch.Tensor for model tracing.
+* ``example_input`` is used as example input for model tracing.
+* ``input_shape`` is used for constructing a float zero-filled ``torch.Tensor`` for model tracing.
 
 Example of using ``example_input``:
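Between hunks, a hedged sketch of both tracing-related parameters in use (the module is a stand-in; the ``openvino.tools.mo`` import path is an assumption):

.. code-block:: py
   :force:

   import torch
   from openvino.tools.mo import convert_model

   model = torch.nn.Linear(10, 2)

   # tracing driven by a concrete example input
   ov_model = convert_model(model, example_input=[torch.zeros(1, 10)])

   # tracing driven by a shape, from which a zero-filled tensor is built
   ov_model = convert_model(model, input_shape=[1, 10])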
@@ -56,6 +56,10 @@ Example of using ``example_input``:
 * ``list`` or ``tuple`` with tensors (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
 * ``dictionary`` where key is the input name, value is the tensor (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
 
+.. important::
+
+   The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
+
 Exporting a PyTorch Model to ONNX Format
 ########################################
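A short sketch of the ``dictionary`` form listed above; it assumes the keys match the module's ``forward`` argument names, which is an assumption here rather than something stated in the diff:

.. code-block:: py
   :force:

   import torch
   from openvino.tools.mo import convert_model

   class Net(torch.nn.Module):
       def forward(self, x):
           return x * 2

   # key "x" mirrors the forward() argument name
   ov_model = convert_model(Net(), example_input={"x": torch.zeros(1, 10)})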
@@ -299,6 +299,10 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro
      checkpoint.restore(save_path)
      ov_model = convert_model(checkpoint)
 
+.. important::
+
+   The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
+
 Supported TensorFlow and TensorFlow 2 Keras Layers
 ##################################################
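For context around ``convert_model(checkpoint)``, a hedged sketch of how such a checkpoint object might be constructed; the Keras model and ``save_path`` are assumptions standing in for the elided part of the example:

.. code-block:: py
   :force:

   import tensorflow as tf
   from openvino.tools.mo import convert_model

   model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(10,))])
   checkpoint = tf.train.Checkpoint(model=model)

   # restore previously saved variables, then convert from memory
   # (uncomment once save_path points at a real checkpoint prefix)
   # checkpoint.restore(save_path)
   ov_model = convert_model(checkpoint)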
@@ -13,7 +13,11 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t
 
    mo --input_model <INPUT_MODEL>.tflite
 
-.. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
+TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
+
+.. important::
+
+   The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
 
 Supported TensorFlow Lite Layers
 ###################################
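A minimal sketch of the direct-read path the paragraph above describes (the ``.tflite`` path is a placeholder):

.. code-block:: py
   :force:

   from openvino.runtime import Core

   core = Core()
   ov_model = core.read_model("model.tflite")
   compiled_model = core.compile_model(ov_model, "AUTO")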
@@ -22,7 +22,7 @@ The following examples are the situations when model cutting is useful or even r
 Model conversion API parameters
 ###############################
 
-Model conversion API provides command line options ``input`` and ``output`` to specify new entry and exit nodes, while ignoring the rest of the model:
+Model conversion API provides ``input`` and ``output`` command-line options to specify new entry and exit nodes, while ignoring the rest of the model:
 
 * ``input`` option accepts a list of layer names of the input model that should be treated as new entry points to the model. See the full list of accepted types for input on :doc:`Model Conversion Python API <openvino_docs_MO_DG_Python_API>` page.
 * ``output`` option accepts a list of layer names of the input model that should be treated as new exit points from the model.
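To illustrate the two options, a hedged sketch of a Python-side cut (the model path and the layer names ``conv1`` / ``relu5`` are illustrative placeholders):

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model

   # treat "conv1" as the new entry point and "relu5" as the new exit point
   ov_model = convert_model("model.onnx", input="conv1", output="relu5")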
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:53f6f30af6d39d91d7f3f4e3bbd086e3dbc94e5ac97233d56a90826579759e7f
-size 104225
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bc27fc105f73d9fb6e1a5436843c090c0748486632084aa27611222b2738a108
-size 187128
+oid sha256:894cc0f49385b304f7129b31e616cfc47dd188e910fca8d726b006bcbb3082f3
+size 252381
@@ -13,8 +13,7 @@ Also, the frontend allows loading TensorFlow models in SavedModel, MetaGraph, an
 The current limitations:
 
 * IRs generated by new TensorFlow Frontend are compatible only with OpenVINO API 2.0
-* There is no full parity between the legacy frontend and the new frontend in MO. Known limitations compared to the legacy approach are: TF1 Control flow, Complex types, models requiring config files and old python extensions. The solution detects unsupported functionalities and provides fallback.
-
-To force the use of the legacy frontend, ``--use_legacy_frontend`` must be specified.
+* There is no full parity between the legacy frontend and the new frontend in MO. Known limitations compared to the legacy approach are: TF1 Control flow, Complex types, models requiring config files, and old Python extensions. The solution detects unsupported functionalities and provides fallback. To force the use of the legacy frontend, ``--use_legacy_frontend`` must be specified.
 
 @endsphinxdirective
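The fallback flag mentioned above is a command-line option; a short sketch of its use (the model path is a placeholder):

.. code-block:: sh

   mo --input_model model.pb --use_legacy_frontend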