From 6d3726024df514c647b7f5c7ccafe54d84e02070 Mon Sep 17 00:00:00 2001
From: Sebastian Golebiewski
Date: Wed, 23 Aug 2023 18:26:07 +0200
Subject: [PATCH] [DOCS] Updating Supported Model Formats article (#18495)

* supported_model_formats

* add-method

* apply-commits-18214

Applying commits from: https://github.com/openvinotoolkit/openvino/pull/18214

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* review-suggestions

* Update supported_model_formats.md
---
 .../convert_model/supported_model_formats.md  | 536 +++++++++++++++++-
 1 file changed, 519 insertions(+), 17 deletions(-)

diff --git a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
index ce509ad059c..91ad25df461 100644
--- a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
+++ b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
@@ -17,31 +17,533 @@ openvino_docs_MO_DG_prepare_model_convert_model_tutorials
 
 .. meta::
-   :description: In OpenVINO, ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite
-                 models do not require any prior conversion, while MxNet, Caffe and Kaldi do.
+   :description: Learn about supported model formats and the methods used to convert, read and compile them in OpenVINO™.
 
-**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.
+**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features.
 All other model formats presented below will ultimately be converted to :doc:`OpenVINO IR `.
 
-**ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite** - formats supported directly, which means they can be used with
-OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow,
-see how to :doc:`Integrate OpenVINO™ with Your Application `.
+**PyTorch, TensorFlow, ONNX, and PaddlePaddle** models may be used without any prior conversion and can be read by the OpenVINO Runtime API with ``read_model()`` or ``compile_model()``. Additional adjustments to the model can be made with the ``convert_model()`` method, which allows you to set the shapes, types, or layout of model inputs, cut off parts of the model, freeze inputs, and more. Detailed information on the capabilities of ``convert_model()`` can be found in :doc:`this ` article.
 
-**MXNet, Caffe, Kaldi** - legacy formats that need to be converted to OpenVINO IR before running inference.
-The model conversion in some cases may involve intermediate steps. OpenVINO is currently proceeding
-**to deprecate these formats** and **remove their support entirely in the future**.
 
+Below you will find code examples for each method, for all supported model formats.
+
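+To illustrate the input adjustments mentioned above, here is a minimal sketch of overriding the input shape during conversion. The model path and the shape are hypothetical, and the snippet assumes the ``openvino.tools.mo`` Python API:
+
+.. code-block:: py
+   :force:
+
+   import openvino.runtime as ov
+   from openvino.tools.mo import convert_model
+
+   core = ov.Core()
+   # Override the input shape of the model during conversion.
+   ov_model = convert_model("model.onnx", input_shape=[1, 3, 224, 224])
+   compiled_model = core.compile_model(ov_model, "AUTO")
+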
+.. tab-set::
+
+   .. tab-item:: PyTorch
+      :sync: torch
+
+      .. tab-set::
+
+         .. tab-item:: Python
+            :sync: py
+
+            * The ``convert_model()`` method:
+
+              This is the only method applicable to PyTorch models.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Python objects**:
+
+                   * ``torch.nn.Module``
+                   * ``torch.jit.ScriptModule``
+                   * ``torch.jit.ScriptFunction``
+
+              .. code-block:: py
+                 :force:
+
+                 model = torchvision.models.resnet50(pretrained=True)
+                 ov_model = convert_model(model)
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For more details on conversion, refer to the
+              :doc:`guide `
+              and an example `tutorial `__
+              on this topic.
+
+   .. tab-item:: TensorFlow
+      :sync: tf
+
+      .. tab-set::
+
+         .. tab-item:: Python
+            :sync: py
+
+            * The ``convert_model()`` method:
+
+              The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, but they do not offer such capabilities. With an ``ov.Model`` object, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
+                   * Checkpoint - ``<INPUT_MODEL>.pb`` or ``<INPUT_MODEL>.pbtxt``
+                   * MetaGraph - ``<INPUT_MODEL>.meta``
+
+                 * **Python objects**:
+
+                   * ``tf.keras.Model``
+                   * ``tf.keras.layers.Layer``
+                   * ``tf.Module``
+                   * ``tf.compat.v1.Graph``
+                   * ``tf.compat.v1.GraphDef``
+                   * ``tf.function``
+                   * ``tf.compat.v1.Session``
+                   * ``tf.train.Checkpoint``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = convert_model("saved_model.pb")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For more details on conversion, refer to the
+              :doc:`guide `
+              and an example `tutorial `__
+              on this topic.
+
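+              To ground the serialization mentioned above, here is a minimal sketch of saving the converted ``ov_model`` to OpenVINO IR for subsequent use, assuming the ``openvino.runtime.serialize`` helper and hypothetical file names:
+
+              .. code-block:: py
+                 :force:
+
+                 from openvino.runtime import serialize
+
+                 # Save the converted model to OpenVINO IR ("model.xml" / "model.bin"),
+                 # so it can be loaded later with read_model() or compile_model().
+                 serialize(ov_model, "model.xml")
+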
+            * The ``read_model()`` and ``compile_model()`` methods:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
+                   * Checkpoint - ``<INPUT_MODEL>.pb`` or ``<INPUT_MODEL>.pbtxt``
+                   * MetaGraph - ``<INPUT_MODEL>.meta``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = read_model("saved_model.pb")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+              For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations `.
+
+         .. tab-item:: C++
+            :sync: cpp
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
+                   * Checkpoint - ``<INPUT_MODEL>.pb`` or ``<INPUT_MODEL>.pbtxt``
+                   * MetaGraph - ``<INPUT_MODEL>.meta``
+
+              .. code-block:: cpp
+
+                 ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C
+            :sync: c
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
+                   * Checkpoint - ``<INPUT_MODEL>.pb`` or ``<INPUT_MODEL>.pbtxt``
+                   * MetaGraph - ``<INPUT_MODEL>.meta``
+
+              .. code-block:: c
+
+                 ov_compiled_model_t* compiled_model = NULL;
+                 ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: CLI
+            :sync: cli
+
+            You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read with ``read_model()`` and used for inference, as shown in the sketch below.
+
+            .. code-block:: sh
+
+               mo --input_model <INPUT_MODEL>.pb
+
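+            As a follow-up to the conversion command above, here is a minimal sketch of reading the resulting IR and compiling it for inference. The ``model.xml`` name is hypothetical; by default, ``mo`` names the IR after the input model:
+
+            .. code-block:: py
+               :force:
+
+               import openvino.runtime as ov
+
+               core = ov.Core()
+               # Read the IR produced by mo and compile it for inference.
+               ov_model = core.read_model("model.xml")
+               compiled_model = core.compile_model(ov_model, "AUTO")
+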
+            For details on the conversion, refer to the
+            :doc:`article `.
+
+   .. tab-item:: TensorFlow Lite
+      :sync: tflite
+
+      .. tab-set::
+
+         .. tab-item:: Python
+            :sync: py
+
+            * The ``convert_model()`` method:
+
+              The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, but they do not offer such capabilities. With an ``ov.Model`` object, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = convert_model("<INPUT_MODEL>.tflite")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For more details on conversion, refer to the
+              :doc:`guide `
+              and an example `tutorial `__
+              on this topic.
+
+            * The ``read_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = read_model("<INPUT_MODEL>.tflite")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: py
+                 :force:
+
+                 compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C++
+            :sync: cpp
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: cpp
+
+                 ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C
+            :sync: c
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: c
+
+                 ov_compiled_model_t* compiled_model = NULL;
+                 ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: CLI
+            :sync: cli
+
+            * The ``convert_model()`` method:
+
+              You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read with ``read_model()`` and used for inference.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.tflite``
+
+              .. code-block:: sh
+
+                 mo --input_model <INPUT_MODEL>.tflite
+
+              For details on the conversion, refer to the
+              :doc:`article `.
+
+   .. tab-item:: ONNX
+      :sync: onnx
+
+      .. tab-set::
+
+         .. tab-item:: Python
+            :sync: py
+
+            * The ``convert_model()`` method:
+
+              The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, but they do not offer such capabilities. With an ``ov.Model`` object, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = convert_model("<INPUT_MODEL>.onnx")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For more details on conversion, refer to the
+              :doc:`guide `
+              and an example `tutorial `__
+              on this topic.
+
+            * The ``read_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = read_model("<INPUT_MODEL>.onnx")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: py
+                 :force:
+
+                 compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO")
+
+              For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C++
+            :sync: cpp
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: cpp
+
+                 ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO");
+
+              For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C
+            :sync: c
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: c
+
+                 ov_compiled_model_t* compiled_model = NULL;
+                 ov_core_compile_model_from_file(core, "<INPUT_MODEL>.onnx", "AUTO", 0, &compiled_model);
+
+              For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: CLI
+            :sync: cli
+
+            * The ``convert_model()`` method:
+
+              You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read with ``read_model()`` and used for inference.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.onnx``
+
+              .. code-block:: sh
+
+                 mo --input_model <INPUT_MODEL>.onnx
+
+              For details on the conversion, refer to the
+              :doc:`article `.
+
+   .. tab-item:: PaddlePaddle
+      :sync: pdpd
+
+      .. tab-set::
+
+         .. tab-item:: Python
+            :sync: py
+
+            * The ``convert_model()`` method:
+
+              The ``convert_model()`` method gives you more control, as it lets you specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, but they do not offer such capabilities. With an ``ov.Model`` object, you can choose to optimize it, compile it and run inference, or serialize it into a file for subsequent use.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+                 * **Python objects**:
+
+                   * ``paddle.hapi.model.Model``
+                   * ``paddle.fluid.dygraph.layers.Layer``
+                   * ``paddle.fluid.executor.Executor``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = convert_model("<INPUT_MODEL>.pdmodel")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+              For more details on conversion, refer to the
+              :doc:`guide `
+              and an example `tutorial `__
+              on this topic.
+
+            * The ``read_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+              .. code-block:: py
+                 :force:
+
+                 ov_model = read_model("<INPUT_MODEL>.pdmodel")
+                 compiled_model = core.compile_model(ov_model, "AUTO")
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+              .. code-block:: py
+                 :force:
+
+                 compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
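+              For illustration, here is a minimal sketch of running inference with the compiled model. The input array is a placeholder and must match the model's actual input shape:
+
+              .. code-block:: py
+                 :force:
+
+                 import numpy as np
+
+                 # Dummy input tensor; replace it with real, preprocessed data.
+                 input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)
+                 results = compiled_model([input_data])
+                 output = results[compiled_model.output(0)]
+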
+         .. tab-item:: C++
+            :sync: cpp
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+              .. code-block:: cpp
+
+                 ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: C
+            :sync: c
+
+            * The ``compile_model()`` method:
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+              .. code-block:: c
+
+                 ov_compiled_model_t* compiled_model = NULL;
+                 ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);
+
+              For a guide on how to run inference, see how to
+              :doc:`Integrate OpenVINO™ with Your Application `.
+
+         .. tab-item:: CLI
+            :sync: cli
+
+            * The ``convert_model()`` method:
+
+              You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read with ``read_model()`` and used for inference.
+
+              .. dropdown:: List of supported formats:
+
+                 * **Files**:
+
+                   * ``<INPUT_MODEL>.pdmodel``
+
+              .. code-block:: sh
+
+                 mo --input_model <INPUT_MODEL>.pdmodel
+
+              For details on the conversion, refer to the
+              :doc:`article `.
+
+
+**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted to OpenVINO IR before running inference. In some cases, the conversion may involve intermediate steps. For more details, refer to the :doc:`MXNet `, :doc:`Caffe `, and :doc:`Kaldi ` conversion guides.
+
+OpenVINO is in the process of **deprecating these formats** and will **remove their support entirely in the future**. Converting these formats to ONNX, or using an LTS version of OpenVINO, might be a viable solution for inference in OpenVINO Toolkit.
+
+.. note::
+
+   To convert models, :doc:`install OpenVINO™ Development Tools `,
+   which include the model conversion API.
+
 Refer to the following articles for details on conversion for different formats and models:
 
-* :doc:`How to convert ONNX `
-* :doc:`How to convert PaddlePaddle `
-* :doc:`How to convert TensorFlow `
-* :doc:`How to convert TensorFlow Lite `
-* :doc:`How to convert MXNet `
-* :doc:`How to convert Caffe `
-* :doc:`How to convert Kaldi `
-
 * :doc:`Conversion examples for specific models `
+* :doc:`Model preparation methods `
 
 @endsphinxdirective