[DOCS] adjustment to supported devices
adjustments will continue in following PRs
This commit is contained in:
parent
b0d917f0cb
commit
9715ccd992
@@ -16,14 +16,14 @@
omz_tools_downloader
- Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, `Torchvision models <https://pytorch.org/hub/>`__.
+ Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
Import a model using ``read_model()``
#################################################
Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow, and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate model conversion step, that is, ``mo.convert_model``.
- The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework file :doc:`formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
+ The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework :doc:`file formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
You can also convert a model from the original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using the ``convert_model()`` method. More details about ``convert_model()`` are provided in the :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
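The "read as-is vs. converted internally" rule above can be sketched as follows. The helper function is a hypothetical illustration (not part of the OpenVINO API), and the actual ``read_model()`` / ``compile_model()`` calls are shown in comments; the file names and the ``"CPU"`` device are assumptions.

```python
# Minimal sketch of the read_model() behavior described above.
from pathlib import Path

def is_converted_internally(model_path: str) -> bool:
    """True if read_model() would run internal conversion for this file.
    OpenVINO IR (.xml topology + .bin weights) is read "as-is"; other
    supported formats (.onnx, .pdmodel, .pb, .tflite) are converted."""
    return Path(model_path).suffix.lower() != ".xml"

# With the openvino package installed, the call itself would look like:
#
#   import openvino.runtime as ov
#   core = ov.Core()
#   model = core.read_model("model.onnx")        # internal conversion to ov.Model
#   compiled = core.compile_model(model, "CPU")  # device name is an assumption
#
print(is_converted_internally("model.onnx"))  # True  - converted internally
print(is_converted_internally("model.xml"))   # False - OpenVINO IR, read as-is
```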
@@ -17,14 +17,14 @@
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
.. meta::
- :description: Learn about supported model formats and the methods used to convert, read and compile them in OpenVINO™.
+ :description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.
- **OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other model formats presented below will ultimately be converted to :doc:`OpenVINO IR <openvino_ir>`.
+ **OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <openvino_ir>` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
- **PyTorch, TensorFlow, ONNX, and PaddlePaddle** may be used without any prior conversion and can be read by OpenVINO Runtime API by the use of ``read_model()`` or ``compile_model()``. Additional adjustment for the model can be performed using the ``convert_model()`` method, which allows you to set shapes, types or the layout of model inputs, cut parts of the model, freeze inputs etc. The detailed information of capabilities of ``convert_model()`` can be found in :doc:`this <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` article.
+ **PyTorch, TensorFlow, ONNX, and PaddlePaddle** can be used with the OpenVINO Runtime API via the ``read_model()`` and ``compile_model()`` methods, resulting in under-the-hood conversion to OpenVINO IR. To perform additional adjustments to the model, you can use the ``convert_model()`` method, which allows you to set shapes, change model input types or layouts, cut parts of the model, freeze inputs, etc. A detailed description of ``convert_model()`` capabilities can be found in the :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
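The ``convert_model()`` adjustments mentioned above can be sketched like this. The Python call is kept in comments (2023-era ``openvino.tools.mo`` API; treat the exact signature, file name, and shape as assumptions to verify against the model conversion guide), while the runnable part just maps each adjustment onto its Model Optimizer CLI flag.

```python
# Hedged sketch: convert_model() keyword adjustments vs. their MO CLI flags.
#
#   from openvino.tools.mo import convert_model
#   ov_model = convert_model(
#       "model.onnx",                   # path is illustrative
#       input_shape=[1, 3, 224, 224],   # pin a static input shape
#   )
#
# Each adjustment has a Model Optimizer CLI counterpart:
mo_cli_flags = {
    "input_shape": "--input_shape [1,3,224,224]",        # set input shapes
    "input": "--input",                                  # select/cut model inputs
    "freeze_placeholder_with_value": "--freeze_placeholder_with_value",
}
for param, flag in sorted(mo_cli_flags.items()):
    print(f"{param:32s} -> {flag}")
```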
- Below you will find code examples for each method, for all supported model formats.
+ Here are code examples for each method, for all supported model formats.
.. tab-set::
@@ -531,19 +531,21 @@ Below you will find code examples for each method, for all supported model forma
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`.
- **MXNet, Caffe, and Kaldi** are legacy formats that need to be converted to OpenVINO IR before running inference. The model conversion in some cases may involve intermediate steps. For more details, refer to the :doc:`MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`, :doc:`Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`, :doc:`Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>` conversion guides.
+ **MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference.
+ As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**,
+ converting them to ONNX for use with OpenVINO should be considered the default path.
OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**. Converting these formats to ONNX or using an LTS version might be a viable solution for inference in the OpenVINO Toolkit.
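The legacy-format path described above can be summarized in a short sketch. The MO commands in the comments are the classic CLI shapes; the file names are assumptions, and the runnable part only encodes which formats need the explicit conversion step.

```python
# Illustrative summary: which formats need an explicit MO conversion step.
#
#   mo --input_model model.caffemodel --input_proto deploy.prototxt   # Caffe
#   mo --input_model model-0000.params                                # MXNet
#   mo --input_model final.mdl                                        # Kaldi
#
# whereas ONNX and OpenVINO IR are consumed directly by read_model():
needs_explicit_conversion = {
    "mxnet": True, "caffe": True, "kaldi": True,
    "onnx": False, "openvino_ir": False,
}
legacy = sorted(f for f, v in needs_explicit_conversion.items() if v)
print(legacy)  # ['caffe', 'kaldi', 'mxnet']
```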
.. note::
To convert models, :doc:`install OpenVINO™ Development Tools <openvino_docs_install_guides_install_dev_tools>`,
which include the model conversion API.
If you want to keep working with the legacy formats the old way, refer to a previous
`OpenVINO LTS version and its documentation <https://docs.openvino.ai/2022.3/Supported_Model_Formats.html>`__.
OpenVINO versions of 2023 are mostly compatible with the old instructions,
through the deprecated MO tool, installed with the deprecated OpenVINO Development Tools package.
`OpenVINO 2023.0 <https://docs.openvino.ai/2023.0/Supported_Model_Formats.html>`__ is the last
release officially supporting the MO conversion process for the legacy formats.
Refer to the following articles for details on conversion for different formats and models:
* :doc:`Conversion examples for specific models <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`
* :doc:`Model preparation methods <openvino_docs_model_processing_introduction>`
@endsphinxdirective