diff --git a/docs/_static/images/BASIC_FLOW_IE_C.svg b/docs/_static/images/BASIC_FLOW_IE_C.svg
index af2d88040ad..65bb26020df 100644
--- a/docs/_static/images/BASIC_FLOW_IE_C.svg
+++ b/docs/_static/images/BASIC_FLOW_IE_C.svg
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:18bc08f90f844c09594cfa538f4ba2205ea2e67c849927490c01923e394ed11a
-size 71578
+oid sha256:63301a7c31b6660fbdb55fb733e20af6a172c0512455f5de8c6be5e1a5b3ed0b
+size 71728
diff --git a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/convert_python_model_objects.md b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/convert_python_model_objects.md
index 7208052bdc2..c9b9e2d276f 100644
--- a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/convert_python_model_objects.md
+++ b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/convert_python_model_objects.md
@@ -20,7 +20,7 @@ Example of converting a PyTorch model directly from memory:
    import torchvision
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    ov_model = convert_model(model)
 The following types are supported as an input model for ``convert_model()``:
diff --git a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats.md b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats.md
index 068ba7fca16..fc0bf685c7f 100644
--- a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats.md
+++ b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats.md
@@ -58,7 +58,7 @@ Here are code examples of how to use these methods with different model formats:
 .. code-block:: py
    :force:
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    ov_model = convert_model(model)
    compiled_model = core.compile_model(ov_model, "AUTO")
diff --git a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_PyTorch.md b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_PyTorch.md
index 0cafd306653..3215573de5e 100644
--- a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_PyTorch.md
+++ b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_PyTorch.md
@@ -26,7 +26,7 @@ To convert a PyTorch model to the OpenVINO IR format, use the OVC API (supersedi
    import torch
    from openvino.tools.mo import convert_model
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    ov_model = convert_model(model)
 Following PyTorch model formats are supported:
@@ -45,7 +45,7 @@ parameter to be set, for example:
    import torch
    from openvino.tools.mo import convert_model
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    ov_model = convert_model(model, example_input=torch.randn(1, 3, 100, 100))
 ``example_input`` accepts the following formats:
@@ -70,7 +70,7 @@ Exporting a PyTorch Model to ONNX Format
 It is also possible to export a PyTorch model to ONNX and then convert it to OpenVINO IR. To convert and deploy a PyTorch model this way, follow these steps:
 1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__.
-2. :doc:`Convert the ONNX model ` to produce an optimized :doc:`Intermediate Representation ` of the model based on the trained network topology, weights, and biases values.
+2. :doc:`Convert an ONNX model ` to produce an optimized :doc:`Intermediate Representation ` of the model based on the trained network topology, weights, and biases values.
 PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to evaluate or test the model is usually provided with its code and can be used for its initialization and export.
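The export step described above can be sketched as follows; the torchvision model, the input shape, and the output file name are illustrative assumptions rather than values taken from the documented example:

.. code-block:: py

   import torch
   import torchvision

   # any torch.nn.Module works here; resnet50 is used only as an example
   model = torchvision.models.resnet50(weights='DEFAULT')
   model.eval()

   # a dummy tensor with the expected input shape drives the export tracing;
   # the output path 'resnet50.onnx' is a placeholder
   dummy_input = torch.randn(1, 3, 224, 224)
   torch.onnx.export(model, dummy_input, 'resnet50.onnx')

The resulting ``.onnx`` file can then be passed to ``convert_model()`` as described in the ONNX conversion guide.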
diff --git a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_TensorFlow.md b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_TensorFlow.md
index 1d34263e65e..5edea16a92f 100644
--- a/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_TensorFlow.md
+++ b/docs/articles_en/documentation/openvino_legacy_features/mo_ovc_transition/legacy_conversion_api/supported_model_formats/Convert_Model_From_TensorFlow.md
@@ -17,7 +17,7 @@ Converting TensorFlow 1 Models
 Converting Frozen Model Format
 +++++++++++++++++++++++++++++++
-To convert a TensorFlow model, use the ``*mo*`` script to simply convert a model with a path to the input model ``*.pb*`` file:
+To convert a TensorFlow model, run the ``mo`` script with a path to the input model *.pb* file:
 .. code-block:: sh
@@ -30,7 +30,7 @@ Converting Non-Frozen Model Formats
 There are three ways to store non-frozen TensorFlow models and convert them by model conversion API:
 1. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
-If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section.
+If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#freezing-custom-models-in-python>`__ section.
 To convert the model with the inference graph in ``.pb`` format, run the `mo` script with a path to the checkpoint file:
 .. code-block:: sh
@@ -139,7 +139,7 @@ It is essential to freeze the model before pruning. Use the following code snipp
 Keras H5
 ++++++++
-If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the
+If you have a model in HDF5 format, load the model using TensorFlow 2 and serialize it to
 SavedModel format. Here is an example of how to do it:
 .. code-block:: py
diff --git a/docs/articles_en/openvino_workflow/model_introduction.md b/docs/articles_en/openvino_workflow/model_introduction.md
index cadd407ba0b..c11beb17e67 100644
--- a/docs/articles_en/openvino_workflow/model_introduction.md
+++ b/docs/articles_en/openvino_workflow/model_introduction.md
@@ -22,17 +22,17 @@ and run a pre-trained network from an online database, such as
 or `Torchvision models `__. If your selected model is in one of the :doc:`OpenVINO™ supported model formats `,
-you can use it directly, without the need to save as the OpenVINO IR.
+you can use it directly, without the need to save as OpenVINO IR (`openvino.Model `__ -
-`ov.Model `__).
+`ov.Model `__).
 For this purpose, you can use ``openvino.Core.read_model`` and ``openvino.Core.compile_model`` methods, so that conversion is performed automatically before inference, for
-maximum convenience (note that working with PyTorch differs slightly, the Python API
-being the only option, while TensorFlow may present additional considerations
-:doc:`TensorFlow Frontend Capabilities and Limitations `).
+maximum convenience. Note that for PyTorch models, the Python API
+is the only conversion option. TensorFlow may present additional considerations;
+see :doc:`TensorFlow Frontend Capabilities and Limitations `.
-For better performance and more optimization options, OpenVINO offers a conversion
+For better performance and more optimization options, OpenVINO also offers a conversion
 API with two possible approaches: the Python API functions (``openvino.convert_model``
 and ``openvino.save_model``) and the ``ovc`` command line tool, which are described in detail
 in this article.
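The ``openvino.Core.read_model`` and ``openvino.Core.compile_model`` path mentioned above can be sketched as follows; the ``model.onnx`` file name and the ``AUTO`` device are placeholder assumptions:

.. code-block:: py

   import openvino as ov

   core = ov.Core()

   # read a model in any supported format; conversion to openvino.Model happens on the fly
   model = core.read_model("model.onnx")        # placeholder path
   compiled_model = core.compile_model(model, "AUTO")

With this approach no OpenVINO IR files are written to disk; for repeated use, the explicit ``openvino.convert_model`` and ``openvino.save_model`` workflow described below avoids reconverting the model on every run.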
@@ -50,7 +50,7 @@ and ``openvino.save_model``) and the ``ovc`` command line tool, which are descri
 Convert a Model in Python: ``convert_model``
 ##############################################
-You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using``openvino.save_model`` for future use. Below, there are examples of how to use the ``openvino.convert_model`` with models from popular public repositories:
+You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be compiled with ``openvino.compile_model`` and inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
 .. tab-set::
@@ -64,7 +64,7 @@ You can use the Model conversion API in Python with the ``openvino.convert_model
    import torch
    from torchvision.models import resnet50
-   model = resnet50(pretrained=True)
+   model = resnet50(weights='DEFAULT')
    # prepare input_data
    input_data = torch.rand(1, 3, 224, 224)
@@ -81,7 +81,7 @@ You can use the Model conversion API in Python with the ``openvino.convert_model
    # compile model
    compiled_model = ov.compile_model(ov_model)
-   # run the inference
+   # run inference
    result = compiled_model(input_data)
 .. tab-item:: Hugging Face Transformers
diff --git a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats.md b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats.md
index 903199e1547..6ff5e620f10 100644
--- a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats.md
+++ b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats.md
@@ -21,7 +21,7 @@
 * :doc:`How to convert TensorFlow Lite `
 * :doc:`How to convert PaddlePaddle `
-To choose the best workflow for your application, read :doc:`Introduction to Model Preparation`
+To choose the best workflow for your application, read the :doc:`Model Preparation section `
 Refer to the list of all supported conversion options in :doc:`Conversion Parameters `
diff --git a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_PyTorch.md b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_PyTorch.md
index 6fcd6d7c03a..f6e6986c5d9 100644
--- a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_PyTorch.md
+++ b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_PyTorch.md
@@ -18,7 +18,7 @@ Here is the simplest example of PyTorch model conversion using a model from ``to
    import torch
    import openvino as ov
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    ov_model = ov.convert_model(model)
 ``openvino.convert_model`` function supports the following PyTorch model object types:
@@ -27,9 +27,9 @@ Here is the simplest example of PyTorch model conversion using a model from ``to
 * ``torch.jit.ScriptModule``
 * ``torch.jit.ScriptFunction``
-When passing a ``torch.nn.Module`` derived class object as an input model, converting PyTorch models often requires the ``example_input`` parameter to be specified in the ``openvino.convert_model`` function call. Internally it triggers the model tracing during the model conversion process, using the capabilities of the ``torch.jit.trace`` function.
+When using ``torch.nn.Module`` as an input model, ``openvino.convert_model`` often requires the ``example_input`` parameter to be specified. Internally, it triggers the model tracing during the model conversion process, using the capabilities of the ``torch.jit.trace`` function.
-The use of ``example_input`` can lead to a better quality of the resulting OpenVINO model in terms of correctness and performance compared to converting the same original model without specifying ``example_input``. While the necessity of ``example_input`` depends on the implementation details of a specific PyTorch model, it is recommended to always set the ``example_input`` parameter when it is available.
+The use of ``example_input`` can lead to a better quality OpenVINO model in terms of correctness and performance compared to converting the same original model without specifying ``example_input``. While the necessity of ``example_input`` depends on the implementation details of a specific PyTorch model, it is recommended to always set the ``example_input`` parameter when it is available.
 The value for the ``example_input`` parameter can be easily derived from knowing the input tensor's element type and shape. While it may not be suitable for all cases, random numbers can frequently serve this purpose effectively:
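A minimal sketch of that random-tensor approach is shown below; the ``1, 3, 224, 224`` shape and the torchvision model are assumptions for a typical image classification case:

.. code-block:: py

   import torch
   import torchvision
   import openvino as ov

   model = torchvision.models.resnet50(weights='DEFAULT')

   # a random tensor with the expected element type and shape is enough to drive tracing
   example = torch.rand(1, 3, 224, 224)
   ov_model = ov.convert_model(model, example_input=example)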
@@ -131,7 +131,7 @@ Exporting a PyTorch Model to ONNX Format
 An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with ``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to OpenVINO Model with ``openvino.convert_model``. It can be considered as a backup solution if a model cannot be converted directly from PyTorch to OpenVINO as described in the above chapters. Converting through ONNX can be more expensive in terms of code, conversion time, and allocated memory.
 1. Refer to the `Exporting PyTorch models to ONNX format `__ guide to learn how to export models from PyTorch to ONNX.
-2. Follow :doc:`Convert the ONNX model ` chapter to produce OpenVINO model.
+2. Follow the :doc:`Convert an ONNX model ` chapter to produce an OpenVINO model.
 Here is an illustration of using these two steps together:
@@ -142,7 +142,7 @@ Here is an illustration of using these two steps together:
    import torch
    import openvino as ov
-   model = torchvision.models.resnet50(pretrained=True)
+   model = torchvision.models.resnet50(weights='DEFAULT')
    # 1. Export to ONNX
    torch.onnx.export(model, (torch.rand(1, 3, 224, 224), ), 'model.onnx')
    # 2. Convert to OpenVINO
diff --git a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow.md b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow.md
index 5a7a3ab3a7c..bec51f537cd 100644
--- a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow.md
+++ b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow.md
@@ -45,7 +45,7 @@ To convert a model, run conversion with the directory as the model argument:
 Keras H5 Format
 +++++++++++++++
-If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the
+If you have a model in HDF5 format, load the model using TensorFlow 2 and serialize it to
 SavedModel format. Here is an example of how to do it:
 .. code-block:: py
diff --git a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow_Lite.md b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow_Lite.md
index e25795c95a4..642615f0173 100644
--- a/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow_Lite.md
+++ b/docs/articles_en/openvino_workflow/model_introduction/supported_model_formats/Convert_Model_From_TensorFlow_Lite.md
@@ -7,7 +7,7 @@ TensorFlow Lite format to the OpenVINO Model.
-To convert an ONNX model, run model conversion with the path to the ``.tflite`` model file:
+To convert a TensorFlow Lite model, run model conversion with the path to the ``.tflite`` model file:
 .. tab-set::
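In Python, the conversion described above amounts to passing the ``.tflite`` path to ``openvino.convert_model``; the ``model.tflite`` file name here is a placeholder:

.. code-block:: py

   import openvino as ov

   # convert_model accepts a path to a .tflite file directly
   ov_model = ov.convert_model('model.tflite')
   compiled_model = ov.compile_model(ov_model)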