[DOCS] release adjustments pass 2 (#19805)

port: https://github.com/openvinotoolkit/openvino/pull/19796
Karol Blaszczak 2023-09-13 13:47:20 +02:00 committed by GitHub
parent 5eff59a2d0
commit 7445f5bea6
9 changed files with 20 additions and 19 deletions


@ -33,6 +33,7 @@ Convert a Model in Python: ``convert_model``
You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to an object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
.. tab-set::
.. tab-item:: Torchvision
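As a minimal sketch of the Torchvision flow (assuming ``torch``, ``torchvision``, and ``openvino`` are installed; the model choice is illustrative, not prescribed by this page):

```python
import torch
import torchvision
import openvino as ov

# load a pretrained model from torchvision (illustrative choice)
model = torchvision.models.resnet50(weights="DEFAULT")

# convert to openvino.Model; example_input guides tracing of the PyTorch graph
ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))

# save to OpenVINO IR (.xml + .bin) for future use
ov.save_model(ov_model, "resnet50.xml")
```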


@ -22,7 +22,7 @@
optimal execution on target devices.
.. note::
This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and the OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation <openvino_docs_model_processing_introduction>` for more details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, you can still refer to this documentation. However, consider checking the :doc:`transition guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you.
To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:


@ -77,17 +77,17 @@ For details on how plugins handle compressed ``FP16`` models, see
- ``extension`` parameter, which makes it possible to convert models consisting of operations that are not supported by OpenVINO out of the box. It requires implementing an OpenVINO extension first; refer to the :doc:`Frontend Extensions <openvino_docs_Extensibility_UG_Frontend_Extensions>` guide.
- ``share_weights`` parameter with default value ``True`` allows reusing memory with the original weights. For models loaded in Python and then passed to ``openvino.convert_model``, this means that the OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (the Python object cannot be deallocated and the original model file cannot be deleted) for the whole lifetime of the OpenVINO model. If this is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``.
.. note:: ``ovc`` does not have a ``share_weights`` option and always uses sharing to reduce conversion time and consume less memory during the conversion.
- ``output_model`` parameter in ``ovc`` and ``openvino.save_model`` specifies the name of the output ``.xml`` file with the resulting OpenVINO IR. The accompanying ``.bin`` file name will be generated automatically by replacing the ``.xml`` extension with ``.bin``. The value of ``output_model`` must end with the ``.xml`` extension. For the ``ovc`` command line tool, ``output_model`` can also contain the name of a directory. In this case, the resulting OpenVINO IR files will be put into that directory, with the base name of the ``.xml`` and ``.bin`` files matching the original model base name passed to ``ovc`` as a parameter. For example, when calling ``ovc your_model.onnx --output_model directory_name``, the files ``directory_name/your_model.xml`` and ``directory_name/your_model.bin`` will be created. If ``output_model`` is not used, the current directory is used as the destination directory.
.. note:: ``openvino.save_model`` does not support a directory for the ``output_model`` parameter value because ``openvino.save_model`` gets an OpenVINO model object represented in memory, and there is no original model file name available for output file name generation. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``.
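The naming rule above can be sketched in plain Python (``resolve_ir_paths`` is a hypothetical helper, not part of the OpenVINO API; it is shown only to make the rule concrete):

```python
from pathlib import Path

def resolve_ir_paths(output_model: str, source_model: str):
    """Sketch of the output-name rule: an .xml path yields a sibling .bin;
    a directory value (ovc only) reuses the source model's base name."""
    out = Path(output_model)
    if out.suffix == ".xml":
        xml = out
    else:
        # ovc treats a non-.xml value as a destination directory
        xml = out / (Path(source_model).stem + ".xml")
    return xml.as_posix(), xml.with_suffix(".bin").as_posix()

print(resolve_ir_paths("directory_name", "your_model.onnx"))
# ('directory_name/your_model.xml', 'directory_name/your_model.bin')
```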
- ``verbose`` parameter activates extra diagnostics printed to the standard output. Use it for debugging purposes when there is an issue with the conversion and to collect information for better bug reporting to the OpenVINO team.
.. note:: Weight sharing does not work equally for all supported model formats. The value of this flag is considered a hint for the conversion API, and actual sharing is used only if it is implemented and possible for a particular model representation.
You can always run ``ovc -h`` or ``ovc --help`` to recall all the supported parameters for ``ovc``.


@ -35,7 +35,7 @@ To convert a PaddlePaddle model, use the ``ovc`` or ``openvino.convert_model`` a
ovc your_model_file.pdmodel
**For example**, this command converts a YOLOv3 PaddlePaddle model to an OpenVINO IR model:
.. tab-set::
@ -44,15 +44,15 @@ To convert a PaddlePaddle model, use the ``ovc`` or ``openvino.convert_model`` a
.. code-block:: py

   import openvino as ov
   ov.convert_model('yolov3.pdmodel')
.. tab-item:: CLI
   :sync: cli

   .. code-block:: sh

      ovc yolov3.pdmodel
Converting PaddlePaddle Python Model
####################################


@ -110,7 +110,7 @@ Non-tensor Data Types
When a non-tensor data type, such as a ``tuple`` or ``dict``, appears in a model input or output, it is flattened. Flattening means that each element of the ``tuple`` is represented as a separate input or output. The same is true for ``dict`` values, where the keys of the ``dict`` are used to form a model input/output name. The original non-tensor input or output is replaced by one or multiple new inputs or outputs resulting from this flattening process. The flattening procedure is applied recursively to nested ``tuples`` and ``dicts`` until the most deeply nested elements are tensors.
For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ... ``e`` are tensors, it means that the original model has two inputs. The first is a tensor ``a``, and the second is a tuple ``(b, c, (d, e))``, containing two tensors ``b`` and ``c`` and a nested tuple ``(d, e)``. Then the resulting OpenVINO model will have signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model.
Flattening of a ``dict`` is supported for outputs only. If your model has an input of type ``dict``, you will need to decompose the ``dict`` to one or multiple tensor inputs by modifying the original model signature or making a wrapper model on top of the original model. This approach hides the dictionary from the model signature and allows it to be processed inside the model successfully.
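The flattening rule can be illustrated with a small, self-contained Python sketch (the naming scheme below is hypothetical; it demonstrates only the recursive expansion, not OpenVINO's actual input names):

```python
def flatten_signature(value, name="input"):
    """Recursively expand tuples (by position) and dicts (by key) into
    a flat list of (name, tensor) pairs; tensors are leaves."""
    if isinstance(value, tuple):
        pairs = []
        for i, item in enumerate(value):
            pairs += flatten_signature(item, f"{name}.{i}")
        return pairs
    if isinstance(value, dict):
        pairs = []
        for key, item in value.items():
            pairs += flatten_signature(item, f"{name}.{key}")
        return pairs
    return [(name, value)]  # leaf: treated as a tensor

# example_input=(a, (b, c, (d, e))) flattens into five tensor inputs
a, b, c, d, e = "a", "b", "c", "d", "e"  # stand-ins for tensors
flat = flatten_signature((a, (b, c, (d, e))))
print(len(flat))  # 5
```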


@ -8,9 +8,9 @@
This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
.. note:: TensorFlow models can be loaded by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API without preparing an OpenVINO IR first. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
.. note:: The examples below that convert TensorFlow models from a file do not require any version of TensorFlow to be installed on the system, except when the ``tensorflow`` module is imported explicitly.
Converting TensorFlow 2 Models
##############################
@ -238,7 +238,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro
model = MyModule(name="simple_module")
ov_model = ov.convert_model(model, input=[-1])
.. note:: There is a known bug in ``openvino.convert_model`` when using ``tf.Variable`` nodes in the model graph. The results of converting such models are unpredictable. It is recommended to save a model with ``tf.Variable`` into the TensorFlow Saved Model format and load it with ``openvino.convert_model``.
* ``tf.compat.v1.Graph``


@ -12,7 +12,7 @@
:hidden:
Install OpenVINO <openvino_docs_install_guides_overview>
Additional Hardware Setup <openvino_docs_install_guides_configurations_header>
Troubleshooting <openvino_docs_get_started_guide_troubleshooting>


@ -58,7 +58,7 @@
| **Build OpenVINO from source**
| OpenVINO Toolkit source files are available on GitHub as open source. If you want to build your own version of OpenVINO for your platform,
follow the `OpenVINO Build Instructions <https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build.md>`__.


@ -60,13 +60,13 @@ which means the compiler stage will require additional time to complete the proc
After installation, you can use OpenVINO in your product by running:
.. code-block:: sh

   find_package(OpenVINO)

.. code-block:: sh

   cmake -B [build directory] -S . -DCMAKE_TOOLCHAIN_FILE=[path to vcpkg]/scripts/buildsystems/vcpkg.cmake
Congratulations! You've just installed OpenVINO! For some use cases you may still
need to install additional components. Check the