DOCS shift to rst - Shape Inference and Preprocessing (#16213)
This commit is contained in:
parent
3a96e06d4c
commit
1268bfdca2
@ -2,7 +2,13 @@

With Model Optimizer you can increase your model's efficiency by providing an additional shape definition, with these two parameters: `--input_shape` and `--static_shape`.

@sphinxdirective

.. _when_to_specify_input_shapes:

@endsphinxdirective

## Specifying --input_shape Command-line Parameter

Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of data is not going to change from one inference request to another,
@ -8,169 +8,212 @@

   troubleshooting_reshape_errors


OpenVINO™ enables you to change model input shape during the application runtime.
It may be useful when you want to feed the model an input that has a different size than the model input shape.
The following instructions are for cases where you need to change the model input shape repeatedly.

.. note::

   If you need to do this only once, prepare a model with updated shapes via
   :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
   For more information, refer to the :ref:`Specifying --input_shape Command-line Parameter <when_to_specify_input_shapes>` article.

The reshape method
++++++++++++++++++++

The reshape method is used as ``ov::Model::reshape`` in C++ and
`Model.reshape <api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape>`__
in Python. The method updates input shapes and propagates them down to the outputs
of the model through all intermediate layers. The code below is an example of how
to set a new batch size with the ``reshape`` method:

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: picture_snippet

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: picture_snippet
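
For orientation only, a minimal Python sketch of the same operation is shown below. It is not the content of the snippets above; the model path and the target shape values are assumptions.

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")  # model path is an assumption

   # For a single-input model, pass the complete new shape; here only the batch
   # dimension is changed and the other values are assumed to match the original model.
   model.reshape([4, 3, 224, 224])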

The diagram below presents the results of using the method, where the size of
model input is changed with an image input:

.. image:: _static/images/original_vs_reshaped_model.svg

When using the ``reshape`` method, you may take one of the approaches:

.. _usage_of_reshape_method:

1. You can pass a new shape to the method in order to change the input shape of
   the model with a single input. See the example of adjusting spatial dimensions to the input image:

   .. tab-set::

      .. tab-item:: C++
         :sync: cpp

         .. doxygensnippet:: docs/snippets/ShapeInference.cpp
            :language: cpp
            :fragment: spatial_reshape

      .. tab-item:: Python
         :sync: py

         .. doxygensnippet:: docs/snippets/ShapeInference.py
            :language: python
            :fragment: simple_spatials_change

   To do the opposite - to resize the input image to match the input shapes of the model,
   use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.

2. You can express a reshape plan, specifying the input by the port, the index, and the tensor name
   (a combined Python sketch of these options follows this list):

   .. tab-set::

      .. tab-item:: Port

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<ov::Output<ov::Node>, ov::PartialShape>`` specifies input by passing actual input port:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [obj_to_shape]

            .. tab-item:: Python
               :sync: py

               ``openvino.runtime.Output`` dictionary key specifies input by passing actual input object.
               Dictionary values representing new shapes could be ``PartialShape``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [obj_to_shape]

      .. tab-item:: Index

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<size_t, ov::PartialShape>`` specifies input by its index:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [idx_to_shape]

            .. tab-item:: Python
               :sync: py

               ``int`` dictionary key specifies input by its index.
               Dictionary values representing new shapes could be ``tuple``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [idx_to_shape]

      .. tab-item:: Tensor Name

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<string, ov::PartialShape>`` specifies input by its name:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [name_to_shape]

            .. tab-item:: Python
               :sync: py

               ``str`` dictionary key specifies input by its name.
               Dictionary values representing new shapes could be ``str``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [name_to_shape]
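
For illustration, a hedged Python sketch of the three addressing options is given below; the model path, the tensor name ``"image"``, and the target shapes are assumptions, not values taken from the snippets above.

.. code-block:: python

   from openvino.runtime import Core, PartialShape

   core = Core()
   model = core.read_model("model.xml")  # model path is an assumption

   # By input index ...
   model.reshape({0: PartialShape([1, 3, 448, 448])})

   # ... by tensor name (the name is an assumption) ...
   model.reshape({"image": [1, 3, 448, 448]})

   # ... or by the input port object itself.
   model.reshape({model.input(0): [1, 3, 448, 448]})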

You can find the usage scenarios of the ``reshape`` method in
:doc:`Hello Reshape SSD Samples <openvino_inference_engine_samples_hello_reshape_ssd_README>`.

.. note::

   In some cases, models may not be ready to be reshaped. Therefore, a new input
   shape cannot be set with either :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
   or the ``reshape`` method.

The set_batch method
++++++++++++++++++++

The meaning of the model batch may vary depending on the model design.
To change the batch dimension of the model, :ref:`set the layout <declare_model_s_layout>` and call the ``set_batch`` method.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: set_batch

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: set_batch

The ``set_batch`` method is a high-level API of the reshape functionality, so all
information about the ``reshape`` method implications is applicable for ``set_batch``
too, including the troubleshooting section.

Once you set the input shape of the model, call the ``compile_model`` method to
get a ``CompiledModel`` object for inference with updated shapes.
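
As a rough Python sketch of that flow (assuming the batch dimension is identified through the layout; the model path, batch size, and device name are assumptions):

.. code-block:: python

   from openvino.runtime import Core, Layout, set_batch

   core = Core()
   model = core.read_model("model.xml")  # model path is an assumption

   # Declare which dimension is the batch, then change it.
   model.get_parameters()[0].set_layout(Layout("N..."))
   set_batch(model, 8)  # assumed new batch size

   # Compile the reshaped model to get a CompiledModel with the updated shapes.
   compiled_model = core.compile_model(model, "CPU")  # device name is an assumption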

There are other approaches to change model input shapes during the stage of
:ref:`IR generation <when_to_specify_input_shapes>` or :ref:`model representation <openvino_docs_OV_UG_Model_Representation>` in OpenVINO Runtime.

.. important::

   Shape-changing functionality could be used to turn dynamic model input into a
   static one and vice versa. Always set static shapes when the shape of data is
   NOT going to change from one inference to another. Setting static shapes can
   avoid memory and runtime overheads for dynamic shapes which may vary depending
   on hardware plugin and model used. For more information, refer to the
   :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`.

Additional Resources
####################

* :doc:`Extensibility documentation <openvino_docs_Extensibility_UG_Intro>` - describes a special mechanism in OpenVINO that allows adding support of shape inference for custom operations.
* `ov::Model::reshape <classov_1_1Model.html#doxid-classov-1-1-model-1aa21aff80598d5089d591888a4c7f33ae>`__ - in OpenVINO Runtime C++ API
* `Model.reshape <api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape>`__ - in OpenVINO Runtime Python API
* :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`
* :doc:`OpenVINO samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`

@endsphinxdirective

@ -10,162 +10,188 @@

   openvino_docs_OV_UG_Layout_Overview
   openvino_docs_OV_UG_Preprocess_Usecase_save


Introduction
####################

When input data does not fit the model input tensor perfectly, additional operations/steps are needed to transform the data to the format expected by the model. These operations are known as "preprocessing".

Example
++++++++++++++++++++

Consider the following standard example: a deep learning model expects input with the ``{1, 3, 224, 224}`` shape, ``FP32`` precision, ``RGB`` color channels order, and it requires data normalization (subtract mean and divide by scale factor). However, there is just a ``640x480 BGR`` image (data is ``{480, 640, 3}``). This means that the following operations must be performed:

* Convert ``U8`` buffer to ``FP32``.
* Transform to ``planar`` format: from ``{1, 480, 640, 3}`` to ``{1, 3, 480, 640}``.
* Resize image from 640x480 to 224x224.
* Make ``BGR->RGB`` conversion as model expects ``RGB``.
* For each pixel, subtract mean values and divide by scale factor.


.. image:: _static/images/preprocess_not_fit.png


Even though it is relatively easy to implement all these steps in the application code manually, before actual inference, it is also possible to do it with the Preprocessing API. Advantages of using the API are:

* Preprocessing API is easy to use.
* Preprocessing steps will be integrated into the execution graph and will be performed on the selected device (CPU/GPU/etc.) rather than always being executed on the CPU. This will improve utilization of the selected device.

Preprocessing API
####################

Intuitively, the Preprocessing API consists of the following parts:

1. **Tensor** - declares the user data format, like shape, :doc:`layout <openvino_docs_OV_UG_Layout_Overview>`, precision, and color format of the actual user data.
2. **Steps** - describes the sequence of preprocessing steps which need to be applied to the user data.
3. **Model** - specifies the model data format. Usually, precision and shape are already known for the model, and only additional information, like :doc:`layout <openvino_docs_OV_UG_Layout_Overview>`, can be specified.

.. note::

   Graph modifications of a model shall be performed after the model is read from a drive and **before** it is loaded on the actual device.

PrePostProcessor Object
+++++++++++++++++++++++

The `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ class allows specifying preprocessing and postprocessing steps for a model read from disk.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:create

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:create
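
A minimal Python sketch of creating the object is shown below (the model path is an assumption); the ``core`` and ``ppp`` objects are reused by the sketches in the following sections.

.. code-block:: python

   from openvino.runtime import Core
   from openvino.preprocess import PrePostProcessor

   core = Core()
   model = core.read_model("model.xml")  # model path is an assumption

   ppp = PrePostProcessor(model)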

Declare User's Data Format
++++++++++++++++++++++++++

To address a particular input of a model/preprocessor, use the ``ov::preprocess::PrePostProcessor::input(input_name)`` method.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:tensor

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:tensor

Below is all the specified input information:

* Precision is ``U8`` (unsigned 8-bit integer).
* Data represents tensor with the ``{1,480,640,3}`` shape.
* :doc:`Layout <openvino_docs_OV_UG_Layout_Overview>` is "NHWC". It means: ``height=480``, ``width=640``, ``channels=3``.
* Color format is ``BGR``.
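
Continuing the ``ppp`` object from the earlier sketch, a hedged Python equivalent of such a declaration could look as follows (the concrete values come from the example in this article):

.. code-block:: python

   from openvino.runtime import Layout, Type
   from openvino.preprocess import ColorFormat

   ppp.input().tensor() \
       .set_element_type(Type.u8) \
       .set_shape([1, 480, 640, 3]) \
       .set_layout(Layout("NHWC")) \
       .set_color_format(ColorFormat.BGR)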

.. _declare_model_s_layout:

Declaring Model Layout
++++++++++++++++++++++

Model input already has information about precision and shape. Preprocessing API is not intended to modify this. The only thing that may be specified is input data :doc:`layout <openvino_docs_OV_UG_Layout_Overview>`.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:model

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:model

Now, if the model input has ``{1,3,224,224}`` shape, preprocessing will be able to identify the ``height=224``, ``width=224``, and ``channels=3`` of that model. The ``height``/``width`` information is necessary for ``resize``, and ``channels`` is needed for mean/scale normalization.
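
Continuing the same ``ppp`` object, a minimal Python sketch (the ``NCHW`` layout is the one used throughout this example):

.. code-block:: python

   from openvino.runtime import Layout

   ppp.input().model().set_layout(Layout("NCHW"))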

Preprocessing Steps
++++++++++++++++++++

Now, the sequence of preprocessing steps can be defined:

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:steps

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:steps

Perform the following (a Python sketch of the same chain follows this list):

1. Convert ``U8`` to ``FP32`` precision.
2. Convert current color format from ``BGR`` to ``RGB``.
3. Resize to the ``height``/``width`` of the model. Be aware that if a model accepts dynamic size, e.g., ``{?, 3, ?, ?}``, ``resize`` will not know how to resize the picture. Therefore, in this case, target ``height``/``width`` should be specified. For more details, see also the `ov::preprocess::PreProcessSteps::resize() <classov_1_1preprocess_1_1PreProcessSteps.html#doxid-classov-1-1preprocess-1-1-pre-process-steps-1a40dab78be1222fee505ed6a13400efe6>`__.
4. Subtract mean from each channel. In this step, color format is already ``RGB``, so ``100.5`` will be subtracted from each ``Red`` component, and ``101.5`` will be subtracted from each ``Blue`` one.
5. Divide each pixel value by the appropriate scale value. In this example, each ``Red`` component will be divided by 50, ``Green`` by 51, and ``Blue`` by 52 respectively.
6. Keep in mind that the last ``convert_layout`` step is commented out as it is not necessary to specify the last layout conversion. The ``PrePostProcessor`` will do such conversion automatically.
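
Expressed as a hedged Python sketch, continuing the ``ppp`` object from the earlier sketches; the ``Green`` mean value is an assumption, as the text above only names the ``Red`` and ``Blue`` ones:

.. code-block:: python

   from openvino.runtime import Type
   from openvino.preprocess import ColorFormat, ResizeAlgorithm

   ppp.input().preprocess() \
       .convert_element_type(Type.f32) \
       .convert_color(ColorFormat.RGB) \
       .resize(ResizeAlgorithm.RESIZE_LINEAR) \
       .mean([100.5, 101.0, 101.5]) \
       .scale([50.0, 51.0, 52.0])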

Integrating Steps into a Model
++++++++++++++++++++++++++++++

Once the preprocessing steps have been finished, the model can finally be built. It is possible to display the ``PrePostProcessor`` configuration for debugging purposes:

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:build

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:build

The ``model`` will accept ``U8`` input with the shape of ``{1, 480, 640, 3}`` and the ``BGR`` channel order. All conversion steps will be integrated into the execution graph. Now, the model can be loaded on the device and the image can be passed to the model without any data manipulation in the application.
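
A hedged Python sketch of this final step, continuing the earlier sketches (printing the ``PrePostProcessor`` object is expected to dump its configuration; the device name is an assumption):

.. code-block:: python

   print(ppp)  # expected to print the configured pre/post-processing chain

   model = ppp.build()  # integrates the declared steps into the model graph
   compiled_model = core.compile_model(model, "CPU")  # device name is an assumption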

Additional Resources
####################

* :doc:`Preprocessing Details <openvino_docs_OV_UG_Preprocessing_Details>`
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
* `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation

@endsphinxdirective

@ -1,84 +1,111 @@
# Use Case - Integrate and Save Preprocessing Steps Into IR {#openvino_docs_OV_UG_Preprocess_Usecase_save}

@sphinxdirective

Previous sections covered the topic of the :doc:`preprocessing steps <openvino_docs_OV_UG_Preprocessing_Details>`
and the overview of :doc:`Layout <openvino_docs_OV_UG_Layout_Overview>` API.

For many applications, it is also important to minimize read/load time of a model.
Therefore, performing integration of preprocessing steps every time on application
startup, after ``ov::runtime::Core::read_model``, may seem inconvenient. In such cases,
once pre and postprocessing steps have been added, it can be useful to store the new execution
model to OpenVINO Intermediate Representation (OpenVINO IR, ``.xml`` format).

Most available preprocessing steps can also be performed via command-line options,
using Model Optimizer. For details on such command-line options, refer to the
:doc:`Optimizing Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`.

Code example - Saving Model with Preprocessing to OpenVINO IR
#############################################################

When some preprocessing steps cannot be integrated into the execution graph using
Model Optimizer command-line options (for example, ``YUV``->``RGB`` color space conversion,
``Resize``, etc.), it is possible to write simple code which:

* Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
* Adds the preprocessing/postprocessing steps.
* Saves the resulting model as IR (``.xml`` and ``.bin``).

Consider the example, where an original ONNX model takes one ``float32`` input with the
``{1, 3, 224, 224}`` shape, the ``RGB`` channel order, and mean/scale values applied.
In contrast, the application provides a ``BGR`` image buffer with a non-fixed size and
input images as batches of two. Below is the model conversion code that can be applied
in the model preparation script for such a case.

* Includes / Imports

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:save_headers

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:save_headers

* Preprocessing & Saving to the OpenVINO IR code.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:save

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:save
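
A condensed Python sketch of such a preparation script is given below. It mirrors the description above, but the file names are assumptions, and the mean/scale normalization values are intentionally left out, since they are model-specific.

.. code-block:: python

   from openvino.runtime import Core, Layout, Type, serialize, set_batch
   from openvino.preprocess import ColorFormat, PrePostProcessor, ResizeAlgorithm

   core = Core()
   model = core.read_model("model.onnx")  # original model path is an assumption

   ppp = PrePostProcessor(model)
   # Application data: U8, BGR, NHWC, non-fixed spatial size.
   ppp.input().tensor() \
       .set_element_type(Type.u8) \
       .set_layout(Layout("NHWC")) \
       .set_color_format(ColorFormat.BGR) \
       .set_spatial_dynamic_shape()
   # Convert to what the model expects: FP32, RGB, resized to the model size.
   ppp.input().preprocess() \
       .convert_element_type(Type.f32) \
       .convert_color(ColorFormat.RGB) \
       .resize(ResizeAlgorithm.RESIZE_LINEAR)
   # .mean(...) / .scale(...) would go here with the model's own normalization values.
   ppp.input().model().set_layout(Layout("NCHW"))
   model = ppp.build()

   set_batch(model, 2)  # the application sends batches of two
   serialize(model, "model_with_preproc.xml", "model_with_preproc.bin")  # output names are assumptions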

Application Code - Load Model to Target Device
##############################################

After this, the application code can load the saved file and no longer needs to apply the preprocessing steps. In this case, enable
:doc:`model caching <openvino_docs_OV_UG_Model_caching_overview>` to minimize load
time when the cached model is available.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
         :language: cpp
         :fragment: ov:preprocess:save_load

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_preprocessing.py
         :language: python
         :fragment: ov:preprocess:save_load
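
A short Python sketch of the application side (the cache directory, file name, and device name are assumptions):

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   core.set_property({"CACHE_DIR": "model_cache"})  # enable model caching; directory is an assumption

   # Preprocessing is already embedded in the saved IR, so no extra steps are needed here.
   model = core.read_model("model_with_preproc.xml")
   compiled_model = core.compile_model(model, "CPU")  # device name is an assumption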

Additional Resources
####################

* :doc:`Preprocessing Details <openvino_docs_OV_UG_Preprocessing_Details>`
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
* :doc:`Model Optimizer - Optimize Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`
* :doc:`Model Caching Overview <openvino_docs_OV_UG_Model_caching_overview>`
* The `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
* The `ov::pass::Serialize <classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize>`__ - pass to serialize model to XML/BIN
* The `ov::set_batch <namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b>`__ - update batch dimension for a given model

@endsphinxdirective