Reshape documentation (#10901)
* Reshape documentation * Converting Model: reshape mentioned, Supported Devices: no shape inference mentioning * demos removed
This commit is contained in:
parent
bdc89b1571
commit
2f0620600f
@ -1,19 +1,19 @@
# Setting Input Shapes {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
Model Optimizer provides the option of making models more efficient by specifying additional shape information.
It is achieved with two parameters: `--input_shape` and `--static_shape`, used under certain conditions.
The paragraphs below provide details on specifying input shapes for model conversion.

## When to Specify --input_shape Command-line Parameter <a name="when_to_specify_input_shapes"></a>
Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of the inference data is not going to change from one inference request to another,
it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs,
as it can be beneficial from both a performance and a memory consumption perspective.
To set up static shapes, Model Optimizer provides the `--input_shape` parameter.
This is an offline approach to setting static shapes, and it can save time and memory on runtime shape changes.
To learn more about changing shapes in runtime, see the dedicated article about the [reshape feature](../../../OV_Runtime_UG/ShapeInference.md).
For more information about dynamic shapes in runtime, refer to [Dynamic Shapes](../../../OV_Runtime_UG/ov_dynamic_shapes.md).

The OpenVINO Runtime API may have limitations when inferring models with undefined dimensions on some hardware.
In this case, the `--input_shape` parameter and the [reshape method](../../../OV_Runtime_UG/ShapeInference.md) can help to resolve undefined dimensions.

Sometimes, Model Optimizer is unable to convert models out-of-the-box (when only the `--input_model` parameter is specified).
Such problems can relate to models with inputs of undefined ranks and to cases of cutting off parts of a model.

@ -58,10 +58,14 @@ For example, launch the Model Optimizer for the ONNX* OCR model and specify a bo
mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1..3]
```

In practice, some models are not ready for input shape changes.
In this case, a new input shape cannot be set via Model Optimizer.
Learn more about shape inference <a href="_docs_OV_Runtime_UG_ShapeInference.html#troubleshooting">troubleshooting</a> and ways to <a href="_docs_OV_Runtime_UG_ShapeInference.html#how-to-fix-non-reshape-able-model">relax the shape inference flow</a>.

## When to Specify --static_shape Command-line Parameter
Model Optimizer provides the `--static_shape` parameter that allows evaluating shapes of all operations in the model for fixed input shapes
and folding shape-computing sub-graphs into constants. The resulting IR can be more compact in size and its loading time can be decreased.
However, the resulting IR will not be reshape-able with the help of the [reshape method](../../../OV_Runtime_UG/ShapeInference.md) from OpenVINO Runtime API.
It is worth noting that the `--input_shape` parameter does not affect the reshape-ability of the model.

For example, launch the Model Optimizer for the ONNX* OCR model using `--static_shape`.

@ -8,85 +8,79 @@
<div id="switcher-cpp" class="switcher-anchor">C++</div>
@endsphinxdirective

OpenVINO™ provides capabilities to change model input shape during runtime.
This may be useful when you would like to feed the model an input that has a different size than the model input shape.
If you need to do this only once, <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when-to-specify-input-shapes">prepare a model with updated shapes via Model Optimizer</a>; for all other cases, follow the instructions below.

### Set a new input shape with the reshape method

|
||||
The `ov::Model::reshape` method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.

Example: Changing the batch size and spatial dimensions of input of a model with an image input:
|
||||
|
||||
> **NOTES**:
|
||||
> - Starting with the 2021.1 release, the Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases.
|
||||
> - Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.<br>
|
||||
> - If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin.
|
||||

|
||||
|
||||
### Set a new batch dimension value with the setBatchSize() method
|
||||
Please see the code to achieve that:
|
||||
|
||||
The meaning of a model batch may vary depending on the model design.
|
||||
This method does not deduce batch placement for inputs from the model architecture.
|
||||
It assumes that the batch is placed at the zero index in the shape for all inputs and uses the `InferenceEngine::CNNNetwork::reshape` method to propagate updated shapes through the model.
|
||||
@snippet snippets/ShapeInference.cpp picture_snippet
|
||||
|
||||
The method transforms the model before a new shape propagation to relax a hard-coded batch dimension in the model, if any.
|
||||
### Set a new batch size with the set_batch method
|
||||
|
||||
Use `InferenceEngine::CNNNetwork::reshape` instead of `InferenceEngine::CNNNetwork::setBatchSize` to set new input shapes for the model if the model has one of the following:
|
||||
The meaning of the model batch may vary depending on the model design.
In order to change the batch dimension of the model, <a href="_docs_OV_Runtime_UG_preprocessing_overview.html#declare-model-s-layout">set the ov::Layout</a> and call the `ov::set_batch` method.
|
||||
|
||||
* Multiple inputs with different zero-index dimension meanings
|
||||
* Input without a batch dimension
|
||||
* 0D, 1D, or 3D shape
|
||||
@snippet snippets/ShapeInference.cpp set_batch
|
||||
|
||||
The `InferenceEngine::CNNNetwork::setBatchSize` method is a high-level API method that wraps the `InferenceEngine::CNNNetwork::reshape` method call and works for trivial models from the batch placement standpoint.
|
||||
Use `InferenceEngine::CNNNetwork::reshape` for other models.
|
||||
The `ov::set_batch` method is a high-level API on top of the `ov::Model::reshape` functionality, so all information about the `ov::Model::reshape` method implications also applies to `ov::set_batch`, including the troubleshooting section.
|
||||
|
||||
Using the `InferenceEngine::CNNNetwork::setBatchSize` method for models with a non-zero index batch placement or for models with inputs that do not have a batch dimension may lead to undefined behaviour.
|
||||
Once the input shape of `ov::Model` is set, call the `ov::Core::compile_model` method to get an `ov::CompiledModel` object for inference with updated shapes.
|
||||
|
||||
You can change input shapes multiple times using the `InferenceEngine::CNNNetwork::reshape` and `InferenceEngine::CNNNetwork::setBatchSize` methods in any order.
|
||||
If a model has a hard-coded batch dimension, use `InferenceEngine::CNNNetwork::setBatchSize` first to change the batch, then call `InferenceEngine::CNNNetwork::reshape` to update other dimensions, if needed.
|
||||
There are other approaches to change model input shapes during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when-to-specify-input-shapes">IR generation</a> or [ov::Model creation](../OV_Runtime_UG/model_representation.md).
|
||||
|
||||
Inference Engine takes three kinds of a model description as an input, which are converted into an `InferenceEngine::CNNNetwork` object:
|
||||
1. [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) through `InferenceEngine::Core::ReadNetwork`
|
||||
2. [OpenVINO Model](../OV_Runtime_UG/model_representation.md) through the constructor of `InferenceEngine::CNNNetwork`
|
||||
### Dynamic Shape Notice
|
||||
|
||||
`InferenceEngine::CNNNetwork` keeps an `ngraph::Function` object with the model description internally.
|
||||
The object should have fully-defined input shapes to be successfully loaded to Inference Engine plugins.
|
||||
To resolve undefined input dimensions of a model, call the `CNNNetwork::reshape` method to provide new input shapes before loading to the Inference Engine plugin.
|
||||
|
||||
Run the following code right after `InferenceEngine::CNNNetwork` creation to explicitly check for model input names and shapes:
|
||||
|
||||
```cpp
|
||||
CNNNetwork network = ... // read IR / ONNX model or create from nGraph::Function explicitly
|
||||
const auto parameters = network.getFunction()->get_parameters();
|
||||
for (const auto & parameter : parameters) {
|
||||
std::cout << "name: " << parameter->get_friendly_name() << " shape: " << parameter->get_partial_shape() << std::endl;
|
||||
if (parameter->get_partial_shape().is_dynamic())
|
||||
std::cout << "ATTENTION: Input shape is not fully defined. Use the CNNNetwork::reshape method to resolve it." << std::endl;
|
||||
}
|
||||
```
|
||||
|
||||
To feed input data of a shape that is different from the model input shape, reshape the model first.
|
||||
|
||||
Once the input shape of `InferenceEngine::CNNNetwork` is set, call the `InferenceEngine::Core::LoadNetwork` method to get an `InferenceEngine::ExecutableNetwork` object for inference with updated shapes.
|
||||
|
||||
There are other approaches to reshape the model during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when_to_specify_input_shapes">IR generation</a> or [ov::Model creation](../OV_Runtime_UG/model_representation.md).
|
||||
|
||||
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `InferenceEngine::CNNNetwork::reshape` method.
|
||||
Shape-changing functionality could be used to turn dynamic model input into a static one and vice versa.
|
||||
To learn more about dynamic shapes in OpenVINO please see a [dedicated article](../OV_Runtime_UG/ov_dynamic_shapes.md).
|
||||
|
||||
### Usage of Reshape Method <a name="usage_of_reshape_method"></a>

The primary method of the feature is `InferenceEngine::CNNNetwork::reshape`. It gets new input shapes and propagates them from input to output for all intermediate layers of the given network.
The method takes `InferenceEngine::ICNNNetwork::InputShapes` - a map of pairs: the name of the input data and its dimensions.

The algorithm for resizing the network is the following:

1) **Collect the map of input names and shapes from Intermediate Representation (IR)** using the helper method `InferenceEngine::CNNNetwork::getInputShapes`

2) **Set new input shapes**

3) **Call reshape**

The primary method of the feature is `ov::Model::reshape`. It is overloaded to better serve two main use cases:

1) To change the input shape of a model with a single input, you may pass the new shape into the method. Please see the example of adjusting spatial dimensions to the input image:

@snippet snippets/ShapeInference.cpp spatial_reshape

To do the opposite, i.e., resize the input image to the input shape of the model, use the [pre-processing API](../OV_Runtime_UG/preprocessing_overview.md).

2) Otherwise, you can express the reshape plan via a mapping of an input to its new shape:
* `map<ov::Output<ov::Node>, ov::PartialShape>` specifies input by passing the actual input port
* `map<size_t, ov::PartialShape>` specifies input by its index
* `map<string, ov::PartialShape>` specifies input by its name

Here is a code example:

@snippet snippets/ShapeInference.cpp part0

The Shape Inference feature is used in [Smart Classroom Demo](@ref omz_demos_smart_classroom_demo_cpp).

@sphinxdirective

.. tab:: Port

   .. doxygensnippet:: docs/snippets/ShapeInference.cpp
      :language: cpp
      :fragment: [obj_to_shape]

.. tab:: Index

   .. doxygensnippet:: docs/snippets/ShapeInference.cpp
      :language: cpp
      :fragment: [idx_to_shape]

.. tab:: Tensor Name

   .. doxygensnippet:: docs/snippets/ShapeInference.cpp
      :language: cpp
      :fragment: [name_to_shape]

@endsphinxdirective

Please find usage scenarios of the `reshape` feature in our [samples](Samples_Overview.md), starting with the [Hello Reshape Sample](../../samples/cpp/hello_reshape_ssd/README.html)

Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `ov::Model::reshape` method.
|
||||
|
||||
### Troubleshooting Reshape Errors
|
||||
|
||||
@ -109,10 +103,30 @@ For example, [publicly available Inception family models from TensorFlow*](https
|
||||
- Changing the model input shape may significantly affect its accuracy.
|
||||
For example, Object Detection models from TensorFlow have resizing restrictions by design.
|
||||
To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the `pipeline.config` file.
|
||||
For details, refer to the <a href="_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#custom-input-shape">Tensorflow Object Detection API models resizing techniques</a>.
|
||||
|
||||
### How To Fix Non-Reshape-able Model <a name="how-to-fix-non-reshape-able-model"></a>

Some operators which prevent normal shape propagation can be fixed. To do so you can:
* see if the issue can be fixed by changing the values of some operator's input.
E.g., the most common problem of non-reshape-able models is a `Reshape` operator with a hard-coded output shape.
You can cut off the hard-coded 2nd input of `Reshape` and fill it in with relaxed values.
For the example shown in the picture below, the Model Optimizer command line should be:
```sh
mo --input_model path/to/model --input data[8,3,224,224],1:reshaped[2]->[0 -1]
```
With `1:reshaped[2]` we request to cut the 2nd input (counting from zero, so `1:` means the 2nd input) of the operation named `reshaped` and replace it with a `Parameter` with shape `[2]`.
With `->[0 -1]` we replace this new `Parameter` with a `Constant` operator which has the value `[0, -1]`.
Since the `Reshape` operator treats `0` and `-1` as special values (see their meaning in [the specification](../ops/shape/Reshape_1.md)), shapes can be propagated freely without losing the intended meaning of `Reshape`.



* transform the model during Model Optimizer conversion on the back phase. See the [Model Optimizer extension article](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
* transform the OpenVINO Model during runtime. See the [OpenVINO Runtime Transformations article](../Extensibility_UG/ov_transformations.md)
* modify the original model with the help of the original framework

### Extensibility

OpenVINO provides a special mechanism that allows adding support of shape inference for custom operations. This mechanism is described in the [Extensibility documentation](../Extensibility_UG/Intro.md)

## Introduction (Python)
|
||||
|
||||
@ -122,104 +136,146 @@ The Inference Engine provides a special mechanism that allows adding support of
|
||||
<div id="switcher-python" class="switcher-anchor">Python</div>
|
||||
@endsphinxdirective
|
||||
|
||||
OpenVINO™ provides capabilities to change model input shape during runtime.
This may be useful when you would like to feed the model an input that has a different size than the model input shape.
If you need to do this only once, <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when-to-specify-input-shapes">prepare a model with updated shapes via Model Optimizer</a>; for all other cases, follow the instructions below.

### Set a new input shape with the reshape method

|
||||
The [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape) method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.

Example: Changing the batch size and spatial dimensions of input of a model with an image input:
|
||||
|
||||
**NOTES**:
|
||||
* Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases.
|
||||
* Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.
|
||||
* If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin.
|
||||

|
||||
|
||||
Please see the code to achieve that:

The meaning of a model batch may vary depending on the model design. This method does not deduce batch placement for inputs from the model architecture. It assumes that the batch is placed at the zero index in the shape for all inputs and uses the [IENetwork.reshape](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.reshape) method to propagate updated shapes through the model.
The method transforms the model before a new shape propagation to relax a hard-coded batch dimension in the model, if any.

@sphinxdirective

.. doxygensnippet:: docs/snippets/ShapeInference.py
   :language: python
   :fragment: [picture_snippet]

@endsphinxdirective
|
||||
|
||||
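
For reference, here is a minimal, self-contained sketch of that flow with the `openvino.runtime` API; the model path, image path, and device name below are illustrative assumptions, not values taken from the snippet above:

```python
import cv2
import numpy as np
from openvino.runtime import Core, PartialShape

core = Core()
# "model.xml" is a placeholder path; use your own IR or ONNX model
model = core.read_model("model.xml")

# Read an image and lay it out as NCHW to match a typical image-input model
image = cv2.imread("image.jpg")
input_data = np.expand_dims(image.transpose((2, 0, 1)), axis=0)

# Propagate the new input shape from the input down to the model outputs
model.reshape(PartialShape(list(input_data.shape)))

compiled_model = core.compile_model(model, "CPU")
```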
Use [IENetwork.reshape](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.reshape) rather than [IENetwork.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) to set new input shapes for the model if the model has:
|
||||
### Set a new batch size with the set_batch method
|
||||
|
||||
* Multiple inputs with different zero-index dimension meanings
|
||||
* Input without a batch dimension
|
||||
* 0D, 1D, or 3D shape
|
||||
The meaning of the model batch may vary depending on the model design.
In order to change the batch dimension of the model, <a href="_docs_OV_Runtime_UG_preprocessing_overview.html#declare-model-s-layout">set the Layout</a> for inputs and call the [set_batch](api/ie_python_api/_autosummary/openvino.runtime.set_batch.html) method.
|
||||
|
||||
The [IENetwork.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) method is a high-level API method that wraps the [IENetwork.reshape](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.reshape) method call and works for trivial models from the batch placement standpoint. Use [IENetwork.reshape](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.reshape) for other models.

Using the [IENetwork.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) method for models with a non-zero index batch placement or for models with inputs that do not have a batch dimension may lead to undefined behaviour.
You can change input shapes multiple times using the `IENetwork.reshape` and `IENetwork.batch_size` methods in any order. If a model has a hard-coded batch dimension, use `IENetwork.batch_size` first to change the batch, then call `IENetwork.reshape` to update other dimensions, if needed.

@sphinxdirective

.. doxygensnippet:: docs/snippets/ShapeInference.py
   :language: python
   :fragment: [set_batch]

@endsphinxdirective
|
||||
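
As an illustration only, a minimal sketch of this approach follows; it assumes the model has a single input whose batch is the first ("N") dimension, and the model path is a placeholder:

```python
from openvino.runtime import Core, Layout, set_batch

core = Core()
model = core.read_model("model.xml")  # placeholder path

# Tell OpenVINO which dimension of the input carries the batch
model.get_parameters()[0].set_layout(Layout("N..."))

# Change only the batch dimension; the other dimensions stay untouched
set_batch(model, 4)
```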
|
||||
Inference Engine takes three kinds of a model description as an input, which are converted into an IENetwork object:
|
||||
The [set_batch](api/ie_python_api/_autosummary/openvino.runtime.set_batch.html) method is a high-level API on top of the [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape) functionality, so all information about the [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape) method implications also applies to [set_batch](api/ie_python_api/_autosummary/openvino.runtime.set_batch.html), including the troubleshooting section.
|
||||
|
||||
1. Intermediate Representation (IR) through `IECore.read_network`
|
||||
2. ONNX model through `IECore.read_network`
|
||||
3. nGraph function through the constructor of IENetwork
|
||||
Once the input shape of [Model](api/ie_python_api/_autosummary/openvino.runtime.Model.html) is set, call the [compile_model](api/ie_python_api/_autosummary/openvino.runtime.compile_model.html) method to get a [CompiledModel](api/ie_python_api/_autosummary/openvino.runtime.CompiledModel.html) object for inference with updated shapes.
|
||||
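
A short sketch of that end-to-end flow, assuming a placeholder model path and the CPU device:

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder path

# Set a static shape for the single input and compile for the target device
model.reshape(PartialShape([1, 3, 480, 640]))
compiled_model = core.compile_model(model, "CPU")

# compiled_model can now create infer requests that expect the updated shape
infer_request = compiled_model.create_infer_request()
```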
|
||||
IENetwork keeps an `ngraph::Function` object with the model description internally. The object should have fully defined input shapes to be successfully loaded to the Inference Engine plugins. To resolve undefined input dimensions of a model, call the `IENetwork.reshape` method providing new input shapes before loading to the Inference Engine plugin.
|
||||
There are other approaches to change model input shapes during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when-to-specify-input-shapes">IR generation</a> or [Model creation](../OV_Runtime_UG/model_representation.md).
|
||||
|
||||
Run the following code right after IENetwork creation to explicitly check for model input names and shapes:
|
||||
### Dynamic Shape Notice
|
||||
|
||||
To feed input data of a shape that is different from the model input shape, reshape the model first.
|
||||
Shape-changing functionality could be used to turn dynamic model input into a static one and vice versa.
|
||||
To learn more about dynamic shapes in OpenVINO please see a [dedicated article](../OV_Runtime_UG/ov_dynamic_shapes.md).
|
||||
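
For illustration, the same `reshape` call can be used to switch between static and dynamic dimensions; the shapes below are arbitrary examples:

```python
from openvino.runtime import Core, Dimension, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder path

# Make the spatial dimensions dynamic (any height/width), keep batch and channels static
model.reshape(PartialShape([1, 3, Dimension.dynamic(), Dimension.dynamic()]))

# ...and back to a fully static shape when the input size is known in advance
model.reshape(PartialShape([1, 3, 224, 224]))
```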
|
||||
Once the input shape of IENetwork is set, call the `IECore.load_network` method to get an ExecutableNetwork object for inference with updated shapes.
|
||||
### Usage of Reshape Method <a name="usage_of_reshape_method"></a>
|
||||
|
||||
There are other approaches to reshape the model during the stage of IR generation or [OpenVINO model](https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_PythonAPI.html#create_an_ngraph_function_from_a_graph) creation.
|
||||
The primary method of the feature is [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape). It is overloaded to better serve two main use cases:
|
||||
|
||||
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `IENetwork.reshape` method.
|
||||
1) To change the input shape of a model with a single input, you may pass the new shape into the method. Please see the example of adjusting spatial dimensions to the input image:
|
||||
|
||||
### Troubleshooting Reshape Errors
|
||||
Operation semantics may impose restrictions on input shapes of the operation. Shape collision during shape propagation may be a sign that a new shape does not satisfy the restrictions. Changing the model input shape may result in intermediate operations shape collision.
|
||||
@sphinxdirective

.. doxygensnippet:: docs/snippets/ShapeInference.py
   :language: python
   :fragment: [simple_spatials_change]

@endsphinxdirective
|
||||
|
||||
To do the opposite, i.e., resize the input image to the input shape of the model, use the [pre-processing API](../OV_Runtime_UG/preprocessing_overview.md).
|
||||
|
||||
2) Otherwise, you can express the reshape plan via a dictionary mapping an input to its new shape.
Dictionary keys can be:
* `str` specifies input by its name
* `int` specifies input by its index
* `openvino.runtime.Output` specifies input by passing the actual input object

Dictionary values (representing new shapes) can be:
* `list`
* `tuple`
* `PartialShape`

@sphinxdirective

.. tab:: Port

   .. doxygensnippet:: docs/snippets/ShapeInference.py
      :language: python
      :fragment: [obj_to_shape]

.. tab:: Index

   .. doxygensnippet:: docs/snippets/ShapeInference.py
      :language: python
      :fragment: [idx_to_shape]

.. tab:: Tensor Name

   .. doxygensnippet:: docs/snippets/ShapeInference.py
      :language: python
      :fragment: [name_to_shape]

@endsphinxdirective
|
||||
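
Below is an illustrative sketch of both dictionary styles; the input names `data` and `seq_len` and the shapes are placeholders:

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder path and input names

# Specify inputs by tensor name ...
model.reshape({"data": PartialShape([2, 3, 224, 224]), "seq_len": PartialShape([2])})

# ... or by input index; plain lists and tuples are accepted as shapes as well
model.reshape({0: [2, 3, 224, 224], 1: [2]})
```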
|
||||
Please find usage scenarios of the `reshape` feature in our [samples](Samples_Overview.md), starting with the [Hello Reshape Sample](../../samples/python/hello_reshape_ssd/README.html)
|
||||
|
||||
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `Model.reshape` method.
|
||||
|
||||
### Troubleshooting Reshape Errors <a name="troubleshooting"></a>
|
||||
|
||||
Operation semantics may impose restrictions on input shapes of the operation.
|
||||
Shape collision during shape propagation may be a sign that a new shape does not satisfy the restrictions.
|
||||
Changing the model input shape may result in intermediate operations shape collision.
|
||||
|
||||
Examples of such operations:
* <a href="_docs_ops_shape_Reshape_1.html">Reshape</a> operation with a hard-coded output shape value
* <a href="_docs_ops_matrix_MatMul_1.html">MatMul</a> operation with the `Const` second input, which cannot be resized by spatial dimensions due to operation semantics

Model structure and logic should not change significantly after model reshaping.
|
||||
- The Global Pooling operation is commonly used to reduce output feature map of classification models output.
|
||||
Having the input of the shape [N, C, H, W], Global Pooling returns the output of the shape [N, C, 1, 1].
|
||||
Model architects usually express Global Pooling with the help of the `Pooling` operation with the fixed kernel size [H, W].
|
||||
During spatial reshape, having the input of the shape [N, C, H1, W1], Pooling with the fixed kernel size [H, W] returns the output of the shape [N, C, H2, W2], where H2 and W2 are commonly not equal to `1`.
|
||||
It breaks the classification model structure.
|
||||
For example, [publicly available Inception family models from TensorFlow*](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models) have this issue.
|
||||
|
||||
A model's structure and logic should not significantly change after model reshaping.
|
||||
- Changing the model input shape may significantly affect its accuracy.
|
||||
For example, Object Detection models from TensorFlow have resizing restrictions by design.
|
||||
To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the `pipeline.config` file.
|
||||
For details, refer to the <a href="_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#custom-input-shape">Tensorflow Object Detection API models resizing techniques</a>.
|
||||
|
||||
* The Global Pooling operation is commonly used to reduce output feature map of classification models output. Having the input of the shape [N, C, H, W], Global Pooling returns the output of the shape [N, C, 1, 1]. Model architects usually express Global Pooling with the help of the Pooling operation with the fixed kernel size [H, W]. During spatial reshape, having the input of the shape [N, C, H1, W1], Pooling with the fixed kernel size [H, W] returns the output of the shape [N, C, H2, W2], where H2 and W2 are commonly not equal to 1. It breaks the classification model structure. For example, publicly available Inception family models from TensorFlow* have this issue.
|
||||
### How To Fix Non-Reshape-able Model <a name="how-to-fix-non-reshape-able-model"></a>
|
||||
|
||||
* Changing the model input shape may significantly affect its accuracy. For example, Object Detection models from TensorFlow have resizing restrictions by design. To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the pipeline.config file. For details, refer to the Tensorflow Object Detection API models resizing techniques.
|
||||
|
||||
|
||||
### Usage of the Reshape Method
|
||||
|
||||
The primary method of the feature is `IENetwork.reshape`. It gets new input shapes and propagates it from input to output for all intermediates layers of the given network. Use `IENetwork.input_info` to get names of input_layers and `.tensor_desc.dims` to get the current network input shape.
|
||||
|
||||
The following code example shows how to reshape a model to the size of an input image.
|
||||
|
||||
```python
|
||||
import cv2
|
||||
import numpy as np
|
||||
from openvino.inference_engine import IECore
|
||||
|
||||
ie = IECore()
|
||||
|
||||
# Read an input image and transpose the input to NCHW layout
|
||||
image = cv2.imread(path_to_image_file)
|
||||
input_image = image.transpose((2, 0, 1))
|
||||
input_image = np.expand_dims(input_image, axis=0)
|
||||
|
||||
# Load the model and get input info
|
||||
# Note that this model must support arbitrary input shapes
|
||||
net = ie.read_network(model=path_to_xml_file)
|
||||
input_layer = next(iter(net.input_info))
|
||||
print(f"Input shape: {net.input_info[input_blob].tensor_desc.dims}")
|
||||
|
||||
# Call reshape
|
||||
net.reshape({input_layer: input_image.shape})
|
||||
print(f"New input shape: {net.input_info[input_blob].tensor_desc.dims}")
|
||||
|
||||
# Load the model to the device and proceed with inference
|
||||
exec_net = ie.load_network(network=net, device_name="CPU")
```

Some operators which prevent normal shape propagation can be fixed. To do so you can:
* see if the issue can be fixed by changing the values of some operator's input.
E.g., the most common problem of non-reshape-able models is a `Reshape` operator with a hard-coded output shape.
You can cut off the hard-coded 2nd input of `Reshape` and fill it in with relaxed values.
For the example shown in the picture below, the Model Optimizer command line should be:
```sh
mo --input_model path/to/model --input data[8,3,224,224],1:reshaped[2]->[0 -1]
```
With `1:reshaped[2]` we request to cut the 2nd input (counting from zero, so `1:` means the 2nd input) of the operation named `reshaped` and replace it with a `Parameter` with shape `[2]`.
With `->[0 -1]` we replace this new `Parameter` with a `Constant` operator which has the value `[0, -1]`.
Since the `Reshape` operator treats `0` and `-1` as special values (see their meaning in [the specification](../ops/shape/Reshape_1.md)), shapes can be propagated freely without losing the intended meaning of `Reshape`.


|

* transform the model during Model Optimizer conversion on the back phase. See the [Model Optimizer extension article](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
* transform the OpenVINO Model during runtime. See the [OpenVINO Runtime Transformations article](../Extensibility_UG/ov_transformations.md)
* modify the original model with the help of the original framework

### Extensibility

OpenVINO provides a special mechanism that allows adding support of shape inference for custom operations. This mechanism is described in the [Extensibility documentation](../Extensibility_UG/Intro.md)

### See Also:

[Hello Reshape Python Sample](../../samples/python/hello_reshape_ssd/README.html)
|
3
docs/OV_Runtime_UG/img/batch_relaxation.png
Executable file
3
docs/OV_Runtime_UG/img/batch_relaxation.png
Executable file
@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:920917780f3f4cfdd6f384d60744c3805809b43fe8c4922d07eb4c2a4bf11679
|
||||
size 57392
|
3
docs/OV_Runtime_UG/img/original_vs_reshaped_model.png
Executable file
3
docs/OV_Runtime_UG/img/original_vs_reshaped_model.png
Executable file
@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:f22fe23c17bcd2930871af56832ab3df5bbb7df93f1fc7e4a06de3ce2020791d
|
||||
size 117644
|
@ -88,7 +88,7 @@ Here we've specified all information about user's input:
|
||||
- [Layout](./layout_overview.md) is "NHWC". It means that 'height=480, width=640, channels=3'
|
||||
- Color format is `BGR`
|
||||
|
||||
### Declare model's layout <a name="declare_model_s_layout"></a>

|
||||
Model's input already has information about precision and shape. Preprocessing API is not intended to modify this. The only thing that may be specified is input's data [layout](./layout_overview.md)
|
||||
|
||||
|
@ -29,7 +29,6 @@ The table below shows the plugin libraries and additional dependencies for Linux
|
||||
| MYRIAD | `libopenvino_intel_myriad_plugin.so` | `libusb.so` | `openvino_intel_myriad_plugin.dll`| `usb.dll` | `libopenvino_intel_myriad_plugin.so` | `libusb.dylib` |
|
||||
| HDDL | `libintel_hddl_plugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `intel_hddl_plugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - |
|
||||
| GNA | `libopenvino_intel_gna_plugin.so` | `libgna.so`, | `openvino_intel_gna_plugin.dll` | `gna.dll` | Is not supported | - |
|
||||
| Arm® CPU | `libopenvino_arm_cpu_plugin.so` | | Is not supported | - | `libopenvino_arm_cpu_plugin.so` | - |
|
||||
| HETERO | `libopenvino_hetero_plugin.so` | Same as for selected plugins | `openvino_hetero_plugin.dll` | Same as for selected plugins | `libopenvino_hetero_plugin.so` | Same as for selected plugins |
|
||||
| MULTI | `libopenvino_auto_plugin.so` | Same as for selected plugins | `openvino_auto_plugin.dll` | Same as for selected plugins | `libopenvino_auto_plugin.so` | Same as for selected plugins |
|
||||
| AUTO | `libopenvino_auto_plugin.so` | Same as for selected plugins | `openvino_auto_plugin.dll` | Same as for selected plugins | `libopenvino_auto_plugin.so` | Same as for selected plugins |
|
||||
@ -127,154 +126,154 @@ For setting relevant configuration, refer to the
|
||||
(step 3 "Configure input and output").
|
||||
|
||||
### Supported Layers

The following layers are supported by the plugins:

| Layers | GPU | CPU | VPU | GNA | Arm® CPU | ShapeInfer |
|
||||
|:-------------------------------|:-------------:|:-------------:|:-------------:|:-------------:|:---------------:|:-------------:|
|
||||
| Abs | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Acos | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*|Supported |
|
||||
| Acosh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*|Supported |
|
||||
| Activation-Clamp | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Activation-ELU | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Activation-Exp | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Activation-Leaky ReLU | Supported |Supported\*\*\*| Supported | Supported | Not Supported | Supported |
|
||||
| Activation-Not | Supported |Supported\*\*\*| Supported | Not Supported | Not Supported | Supported |
|
||||
| Activation-PReLU | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Activation-ReLU | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Activation-ReLU6 | Supported |Supported\*\*\*| Supported | Not Supported | Not Supported | Supported |
|
||||
| Activation-Sigmoid/Logistic | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Activation-TanH | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| ArgMax | Supported | Supported\*\* | Supported | Not Supported | Not Supported | Supported |
|
||||
| Asin | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Asinh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Atan | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Atanh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| BatchNormalization | Supported | Supported | Supported | Not Supported | Supported | Supported |
|
||||
| BinaryConvolution | Supported | Supported | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| Broadcast | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Ceil | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Concat | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Const | Supported | Supported | Supported | Supported | Supported | Not Supported |
|
||||
| Convolution-Dilated | Supported | Supported | Supported | Not Supported | Supported | Supported |
|
||||
| Convolution-Dilated 3D | Supported | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| Convolution-Grouped | Supported | Supported | Supported | Not Supported | Supported | Supported |
|
||||
| Convolution-Grouped 3D | Supported | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| Convolution-Ordinary | Supported | Supported | Supported | Supported\* | Supported | Supported |
|
||||
| Convolution-Ordinary 3D | Supported | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| Cos | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Cosh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Crop | Supported | Supported | Supported | Supported | Not Supported | Supported |
|
||||
| CTCGreedyDecoder | Supported\*\* | Supported\*\* | Supported\* | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Deconvolution | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
|
||||
| Deconvolution 3D | Supported | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| DeformableConvolution | Supported | Supported | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| DepthToSpace | Supported | Supported\*\* | Not Supported | Not Supported | Supported\* | Supported |
|
||||
| DetectionOutput | Supported | Supported\*\* | Supported\* | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Eltwise-And | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Add | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Div | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Equal | Supported |Supported\*\*\*| Supported | Not Supported | Supported\* | Supported |
|
||||
| Eltwise-FloorMod | Supported |Supported\*\*\*| Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Eltwise-Greater | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-GreaterEqual | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Less | Supported |Supported\*\*\*| Supported | Not Supported | Supported\* | Supported |
|
||||
| Eltwise-LessEqual | Supported |Supported\*\*\*| Supported | Not Supported | Supported\* | Supported |
|
||||
| Eltwise-LogicalAnd | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-LogicalOr | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-LogicalXor | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Max | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Min | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Mul | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Eltwise-NotEqual | Supported |Supported\*\*\*| Supported | Not Supported | Supported\* | Supported |
|
||||
| Eltwise-Pow | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Prod | Supported |Supported\*\*\*| Supported | Supported | Not Supported | Supported |
|
||||
| Eltwise-SquaredDiff | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Eltwise-Sub | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Eltwise-Sum | Supported |Supported\*\*\*| Supported | Supported |Supported\*\*\*\*| Supported |
|
||||
| Erf | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Exp | Supported | Supported | Supported | Supported | Supported | Supported |
|
||||
| FakeQuantize | Not Supported | Supported | Not Supported | Not Supported | Supported\* | Supported |
|
||||
| Fill | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| Flatten | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
|
||||
| Floor | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| FullyConnected (Inner Product) | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Gather | Supported | Supported\*\* | Supported | Not Supported | Supported\* | Supported |
|
||||
| GatherTree | Not Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Gemm | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
|
||||
| GRN | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| HardSigmoid | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Interp | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported\* | Supported\* |
|
||||
| Log | Supported | Supported\*\* | Supported | Supported | Supported | Supported |
|
||||
| LRN (Norm) | Supported | Supported | Supported | Not Supported | Supported\* | Supported |
|
||||
| LSTMCell | Supported | Supported | Supported | Supported | Supported | Not Supported |
|
||||
| GRUCell | Supported | Supported | Not Supported | Not Supported | Supported | Not Supported |
|
||||
| RNNCell | Supported | Supported | Not Supported | Not Supported | Supported | Not Supported |
|
||||
| LSTMSequence | Supported | Supported | Supported | Not Supported |Supported\*\*\*\*| Not Supported |
|
||||
| GRUSequence | Supported | Supported | Not Supported | Not Supported |Supported\*\*\*\*| Not Supported |
|
||||
| RNNSequence | Supported | Supported | Not Supported | Not Supported |Supported\*\*\*\*| Not Supported |
|
||||
| LogSoftmax | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Not Supported |
|
||||
| Memory | Not Supported | Supported | Not Supported | Supported | Not Supported | Supported |
|
||||
| MVN | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* | Supported |
|
||||
| Neg | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| NonMaxSuppression | Not Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Normalize | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* | Supported |
|
||||
| OneHot | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Pad | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* | Supported |
|
||||
| Permute | Supported | Supported | Supported | Supported\* | Not Supported | Supported |
|
||||
| Pooling(AVG,MAX) | Supported | Supported | Supported | Supported | Supported | Supported |
|
||||
| Pooling(AVG,MAX) 3D | Supported | Supported | Not Supported | Not Supported | Supported\* | Not Supported |
|
||||
| Power | Supported | Supported\*\* | Supported | Supported\* | Supported | Supported |
|
||||
| PowerFile | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| PriorBox | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| PriorBoxClustered | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Proposal | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| PSROIPooling | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Range | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| Reciprocal | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| ReduceAnd | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| ReduceL1 | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| ReduceL2 | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| ReduceLogSum | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| ReduceLogSumExp | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| ReduceMax | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| ReduceMean | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| ReduceMin | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| ReduceOr | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| ReduceProd | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| ReduceSum | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| ReduceSumSquare | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| RegionYolo | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| ReorgYolo | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Resample | Supported | Supported\*\* | Supported | Not Supported | Not Supported | Supported |
|
||||
| Reshape | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported\* |
|
||||
| ReverseSequence | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| RNN | Not Supported | Supported | Supported | Not Supported | Supported | Not Supported |
|
||||
| ROIPooling | Supported\* | Supported | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| ScaleShift | Supported |Supported\*\*\*| Supported\* | Supported | Not Supported | Supported |
|
||||
| ScatterUpdate | Not Supported | Supported\*\* | Supported | Not Supported | Not Supported | Supported |
|
||||
| Select | Supported | Supported | Supported | Not Supported | Supported | Supported |
|
||||
| Selu | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| ShuffleChannels | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| Sign | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Sin | Supported | Supported\*\* | Not Supported | Not Supported | Supported | Supported |
|
||||
| Sinh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| SimplerNMS | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| Slice | Supported |Supported\*\*\*| Supported | Supported | Not Supported | Supported |
|
||||
| SoftMax | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| Softplus | Supported | Supported\*\* | Supported | Not Supported | Supported | Supported |
|
||||
| Softsign | Supported | Supported\*\* | Not Supported | Supported | Not Supported | Supported |
|
||||
| SpaceToDepth | Not Supported | Supported\*\* | Not Supported | Not Supported | Supported\* | Supported |
|
||||
| SpatialTransformer | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported | Supported |
|
||||
| Split | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
|
||||
| Squeeze | Supported | Supported\*\* | Supported | Supported | Supported | Supported |
|
||||
| StridedSlice | Supported | Supported\*\* | Supported | Not Supported | Supported\* | Supported |
|
||||
| Tan | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| TensorIterator | Not Supported | Supported | Supported | Supported | Supported | Not Supported |
|
||||
| Tile | Supported\*\* |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
|
||||
| TopK | Supported | Supported\*\* | Supported | Not Supported |Supported\*\*\*\*| Supported |
|
||||
| Unpooling | Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported |
|
||||
| Unsqueeze | Supported | Supported\*\* | Supported | Supported | Supported | Supported |
| Upsampling | Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported |

| Layers | GPU | CPU | VPU | GNA | Arm® CPU |
|:-------------------------------|:-------------:|:-------------:|:-------------:|:-------------:|:---------------:|
| Abs | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Acos | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Acosh | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Activation-Clamp | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Activation-ELU | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Activation-Exp | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Activation-Leaky ReLU | Supported | Supported\*\*\* | Supported | Supported | Not Supported |
| Activation-Not | Supported | Supported\*\*\* | Supported | Not Supported | Not Supported |
| Activation-PReLU | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Activation-ReLU | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Activation-ReLU6 | Supported | Supported\*\*\* | Supported | Not Supported | Not Supported |
| Activation-Sigmoid/Logistic | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Activation-TanH | Supported | Supported\*\*\* | Supported | Supported | Supported |
| ArgMax | Supported | Supported\*\* | Supported | Not Supported | Not Supported |
| Asin | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Asinh | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Atan | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Atanh | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| BatchNormalization | Supported | Supported | Supported | Not Supported | Supported |
| BinaryConvolution | Supported | Supported | Not Supported | Not Supported | Not Supported |
| Broadcast | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Ceil | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Concat | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Const | Supported | Supported | Supported | Supported | Supported |
| Convolution-Dilated | Supported | Supported | Supported | Not Supported | Supported |
| Convolution-Dilated 3D | Supported | Supported | Not Supported | Not Supported | Not Supported |
| Convolution-Grouped | Supported | Supported | Supported | Not Supported | Supported |
| Convolution-Grouped 3D | Supported | Supported | Not Supported | Not Supported | Not Supported |
| Convolution-Ordinary | Supported | Supported | Supported | Supported\* | Supported |
| Convolution-Ordinary 3D | Supported | Supported | Not Supported | Not Supported | Not Supported |
| Cos | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Cosh | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Crop | Supported | Supported | Supported | Supported | Not Supported |
| CTCGreedyDecoder | Supported\*\* | Supported\*\* | Supported\* | Not Supported | Supported\*\*\*\* |
| Deconvolution | Supported | Supported | Supported | Not Supported | Not Supported |
| Deconvolution 3D | Supported | Supported | Not Supported | Not Supported | Not Supported |
| DeformableConvolution | Supported | Supported | Not Supported | Not Supported | Not Supported |
| DepthToSpace | Supported | Supported\*\* | Not Supported | Not Supported | Supported\* |
| DetectionOutput | Supported | Supported\*\* | Supported\* | Not Supported | Supported\*\*\*\* |
| Eltwise-And | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Add | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Div | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Equal | Supported | Supported\*\*\* | Supported | Not Supported | Supported\* |
| Eltwise-FloorMod | Supported | Supported\*\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Eltwise-Greater | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-GreaterEqual | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Less | Supported | Supported\*\*\* | Supported | Not Supported | Supported\* |
| Eltwise-LessEqual | Supported | Supported\*\*\* | Supported | Not Supported | Supported\* |
| Eltwise-LogicalAnd | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-LogicalOr | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-LogicalXor | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Max | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Min | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Mul | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Eltwise-NotEqual | Supported | Supported\*\*\* | Supported | Not Supported | Supported\* |
| Eltwise-Pow | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Prod | Supported | Supported\*\*\* | Supported | Supported | Not Supported |
| Eltwise-SquaredDiff | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Eltwise-Sub | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Eltwise-Sum | Supported | Supported\*\*\* | Supported | Supported | Supported\*\*\*\* |
| Erf | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Exp | Supported | Supported | Supported | Supported | Supported |
| FakeQuantize | Not Supported | Supported | Not Supported | Not Supported | Supported\* |
| Fill | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| Flatten | Supported | Supported | Supported | Not Supported | Not Supported |
| Floor | Supported | Supported\*\* | Supported | Not Supported | Supported |
| FullyConnected (Inner Product) | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Gather | Supported | Supported\*\* | Supported | Not Supported | Supported\* |
| GatherTree | Not Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Gemm | Supported | Supported | Supported | Not Supported | Not Supported |
| GRN | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported |
| HardSigmoid | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| Interp | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported\* |
| Log | Supported | Supported\*\* | Supported | Supported | Supported |
| LRN (Norm) | Supported | Supported | Supported | Not Supported | Supported\* |
| LSTMCell | Supported | Supported | Supported | Supported | Supported |
| GRUCell | Supported | Supported | Not Supported | Not Supported | Supported |
| RNNCell | Supported | Supported | Not Supported | Not Supported | Supported |
| LSTMSequence | Supported | Supported | Supported | Not Supported | Supported\*\*\*\* |
| GRUSequence | Supported | Supported | Not Supported | Not Supported | Supported\*\*\*\* |
| RNNSequence | Supported | Supported | Not Supported | Not Supported | Supported\*\*\*\* |
| LogSoftmax | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| Memory | Not Supported | Supported | Not Supported | Supported | Not Supported |
| MVN | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* |
| Neg | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| NonMaxSuppression | Not Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Normalize | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* |
| OneHot | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Pad | Supported | Supported\*\* | Supported\* | Not Supported | Supported\* |
| Permute | Supported | Supported | Supported | Supported\* | Not Supported |
| Pooling(AVG,MAX) | Supported | Supported | Supported | Supported | Supported |
| Pooling(AVG,MAX) 3D | Supported | Supported | Not Supported | Not Supported | Supported\* |
| Power | Supported | Supported\*\* | Supported | Supported\* | Supported |
| PowerFile | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| PriorBox | Supported | Supported\*\* | Supported | Not Supported | Supported |
| PriorBoxClustered | Supported\*\* | Supported\*\* | Supported | Not Supported | Supported |
| Proposal | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| PSROIPooling | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Range | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| Reciprocal | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| ReduceAnd | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| ReduceL1 | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| ReduceL2 | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| ReduceLogSum | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| ReduceLogSumExp | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| ReduceMax | Supported | Supported\*\* | Supported | Not Supported | Supported |
| ReduceMean | Supported | Supported\*\* | Supported | Not Supported | Supported |
| ReduceMin | Supported | Supported\*\* | Supported | Not Supported | Supported |
| ReduceOr | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| ReduceProd | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| ReduceSum | Supported | Supported\*\* | Supported | Not Supported | Supported |
| ReduceSumSquare | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| RegionYolo | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| ReorgYolo | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Resample | Supported | Supported\*\* | Supported | Not Supported | Not Supported |
| Reshape | Supported | Supported\*\*\* | Supported | Supported | Supported |
| ReverseSequence | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| RNN | Not Supported | Supported | Supported | Not Supported | Supported |
| ROIPooling | Supported\* | Supported | Supported | Not Supported | Supported\*\*\*\* |
| ScaleShift | Supported | Supported\*\*\* | Supported\* | Supported | Not Supported |
| ScatterUpdate | Not Supported | Supported\*\* | Supported | Not Supported | Not Supported |
| Select | Supported | Supported | Supported | Not Supported | Supported |
| Selu | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| ShuffleChannels | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| Sign | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Sin | Supported | Supported\*\* | Not Supported | Not Supported | Supported |
| Sinh | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| SimplerNMS | Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| Slice | Supported | Supported\*\*\* | Supported | Supported | Not Supported |
| SoftMax | Supported | Supported\*\*\* | Supported | Not Supported | Supported |
| Softplus | Supported | Supported\*\* | Supported | Not Supported | Supported |
| Softsign | Supported | Supported\*\* | Not Supported | Supported | Not Supported |
| SpaceToDepth | Not Supported | Supported\*\* | Not Supported | Not Supported | Supported\* |
| SpatialTransformer | Not Supported | Supported\*\* | Not Supported | Not Supported | Not Supported |
| Split | Supported | Supported\*\*\* | Supported | Supported | Supported |
| Squeeze | Supported | Supported\*\* | Supported | Supported | Supported |
| StridedSlice | Supported | Supported\*\* | Supported | Not Supported | Supported\* |
| Tan | Supported | Supported\*\* | Not Supported | Not Supported | Supported\*\*\*\* |
| TensorIterator | Not Supported | Supported | Supported | Supported | Supported |
| Tile | Supported\*\* | Supported\*\*\* | Supported | Not Supported | Supported |
| TopK | Supported | Supported\*\* | Supported | Not Supported | Supported\*\*\*\* |
| Unpooling | Supported | Not Supported | Not Supported | Not Supported | Not Supported |
| Unsqueeze | Supported | Supported\*\* | Supported | Supported | Supported |
| Upsampling | Supported | Not Supported | Not Supported | Not Supported | Not Supported |

\* - Support is limited to specific parameters. Refer to the "Known Layers Limitation" section for the device [from the list of supported devices](Supported_Devices.md).
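Beyond these static tables, support for the operations of a concrete model can also be checked programmatically at runtime with `ov::Core::query_model`. Below is a minimal sketch under assumptions not taken from this page: the model path and the "GPU" device name are placeholders, and the output simply lists which operations the device reports as supported.

```cpp
#include <openvino/runtime/core.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // "path/to/model" and "GPU" are illustrative placeholders.
    auto model = core.read_model("path/to/model");
    ov::SupportedOpsMap supported = core.query_model(model, "GPU");
    // Each entry maps an operation's friendly name to the device that can execute it.
    for (const auto& op : supported) {
        std::cout << op.first << " -> " << op.second << std::endl;
    }
    return 0;
}
```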
@ -1,44 +1,66 @@
#include <openvino/runtime/core.hpp>
#include <openvino/core/layout.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <map>
#include <string>

int main() {
    ov::Core core;
    auto model = core.read_model("path/to/model");

    //! [picture_snippet]
    model->reshape({8, 3, 448, 448});
    //! [picture_snippet]

    size_t new_batch = 8;
    //! [set_batch]
    // Mark up batch in the layout of the input(s) and reset batch to the new value
    model->get_parameters()[0]->set_layout("N...");
    ov::set_batch(model, new_batch);
    //! [set_batch]

    //! [spatial_reshape]
    // Read an image and adjust the model's single input to fit the image
    cv::Mat image = cv::imread("path/to/image");
    model->reshape({1, 3, image.rows, image.cols});
    //! [spatial_reshape]

    //! [obj_to_shape]
    std::map<ov::Output<ov::Node>, ov::PartialShape> port_to_shape;
    for (const ov::Output<ov::Node>& input : model->inputs()) {
        ov::PartialShape shape = input.get_partial_shape();
        // Modify shape to fit your needs
        // ...
        port_to_shape[input] = shape;
    }
    model->reshape(port_to_shape);
    //! [obj_to_shape]

    //! [idx_to_shape]
    size_t i = 0;
    std::map<size_t, ov::PartialShape> idx_to_shape;
    for (const ov::Output<ov::Node>& input : model->inputs()) {
        ov::PartialShape shape = input.get_partial_shape();
        // Modify shape to fit your needs
        // ...
        idx_to_shape[i++] = shape;
    }
    model->reshape(idx_to_shape);
    //! [idx_to_shape]

    //! [name_to_shape]
    std::map<std::string, ov::PartialShape> name_to_shape;
    for (const ov::Output<ov::Node>& input : model->inputs()) {
        ov::PartialShape shape = input.get_partial_shape();
        // An input may have no name; in that case, use a map based on the input index or port instead
        if (!input.get_names().empty()) {
            // Modify shape to fit your needs
            // ...
            name_to_shape[input.get_any_name()] = shape;
        }
    }
    model->reshape(name_to_shape);
    //! [name_to_shape]

    return 0;
}
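After reshaping, a model is typically compiled for a target device before inference. A minimal sketch of that follow-up step, continuing from the `core` and `model` objects in the snippet above; the "CPU" device name and the tensor handling are assumptions for illustration.

```cpp
// Continues from the snippet above; "CPU" is an assumed device name.
ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
ov::InferRequest infer_request = compiled_model.create_infer_request();
// Fill the input tensor with data matching the new static shape, then run inference.
ov::Tensor input_tensor = infer_request.get_input_tensor();
// ... populate input_tensor ...
infer_request.infer();
```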
docs/snippets/ShapeInference.py
Normal file
@ -0,0 +1,55 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from openvino.runtime import Core, Layout, set_batch

ov = Core()
model = ov.read_model("path/to/model")

#! [picture_snippet]
model.reshape([8, 3, 448, 448])
#! [picture_snippet]

#! [set_batch]
model.get_parameters()[0].set_layout(Layout("N..."))
set_batch(model, 5)
#! [set_batch]

#! [simple_spatials_change]
from cv2 import imread
image = imread("path/to/image")
model.reshape([1, 3, image.shape[0], image.shape[1]])
#! [simple_spatials_change]

#! [obj_to_shape]
port_to_shape = dict()
for input_obj in model.inputs:
    shape = input_obj.get_partial_shape()
    # modify shape to fit your needs
    # ...
    port_to_shape[input_obj] = shape
model.reshape(port_to_shape)
#! [obj_to_shape]

#! [idx_to_shape]
idx_to_shape = dict()
i = 0
for input_obj in model.inputs:
    shape = input_obj.get_partial_shape()
    # modify shape to fit your needs
    # ...
    idx_to_shape[i] = shape
    i += 1
model.reshape(idx_to_shape)
#! [idx_to_shape]

#! [name_to_shape]
name_to_shape = dict()
for input_obj in model.inputs:
    shape = input_obj.get_partial_shape()
    # an input may have no name; in that case, use a map based on the input index or port instead
    if len(input_obj.get_names()) != 0:
        # modify shape to fit your needs
        # ...
        name_to_shape[input_obj.get_any_name()] = shape
model.reshape(name_to_shape)
#! [name_to_shape]
@ -154,4 +154,5 @@ int main() {
    reshape_with_dynamics();
    get_tensor();
    set_tensor();
    return 0;
}
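The `set_tensor();` call added above is defined elsewhere in that snippet file. Purely as an illustration of what such a helper might do, the sketch below binds a concretely shaped tensor to the first input of an infer request; the element type, shape, and input index are assumptions, not taken from the original snippet.

```cpp
// Illustrative only: bind a concretely shaped tensor to the first model input.
// The f32 element type and the {1, 3, 224, 224} shape are assumptions.
void set_tensor_example(ov::InferRequest& infer_request) {
    ov::Tensor input_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224});
    // ... fill input_tensor with data ...
    infer_request.set_input_tensor(0, input_tensor);
}
```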