# Using the Reshape Inference Feature
## Introduction (C++)

@sphinxdirective
.. raw:: html

   <div id="switcher-cpp" class="switcher-anchor">C++</div>

@endsphinxdirective
OpenVINO™ provides two methods for runtime model reshaping: setting a new input shape and setting a new batch dimension value.
### Set a new input shape with the reshape() method
The InferenceEngine::CNNNetwork::reshape method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.
> **NOTES**:
> - Starting with the 2021.1 release, the Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases.
> - Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.
> - If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin.
### Set a new batch dimension value with the setBatchSize() method
The meaning of a model batch may vary depending on the model design.
This method does not deduce batch placement for inputs from the model architecture.
It assumes that the batch is placed at the zero index in the shape for all inputs and uses the InferenceEngine::CNNNetwork::reshape method to propagate updated shapes through the model.
The method transforms the model before a new shape propagation to relax a hard-coded batch dimension in the model, if any.
Use InferenceEngine::CNNNetwork::reshape instead of InferenceEngine::CNNNetwork::setBatchSize to set new input shapes for the model if the model has one of the following:
- Multiple inputs with different zero-index dimension meanings
- Input without a batch dimension
- 0D, 1D, or 3D shape
The InferenceEngine::CNNNetwork::setBatchSize method is a high-level API method that wraps the InferenceEngine::CNNNetwork::reshape method call and works for trivial models from the batch placement standpoint. Use InferenceEngine::CNNNetwork::reshape for other models. Using the InferenceEngine::CNNNetwork::setBatchSize method for models with a non-zero index batch placement or for models with inputs that do not have a batch dimension may lead to undefined behaviour.
You can change input shapes multiple times using the InferenceEngine::CNNNetwork::reshape and InferenceEngine::CNNNetwork::setBatchSize methods in any order.
If a model has a hard-coded batch dimension, use InferenceEngine::CNNNetwork::setBatchSize first to change the batch, then call InferenceEngine::CNNNetwork::reshape to update other dimensions, if needed.
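For illustration only, a minimal sketch of this ordering, assuming a trivially batched IR at a hypothetical path "model.xml" with a hypothetical input named "data", might look like this:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");  // assumed IR path

    // Change the batch first: setBatchSize relaxes a hard-coded batch
    // dimension, if any, and propagates the new value through the model.
    network.setBatchSize(8);

    // Then update the remaining dimensions, e.g. the spatial size of "data".
    auto shapes = network.getInputShapes();
    shapes["data"] = {8, 3, 320, 320};  // "data" is an assumed input name
    network.reshape(shapes);
    return 0;
}
```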
Inference Engine takes three kinds of model description as an input, which are converted into an InferenceEngine::CNNNetwork object:
- Intermediate Representation (IR) through InferenceEngine::Core::ReadNetwork
- ONNX model through InferenceEngine::Core::ReadNetwork
- nGraph function through the constructor of InferenceEngine::CNNNetwork
InferenceEngine::CNNNetwork keeps an ngraph::Function object with the model description internally.
The object should have fully-defined input shapes to be successfully loaded to Inference Engine plugins.
To resolve undefined input dimensions of a model, call the CNNNetwork::reshape method to provide new input shapes before loading to the Inference Engine plugin.
Run the following code right after InferenceEngine::CNNNetwork creation to explicitly check for model input names and shapes:
```cpp
CNNNetwork network = ... // read IR / ONNX model or create from nGraph::Function explicitly
const auto parameters = network.getFunction()->get_parameters();
for (const auto & parameter : parameters) {
    std::cout << "name: " << parameter->get_friendly_name() << " shape: " << parameter->get_partial_shape() << std::endl;
    if (parameter->get_partial_shape().is_dynamic())
        std::cout << "ATTENTION: Input shape is not fully defined. Use the CNNNetwork::reshape method to resolve it." << std::endl;
}
```
To feed input data of a shape that is different from the model input shape, reshape the model first.
Once the input shape of InferenceEngine::CNNNetwork is set, call the InferenceEngine::Core::LoadNetwork method to get an InferenceEngine::ExecutableNetwork object for inference with updated shapes.
There are other approaches to reshape the model during the stage of IR generation or nGraph::Function creation.
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the InferenceEngine::CNNNetwork::reshape method.
### Usage of the Reshape Method

The primary method of the feature is InferenceEngine::CNNNetwork::reshape. It gets new input shapes and propagates them from inputs to outputs through all intermediate layers of the given network. The method takes InferenceEngine::ICNNNetwork::InputShapes, a map of pairs: the name of the input data and its dimensions.
The algorithm for resizing a network is as follows:
1. Collect the map of input names and shapes from the Intermediate Representation (IR) using the helper method InferenceEngine::CNNNetwork::getInputShapes.
2. Set new input shapes.
3. Call reshape.
Here is a code example:
@snippet snippets/ShapeInference.cpp part0
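Since the snippet above is pulled from the sources and not expanded here, a rough sketch of the same three steps, assuming a single-input model in NCHW layout that tolerates spatial resizing, might look like this:

```cpp
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");  // assumed IR path

// 1. Collect the map of input names and shapes.
auto input_shapes = network.getInputShapes();

// 2. Set new input shapes for the (single) input.
auto & shape = input_shapes.begin()->second;
shape[2] = 448;  // new height, assuming NCHW layout
shape[3] = 448;  // new width

// 3. Call reshape, then load the network with the updated shapes.
network.reshape(input_shapes);
auto executable_network = core.LoadNetwork(network, "CPU");
```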
The Shape Inference feature is used in [Smart Classroom Demo](@ref omz_demos_smart_classroom_demo_cpp).
### Troubleshooting Reshape Errors

Operation semantics may impose restrictions on the input shapes of the operation. A shape collision during shape propagation may be a sign that a new shape does not satisfy the restrictions. Changing the model input shape may result in a shape collision at an intermediate operation.
Examples of such operations:
- Reshape operation with a hard-coded output shape value
- MatMul operation with the Const second input cannot be resized by spatial dimensions due to operation semantics
Model structure and logic should not change significantly after model reshaping.
- The Global Pooling operation is commonly used to reduce the output feature map of classification models. Having an input of the shape [N, C, H, W], Global Pooling returns an output of the shape [N, C, 1, 1]. Model architects usually express Global Pooling with the help of the Pooling operation with a fixed kernel size of [H, W]. During spatial reshape, having an input of the shape [N, C, H1, W1], Pooling with the fixed kernel size of [H, W] returns an output of the shape [N, C, H2, W2], where H2 and W2 are commonly not equal to 1 (with stride 1 and no padding, H2 = H1 - H + 1, which equals 1 only when H1 = H). It breaks the classification model structure. For example, publicly available Inception family models from TensorFlow* have this issue.
- Changing the model input shape may significantly affect its accuracy. For example, Object Detection models from TensorFlow have resizing restrictions by design. To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the pipeline.config file. For details, refer to the TensorFlow Object Detection API models resizing techniques.
### Extensibility

The Inference Engine provides a special mechanism that allows adding support of shape inference for custom operations. This mechanism is described in the Extensibility documentation.
## Introduction (Python)

@sphinxdirective
.. raw:: html

   <div id="switcher-python" class="switcher-anchor">Python</div>

@endsphinxdirective
OpenVINO™ provides the following methods for runtime model reshaping:

- Set a new input shape with the IENetwork.reshape method.

  The IENetwork.reshape method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.

  > **NOTES**:
  > - Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases.
  > - Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.
  > - If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin.

- Set a new batch dimension value with the IENetwork.batch_size method.

  The meaning of a model batch may vary depending on the model design. This method does not deduce batch placement for inputs from the model architecture. It assumes that the batch is placed at the zero index in the shape for all inputs and uses the IENetwork.reshape method to propagate updated shapes through the model. The method transforms the model before a new shape propagation to relax a hard-coded batch dimension in the model, if any.
Use IENetwork.reshape rather than IENetwork.batch_size to set new input shapes for the model if the model has:
- Multiple inputs with different zero-index dimension meanings
- Input without a batch dimension
- 0D, 1D, or 3D shape
The IENetwork.batch_size method is a high-level API method that wraps the IENetwork.reshape method call and works for trivial models from the batch placement standpoint. Use IENetwork.reshape for other models.
Using the IENetwork.batch_size method for models with a non-zero index batch placement or for models with inputs that do not have a batch dimension may lead to undefined behaviour.
You can change input shapes multiple times using the IENetwork.reshape and IENetwork.batch_size methods in any order. If a model has a hard-coded batch dimension, use IENetwork.batch_size first to change the batch, then call IENetwork.reshape to update other dimensions, if needed.
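As a rough Python sketch of this ordering, assuming a trivially batched IR at a hypothetical path "model.xml" (note that in the Python API batch_size is set by assignment):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml")  # assumed IR path

# Change the batch first; this relaxes a hard-coded batch dimension, if any.
net.batch_size = 8

# Then update the remaining dimensions, e.g. the spatial size of the input.
input_layer = next(iter(net.input_info))
n, c, h, w = net.input_info[input_layer].tensor_desc.dims
net.reshape({input_layer: (n, c, 320, 320)})
```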
Inference Engine takes three kinds of model description as an input, which are converted into an IENetwork object:
- Intermediate Representation (IR) through IECore.read_network
- ONNX model through IECore.read_network
- nGraph function through the constructor of IENetwork
IENetwork keeps an ngraph::Function object with the model description internally. The object should have fully defined input shapes to be successfully loaded to the Inference Engine plugins. To resolve undefined input dimensions of a model, call the IENetwork.reshape method providing new input shapes before loading to the Inference Engine plugin.
Run the following code right after IENetwork creation to explicitly check for model input names and shapes:
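A minimal sketch of such a check, assuming the network was just read from a hypothetical "model.xml" with IECore.read_network, might be:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml")  # assumed IR path

# Print each input name and its current shape; the shapes must be fully
# defined before the network can be loaded to a plugin.
for name, info in net.input_info.items():
    print(f"name: {name} shape: {info.tensor_desc.dims}")
```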
To feed input data of a shape that is different from the model input shape, reshape the model first.
Once the input shape of IENetwork is set, call the IECore.load_network method to get an ExecutableNetwork object for inference with updated shapes.
There are other approaches to reshape the model during the stage of IR generation or nGraph function creation.
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the IENetwork.reshape method.
### Troubleshooting Reshape Errors

Operation semantics may impose restrictions on the input shapes of the operation. A shape collision during shape propagation may be a sign that a new shape does not satisfy the restrictions. Changing the model input shape may result in a shape collision at an intermediate operation.
Examples of such operations:
- Reshape operation with a hard-coded output shape value
- MatMul operation with the Const second input cannot be resized by spatial dimensions due to operation semantics
A model's structure and logic should not significantly change after model reshaping.
- The Global Pooling operation is commonly used to reduce the output feature map of classification models. Having an input of the shape [N, C, H, W], Global Pooling returns an output of the shape [N, C, 1, 1]. Model architects usually express Global Pooling with the help of the Pooling operation with a fixed kernel size of [H, W]. During spatial reshape, having an input of the shape [N, C, H1, W1], Pooling with the fixed kernel size of [H, W] returns an output of the shape [N, C, H2, W2], where H2 and W2 are commonly not equal to 1. It breaks the classification model structure. For example, publicly available Inception family models from TensorFlow* have this issue.
- Changing the model input shape may significantly affect its accuracy. For example, Object Detection models from TensorFlow have resizing restrictions by design. To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the pipeline.config file. For details, refer to the TensorFlow Object Detection API models resizing techniques.
### Usage of the Reshape Method

The primary method of the feature is IENetwork.reshape. It gets new input shapes and propagates them from inputs to outputs through all intermediate layers of the given network. Use IENetwork.input_info to get the names of the input layers and .tensor_desc.dims to get the current network input shape.
The following code example shows how to reshape a model to the size of an input image.
```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read an input image and transpose it from HWC to NCHW layout
image = cv2.imread(path_to_image_file)
input_image = image.transpose((2, 0, 1))
input_image = np.expand_dims(input_image, axis=0)

# Load the model and get input info
# Note that this model must support arbitrary input shapes
net = ie.read_network(model=path_to_xml_file)
input_layer = next(iter(net.input_info))
print(f"Input shape: {net.input_info[input_layer].tensor_desc.dims}")

# Call reshape
net.reshape({input_layer: input_image.shape})
print(f"New input shape: {net.input_info[input_layer].tensor_desc.dims}")

# Load the model to the device and proceed with inference
exec_net = ie.load_network(network=net, device_name="CPU")
```
### Extensibility

The Inference Engine provides a special mechanism that allows adding support of shape inference for custom operations. This mechanism is described in the Extensibility documentation.