Merge IE & nGraph DG (#10055)
* Changed folder for documentation
* Fixed links
* Merged nGraph DG into the OpenVINO Runtime UG
* Fixed errors
* Fixed some issues
* Fixed tree
* Fixed typo
* Fixed name
* Fixed snippets
* Small fixes
* Fixed comments
* Fixed doc issues
* Applied review suggestions to docs/documentation.md, README.md, docs/HOWTO/Custom_Layers_Guide.md, docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md, and docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
@@ -44,8 +44,7 @@ Please report questions, issues and suggestions using:
\* Other names and brands may be claimed as the property of others.

[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
[Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
[nGraph]:https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_DevGuide.html
[OpenVINO Runtime™]:https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino
@@ -31,15 +31,15 @@ There are three steps to support inference of a model with custom operation(s):
1. Add support for a custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
the Model Optimizer can generate the IR with the operation.
2. Create an operation set and implement a custom nGraph operation in it as described in the
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md).
3. Implement a custom operation in one of the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
[Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md).
3. Implement a custom operation in one of the [Inference Engine](../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md)
plugins to support inference of this operation using a particular target hardware (CPU, GPU or VPU).

To see the operations that are supported by each device plugin for the Inference Engine, refer to the
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
[Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md).

> **NOTE**: If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../OV_Runtime_UG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices allowing the unsupported operations on one device to "fall back" to
> run on another device (e.g., CPU) that does support those operations.
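As a quick illustration of the HETERO fallback described in the note above, a minimal sketch (assuming the classic Inference Engine C++ API and a placeholder `model.xml` IR) could look like this:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Read an IR produced by the Model Optimizer (path is illustrative)
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    // Prefer GPU, but let operations the GPU plugin cannot execute fall back to CPU
    InferenceEngine::ExecutableNetwork executable =
        core.LoadNetwork(network, "HETERO:GPU,CPU");
    return 0;
}
```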
@@ -61,20 +61,20 @@ operation. Refer to the "Operation Extractor" section of

## Custom Operations Extensions for the Inference Engine

Inference Engine provides an extension mechanism to support new operations. This mechanism is described in [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
Inference Engine provides an extension mechanism to support new operations. This mechanism is described in [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md).

Each device plugin includes a library of optimized implementations to execute known operations which must be extended to execute a custom operation. The custom operation extension is implemented according to the target device:

- Custom Operation CPU Extension
- A compiled shared library (`.so` or `.dll`) needed by the CPU Plugin for executing the custom operation
on a CPU. Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
on a CPU. Refer to the [How to Implement Custom CPU Operations](../OV_Runtime_UG/Extensibility_DG/CPU_Kernel.md) for more
details.
- Custom Operation GPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU along with an operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to the [How to Implement Custom GPU Operations](../IE_DG/Extensibility_DG/GPU_Kernel.md) for more details.
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU along with an operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to the [How to Implement Custom GPU Operations](../OV_Runtime_UG/Extensibility_DG/GPU_Kernel.md) for more details.
- Custom Operation VPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU along with an operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to [How to Implement Custom Operations for VPU](../IE_DG/Extensibility_DG/VPU_Kernel.md) for more details.
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU along with an operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to [How to Implement Custom Operations for VPU](../OV_Runtime_UG/Extensibility_DG/VPU_Kernel.md) for more details.

Also, it is necessary to implement an nGraph custom operation according to [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
Also, it is necessary to implement an nGraph custom operation according to [Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
operation and correctly infer output tensor shape and type.
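For orientation, a condensed sketch of what such an nGraph operation can look like is shown below. It is a simplified illustration, not the actual `template_extension` code referenced later in this guide; the class name and the `inverse` attribute are placeholders.

```cpp
#include <ngraph/ngraph.hpp>

// A minimal custom operation: one data input and one boolean attribute.
class FFTOp : public ngraph::op::Op {
public:
    static constexpr ngraph::NodeTypeInfo type_info{"FFT", 0};
    const ngraph::NodeTypeInfo& get_type_info() const override { return type_info; }

    FFTOp() = default;
    FFTOp(const ngraph::Output<ngraph::Node>& data, bool inverse)
        : Op({data}), m_inverse(inverse) {
        constructor_validate_and_infer_types();
    }

    // The output has the same shape and element type as the input
    void validate_and_infer_types() override {
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ngraph::Node> clone_with_new_inputs(
            const ngraph::OutputVector& new_args) const override {
        return std::make_shared<FFTOp>(new_args.at(0), m_inverse);
    }

    bool visit_attributes(ngraph::AttributeVisitor& visitor) override {
        visitor.on_attribute("inverse", m_inverse);
        return true;
    }

private:
    bool m_inverse = false;
};

constexpr ngraph::NodeTypeInfo FFTOp::type_info;
```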
## Enabling Magnetic Resonance Image Reconstruction Model

@@ -125,7 +125,7 @@ Firstly, open the model in TensorBoard or other TensorFlow* model visualization

batch dimension because the value for the batch dimension is not hardcoded in the model. Model Optimizer needs to set all
dynamic dimensions to some specific value to create the IR; therefore, specify the command line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
described in the [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
described in the [Using Shape Inference](../OV_Runtime_UG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command line parameters used for the model conversion.

```sh
@@ -263,7 +263,7 @@ The sub-graph corresponding to the originally non-supported one is depicted in t

### Inference Engine Extension Implementation
Now it is necessary to implement the extension for the CPU plugin with operation "FFT" introduced previously. The code
below is based on the template extension described in [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
below is based on the template extension described in [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md).

#### CMake Build File
The first step is to create a CMake configuration file which builds the extension. The content of the "CMakeLists.txt"

@@ -284,7 +284,7 @@ in the `fft_op.cpp` file with the following content:

@snippet template_extension/old/fft_op.cpp fft_op:implementation

Refer to the [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) for more details.
Refer to the [Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md) for more details.

#### CPU FFT Kernel Implementation
The operation implementation for the CPU plugin uses OpenCV to perform the FFT. The header file "fft_kernel.hpp" has the

@@ -296,11 +296,11 @@ The "fft_kernel.cpp" with the implementation of the CPU has the following conten

@snippet template_extension/old/fft_kernel.cpp fft_kernel:implementation

Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more details.
Refer to the [How to Implement Custom CPU Operations](../OV_Runtime_UG/Extensibility_DG/CPU_Kernel.md) for more details.
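The essence of that kernel can be sketched as follows; this is a simplified, hypothetical helper (function name, buffer layout, and dimensions are illustrative), not the `template_extension` source itself:

```cpp
#include <opencv2/core.hpp>

// Wrap the input/output buffers as cv::Mat without copying and let OpenCV
// compute the 2D DFT; data is treated as interleaved real/imaginary pairs.
void fft2d(const float* input, float* output, int rows, int cols, bool inverse) {
    cv::Mat in(rows, cols, CV_32FC2, const_cast<float*>(input));
    cv::Mat out(rows, cols, CV_32FC2, output);
    cv::dft(in, out, inverse ? cv::DFT_INVERSE | cv::DFT_SCALE : 0);
}
```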
#### Extension Library Implementation
The last step is to create an extension library "extension.cpp" and "extension.hpp" which will include the FFT
operation for the CPU plugin. The code of the library is described in the [Extension Library](../IE_DG/Extensibility_DG/Extension.md).
operation for the CPU plugin. The code of the library is described in the [Extension Library](../OV_Runtime_UG/Extensibility_DG/Extension.md).
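Once the library is built, it has to be registered with the Inference Engine Core before reading an IR that uses the custom operation. A minimal sketch (library and model names are placeholders) might be:

```cpp
#include <inference_engine.hpp>
#include <memory>

int main() {
    InferenceEngine::Core core;
    // Register the compiled extension so the IR reader and the CPU plugin
    // know about the custom "FFT" operation (library path is illustrative)
    core.AddExtension(std::make_shared<InferenceEngine::Extension>("libtemplate_extension.so"));

    InferenceEngine::CNNNetwork network = core.ReadNetwork("model_with_fft.xml");
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    return 0;
}
```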
### Building and Running the Custom Extension
To build the extension, run the following:<br>

@@ -335,8 +335,8 @@ python3 mri_reconstruction_demo.py \

- OpenVINO™ toolkit online documentation: [https://docs.openvino.ai](https://docs.openvino.ai)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md)
- [OpenVINO™ Toolkit Samples Overview](../OV_Runtime_UG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
@@ -6,6 +6,7 @@

:maxdepth: 1
:hidden:

openvino_docs_MO_DG_IR_and_opsets
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer

@@ -19,7 +20,7 @@

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer process assumes you have a network model trained using supported deep learning frameworks: Caffe*, TensorFlow*, Kaldi*, MXNet* or converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
The Model Optimizer process assumes you have a network model trained using supported deep learning frameworks: Caffe*, TensorFlow*, Kaldi*, MXNet* or converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md).

> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.
@@ -9,7 +9,7 @@ Model Optimizer performs preprocessing to a model. It is possible to optimize th

If, for example, your network assumes the RGB inputs, the Model Optimizer can swap the channels in the first convolution using the `--reverse_input_channels` command line option, so you do not need to convert your inputs to RGB every time you get the BGR image, for example, from OpenCV*.

- **Larger batch size**<br>
Notice that devices like GPU perform better with a larger batch size. It is also possible to set the batch size at runtime using the Inference Engine [ShapeInference feature](../../IE_DG/ShapeInference.md).
Notice that devices like GPU perform better with a larger batch size. It is also possible to set the batch size at runtime using the Inference Engine [ShapeInference feature](../../OV_Runtime_UG/ShapeInference.md).

- **Resulting IR precision**<br>
The resulting IR precision, for instance, `FP16` or `FP32`, directly affects performance. As CPU now supports `FP16` (while internally upscaling to `FP32` anyway) and because this is the best precision for a GPU target, you may want to always convert models to `FP16`. Notice that this is the only precision that Intel® Movidius™ Myriad™ 2 and Intel® Myriad™ X VPUs support.

@@ -18,7 +18,7 @@ You need to build your performance conclusions on reproducible data. Do the perf

- If the warm-up run does not help or execution time still varies, you can try running a large number of iterations and then average the results.
- For time values that range too much, use the geometric mean (see the sketch after this list).
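As a small illustration of the geometric mean mentioned above (a plain helper, not part of any OpenVINO sample):

```cpp
#include <cmath>
#include <vector>

// Geometric mean of per-iteration latencies (in milliseconds); it is less
// sensitive to occasional outliers than the arithmetic mean.
double geomean(const std::vector<double>& latencies_ms) {
    double log_sum = 0.0;
    for (double value : latencies_ms)
        log_sum += std::log(value);
    return std::exp(log_sum / latencies_ms.size());
}
```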
Refer to the [Inference Engine Samples](../../IE_DG/Samples_Overview.md) for code examples for the performance measurements. Almost every sample, except interactive demos, has a `-ni` option to specify the number of iterations.
Refer to the [Inference Engine Samples](../../OV_Runtime_UG/Samples_Overview.md) for code examples for the performance measurements. Almost every sample, except interactive demos, has a `-ni` option to specify the number of iterations.

## Getting performance numbers using OpenVINO tool

@@ -39,7 +39,7 @@ to execute on the CPU instead.

For example, for the CPU throughput mode from the previous section, you can play with the number of streams (`-nstreams` command-line param).
Try different values of the `-nstreams` argument, from `1` up to the number of CPU cores, and find the one that provides the best performance. For example, on an 8-core CPU, compare `-nstreams 1` (which is a latency-oriented scenario) to the `2`, `4` and `8` streams. Notice that `benchmark_app` automatically queries/creates/runs the number of requests required to saturate the given number of streams.

Finally, notice that when you don't specify the number of streams with `-nstreams`, the "AUTO" value for the streams is used, e.g. for the CPU this is [CPU_THROUGHPUT_AUTO](../../IE_DG/supported_plugins/CPU.md). You can spot the actual value behind "AUTO" for your machine in the application output.
Finally, notice that when you don't specify the number of streams with `-nstreams`, the "AUTO" value for the streams is used, e.g. for the CPU this is [CPU_THROUGHPUT_AUTO](../../OV_Runtime_UG/supported_plugins/CPU.md). You can spot the actual value behind "AUTO" for your machine in the application output.
Notice that the "AUTO" number is not necessarily the most optimal, so it is generally recommended to play either with the benchmark_app's "-nstreams" as described above, or via the [new Workbench tool](@ref workbench_docs_Workbench_DG_Introduction). This allows you to simplify the app-logic, as you don't need to combine multiple inputs into a batch to achieve good CPU performance.
Instead, it is possible to keep a separate infer request per camera or another source of input and process the requests in parallel using the Async API.
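A rough sketch of that pattern with the classic Inference Engine C++ API is shown below (the model path, device, and number of input sources are placeholders); the streams value is left at "AUTO", as `benchmark_app` does:

```cpp
#include <inference_engine.hpp>
#include <vector>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Let the plugin pick the number of streams ("AUTO")
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(
        network, "CPU",
        {{CONFIG_KEY(CPU_THROUGHPUT_STREAMS), CONFIG_VALUE(CPU_THROUGHPUT_AUTO)}});

    // One infer request per input source, processed in parallel
    std::vector<InferenceEngine::InferRequest> requests;
    for (int i = 0; i < 4; ++i)
        requests.push_back(executable.CreateInferRequest());

    for (auto& request : requests)
        request.StartAsync();      // fill the input blobs per camera before this call
    for (auto& request : requests)
        request.Wait(InferenceEngine::InferRequest::WaitMode::RESULT_READY);
    return 0;
}
```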
@@ -47,7 +47,7 @@ Instead, it is possible to keep a separate infer request per camera or another s

When comparing the Inference Engine performance with the framework or another reference code, make sure that both versions are as similar as possible:

- Wrap exactly the inference execution (refer to the [Inference Engine Samples](../../IE_DG/Samples_Overview.md) for examples), as in the timing sketch after this list.
- Wrap exactly the inference execution (refer to the [Inference Engine Samples](../../OV_Runtime_UG/Samples_Overview.md) for examples), as in the timing sketch after this list.
- Do not include model loading time.
- Ensure the inputs are identical for the Inference Engine and the framework. For example, Caffe\* allows auto-populating the input with random values. Notice that it might give different performance than on real images.
- Similarly, for correct performance comparison, make sure the access pattern, for example, input layouts, is optimal for Inference Engine (currently, it is NCHW).
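For reference, a minimal timing sketch that measures only the inference calls; everything else, including model loading and the warm-up run, stays outside the timed region (the model path is a placeholder):

```cpp
#include <chrono>
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");
    auto executable = core.LoadNetwork(network, "CPU");   // not timed
    auto request = executable.CreateInferRequest();
    request.Infer();                                       // warm-up, not timed

    const int iterations = 100;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        request.Infer();                                   // only this is timed
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> total = end - start;
    std::cout << "Average latency: " << total.count() / iterations << " ms\n";
    return 0;
}
```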
@@ -64,7 +64,7 @@ Alternatively, you can gather the raw profiling data that samples report, the se

### Internal Inference Performance Counters <a name="performance-counters"></a>

Almost every sample (inspect command-line options for a specific sample with `-h`) supports a `-pc` command that outputs internal execution breakdown. Refer to the [samples code](../../IE_DG/Samples_Overview.md) for the actual Inference Engine API behind that.
Almost every sample (inspect command-line options for a specific sample with `-h`) supports a `-pc` command that outputs internal execution breakdown. Refer to the [samples code](../../OV_Runtime_UG/Samples_Overview.md) for the actual Inference Engine API behind that.
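That API can be sketched as follows; this assumes performance counting was enabled when the network was loaded (the `PERF_COUNT` config key set to `YES`), and is an illustration rather than the exact sample code:

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <map>

// Print the per-layer execution breakdown, the same data the samples show with -pc
void print_counters(InferenceEngine::InferRequest& request) {
    std::map<std::string, InferenceEngine::InferenceEngineProfileInfo> counters =
        request.GetPerformanceCounts();
    for (const auto& item : counters) {
        std::cout << item.first << ": " << item.second.realTime_uSec << " us ("
                  << item.second.cpu_uSec << " us on CPU)\n";
    }
}
```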
Below is an example of CPU plugin output for a network (since the device is CPU, the layers wall clock `realTime` and the `cpu` time are the same):

@@ -214,7 +214,7 @@ One of the layers in the specified topology might not have inputs or values. Ple

#### 24. What does the message "Part of the nodes was not translated to IE. Stopped" mean? <a name="question-24"></a>

Some of the layers are not supported by the Inference Engine and cannot be translated to an Intermediate Representation. You can extend the Model Optimizer by allowing generation of new types of layers and implement these layers in the dedicated Inference Engine plugins. For more information, refer to the [Custom Layers Guide](../../HOWTO/Custom_Layers_Guide.md) and [Inference Engine Extensibility Mechanism](../../IE_DG/Extensibility_DG/Intro.md)
Some of the layers are not supported by the Inference Engine and cannot be translated to an Intermediate Representation. You can extend the Model Optimizer by allowing generation of new types of layers and implement these layers in the dedicated Inference Engine plugins. For more information, refer to the [Custom Layers Guide](../../HOWTO/Custom_Layers_Guide.md) and [Inference Engine Extensibility Mechanism](../../OV_Runtime_UG/Extensibility_DG/Intro.md)

#### 25. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? <a name="question-25"></a>

@@ -638,4 +638,4 @@ Starting from the 2022.1 version, the default IR conversion path for ONNX models

Certain features, such as `--extensions` and `--transformations_config`, are not yet fully supported on the new frontends.
For `--extensions`, the new frontends support only paths to shared libraries (.dll and .so). For `--transformations_config`, they support JSON configurations with defined library fields.
Inputs freezing (enabled by `--freeze_placeholder_with_value` or `--input` arguments) is not supported on the new frontends.
The IR conversion falls back to the old path if a user does not select any expected path of conversion explicitly (by `--use_new_frontend` or `--use_legacy_frontend` MO arguments) and an unsupported pre-defined scenario is detected on the new frontend path.
The IR conversion falls back to the old path if a user does not select any expected path of conversion explicitly (by `--use_new_frontend` or `--use_legacy_frontend` MO arguments) and an unsupported pre-defined scenario is detected on the new frontend path.
@@ -4,8 +4,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Caffe\*.
2. [Convert a Caffe\* Model](#Convert_From_Caffe) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md)
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment

## Supported Topologies

@@ -14,8 +14,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Kaldi\*.
2. [Convert a Kaldi\* Model](#Convert_From_Kaldi) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md).
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.

> **NOTE**: The Model Optimizer supports the [nnet1](http://kaldi-asr.org/doc/dnn1.html) and [nnet2](http://kaldi-asr.org/doc/dnn2.html) formats of Kaldi models. Support of the [nnet3](http://kaldi-asr.org/doc/dnn3.html) format is limited.
@@ -15,8 +15,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for MXNet* (MXNet was used to train your model)
2. [Convert a MXNet model](#ConvertMxNet) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md)
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment

## Supported Topologies

@@ -4,8 +4,8 @@ A summary of the steps for optimizing and deploying a model trained with Paddle\

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Paddle\*.
2. [Convert a Paddle\* Model](#Convert_From_Paddle) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md).
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.

## Supported Topologies
@@ -48,8 +48,8 @@ PyTorch* framework is supported through export to ONNX\* format. A summary of th

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for ONNX\*.
2. [Export PyTorch model to ONNX\*](#export-to-onnx).
3. [Convert an ONNX\* model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../IE_DG/Samples_Overview.md).
5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.

## Export PyTorch\* Model to ONNX\* Format <a name="export-to-onnx"></a>

@@ -29,8 +29,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for TensorFlow\* (TensorFlow was used to train your model).
2. [Freeze the TensorFlow model](#freeze-the-tensorflow-model) if your model is not already frozen, or skip this step and use the [instruction](#loading-nonfrozen-models) to convert a non-frozen model.
3. [Convert a TensorFlow\* model](#Convert_From_TF) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../IE_DG/Samples_Overview.md).
5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.

## Supported Topologies
@@ -3,7 +3,7 @@

## Introduction

Inference Engine CPU and GPU plugins can infer models in low precision.
For details, refer to [Low Precision Inference on the CPU](../../../IE_DG/Int8Inference.md).
For details, refer to [Low Precision Inference on the CPU](../../../OV_Runtime_UG/Int8Inference.md).

Intermediate Representation (IR) should be specifically formed to be suitable for low precision inference.
Such an IR is called a Low Precision IR and you can generate it in two ways:

@@ -104,5 +104,5 @@ mo --input_model rnnt_prediction.onnx --input "symbol[1 1],hidden_in_1[2 1 320],

mo --input_model rnnt_joint.onnx --input "0[1 1 1024],1[1 1 320]"
```
Please note that the hardcoded value for sequence length (157) was taken from MLCommons, but the conversion to IR preserves
network [reshapeability](../../../../IE_DG/ShapeInference.md), which means you can change input shapes manually to any value either during conversion or
network [reshapeability](../../../../OV_Runtime_UG/ShapeInference.md), which means you can change input shapes manually to any value either during conversion or
inference.
@@ -116,4 +116,4 @@ Run the Model Optimizer with the following command line parameters to generate r

```
For other applicable parameters, refer to [Convert Model from TensorFlow](../Convert_Model_From_TensorFlow.md).

For more information about reshape abilities, refer to [Using Shape Inference](../../../../IE_DG/ShapeInference.md).
For more information about reshape abilities, refer to [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md).

@@ -47,7 +47,7 @@ There are two specificities with the supported part of the model.

The first is that the model contains an input with sequence length, so the model can be converted only with
a fixed input length shape and thus is not reshapeable.
Refer to the [Using Shape Inference](../../../../IE_DG/ShapeInference.md).
Refer to the [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md).

The second is that the frozen model still has two variables, `previous_state_c` and `previous_state_h`; the figure
with the frozen *.pb model is shown below. It means that the model keeps training these variables at each inference.
@@ -2,7 +2,7 @@

> **NOTES**:
> * Starting with the 2022.1 release, the Model Optimizer can convert the TensorFlow\* Object Detection API Faster and Mask RCNNs topologies differently. By default, the Model Optimizer adds operation "Proposal" to the generated IR. This operation needs an additional input to the model with name "image_info" which should be fed with several values describing the pre-processing applied to the input image (refer to the [Proposal](../../../../ops/detection/Proposal_4.md) operation specification for more information). However, this input is redundant for the models trained and inferred with equal size images. Model Optimizer can generate IR for such models and insert operation [DetectionOutput](../../../../ops/detection/DetectionOutput_1.md) instead of `Proposal`. The `DetectionOutput` operation does not require an additional model input "image_info" and, moreover, for some models the produced inference results are closer to the original TensorFlow\* model. In order to trigger the new behavior, the attribute "operation_to_add" in the corresponding JSON transformation configuration file should be set to "DetectionOutput" instead of the default "Proposal".
> * Starting with the 2021.1 release, the Model Optimizer converts the TensorFlow\* Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the Inference Engine using dedicated reshape API. Refer to [Using Shape Inference](../../../../IE_DG/ShapeInference.md) for more information on how to use this feature. It is possible to change both spatial dimensions of the input image and the batch size.
> * Starting with the 2021.1 release, the Model Optimizer converts the TensorFlow\* Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the Inference Engine using dedicated reshape API. Refer to [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md) for more information on how to use this feature. It is possible to change both spatial dimensions of the input image and the batch size.
> * To generate IRs for TF 1 SSD topologies, the Model Optimizer creates a number of `PriorBoxClustered` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the Inference Engine using the dedicated Inference Engine API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations preventing the topology input shape from being changed.

## How to Convert a Model

@@ -63,7 +63,7 @@ based on deep learning in various tasks, including Image Classifiacton, Visual O

Speech Recognition, Natural Language Processing and others. Refer to the links below for more details.

* [Inference Engine Samples](../../../../IE_DG/Samples_Overview.md)
* [Inference Engine Samples](../../../../OV_Runtime_UG/Samples_Overview.md)
* [Open Model Zoo Demos](@ref omz_demos)

## Important Notes About Feeding Input Images to the Samples
@@ -157,7 +157,7 @@ the following (for the case when `axis` is not equal to 0 and 1):

4. Use the concatenated value as the second input to the `Reshape` operation.

It is highly recommended that you write shape-agnostic transformations to avoid model reshape-ability issues. Refer to
[Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information related to the reshaping of a model.
[Using Shape Inference](../../../OV_Runtime_UG/ShapeInference.md) for more information related to the reshaping of a model.

More information on how to develop front phase transformations, along with a dedicated API description, is provided in the
[Front Phase Transformations](#front-phase-transformations).

@@ -171,7 +171,7 @@ defined as a mathematical expression using the [ShapeOf](../../../ops/shape/Shap

> **NOTE**: Model Optimizer does not fold sub-graphs starting from the [ShapeOf](../../../ops/shape/ShapeOf_3.md)
> operation by default because this leads to a model non-reshape-ability (the command line parameter `--static_shape`
> can override this behavior). Refer to [Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information
> can override this behavior). Refer to [Using Shape Inference](../../../OV_Runtime_UG/ShapeInference.md) for more information
> related to reshaping of a model.

Model Optimizer calculates output shapes for all operations in a model to write them to Intermediate Representation
@@ -507,7 +507,7 @@ There are a number of common attributes used in the operations. Here is the list

Model Optimizer operations this attribute should be set to `None`. The model conversion fails if an operation with
`type` equal to `None` comes to the IR emitting phase. **Mandatory**.
* `version` — the operation set (opset) name the operation belongs to. If not specified, the Model Optimizer sets it
equal to `experimental`. Refer to [nGraph Basic Concepts](@ref openvino_docs_nGraph_DG_basic_concepts) for more
equal to `experimental`. Refer to [OpenVINO Model Representation](@ref openvino_docs_OV_Runtime_UG_Model_Representation) for more
information about operation sets. **Mandatory**.
* `op` — Model Optimizer type of the operation. In many cases, the value of `type` is equal to the value of `op`. But
when the Model Optimizer cannot instantiate the opset operation during model loading, it creates an instance of an internal

@@ -1259,6 +1259,6 @@ Refer to the `extensions/back/GatherNormalizer.py` for the example of a such typ

## See Also <a name="see-also"></a>
* [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md)
* [Converting a Model to Intermediate Representation (IR)](../convert_model/Converting_Model.md)
* [nGraph Basic Concepts](../../../nGraph_DG/nGraph_basic_concepts.md)
* [Inference Engine Extensibility Mechanism](../../../IE_DG/Extensibility_DG/Intro.md)
* [OpenVINO Model Representation](../../../OV_Runtime_UG/model_representation.md)
* [Inference Engine Extensibility Mechanism](../../../OV_Runtime_UG/Extensibility_DG/Intro.md)
* [Extending the Model Optimizer with Caffe* Python Layers](Extending_Model_Optimizer_with_Caffe_Python_Layers.md)
@@ -1,4 +1,4 @@

# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
# OpenVINO™ Runtime User Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}

@sphinxdirective

@@ -8,6 +8,8 @@

openvino_2_0_transition_guide
openvino_docs_IE_DG_Integrate_with_customer_application_new_API
openvino_docs_OV_Runtime_UG_Model_Representation
ngraph_transformation
openvino_docs_deployment_optimization_guide_dldt_optimization_guide
openvino_docs_IE_DG_Device_Plugins
Direct ONNX Format Support <openvino_docs_IE_DG_ONNX_Support>

@@ -35,8 +37,6 @@ The scheme below illustrates the typical workflow for deploying a trained deep l



\\* _nGraph_ is the internal graph representation in the OpenVINO™ toolkit. Use it to [build a model from source code](https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_build_function.html).

## Video
@@ -34,15 +34,13 @@ Read the sections below to learn about each item.

```

2. **Include Inference Engine, nGraph and OpenCV libraries** in `project/CMakeLists.txt`
[OpenCV](https://docs.opencv.org/master/db/df5/tutorial_linux_gcc_cmake.html) integration is needed mostly for pre-processing input data and nGraph for more complex applications using [nGraph API](../nGraph_DG/nGraph_dg.md).
[OpenCV](https://docs.opencv.org/master/db/df5/tutorial_linux_gcc_cmake.html) integration is needed mostly for pre-processing input data and model representation in OpenVINO™ Runtime for more complex applications using [OpenVINO Model API](../OV_Runtime_UG/model_representation.md).
``` cmake
cmake_minimum_required(VERSION 3.0.0)
project(project_name)
find_package(ngraph REQUIRED)
find_package(InferenceEngine REQUIRED)
find_package(OpenCV REQUIRED)
find_package(OpenVINO REQUIRED)
add_executable(${PROJECT_NAME} src/main.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE ${InferenceEngine_LIBRARIES} ${OpenCV_LIBS} ${NGRAPH_LIBRARIES})
target_link_libraries(${PROJECT_NAME} PRIVATE openvino::runtime)
```
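For context, a minimal `src/main.cpp` that matches the updated CMake file above (linking only against `openvino::runtime` and using OpenCV for input preparation) might look as follows; the model and image paths are placeholders, and this is a sketch rather than the full pipeline described later in this guide:

```cpp
#include <openvino/openvino.hpp>
#include <opencv2/opencv.hpp>

int main() {
    ov::Core core;

    // Read and compile the model (paths are illustrative)
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // OpenCV is used here only to prepare the input data
    cv::Mat image = cv::imread("input.jpg");
    // ... fill the input tensor from `image`, then run inference
    request.infer();
    return 0;
}
```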
### Use Inference Engine API to Implement Inference Pipeline

@@ -457,7 +455,7 @@ Load the model to the device using `load_network()`:

@endsphinxdirective

This example is designed for the CPU device; refer to the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) page to read about more devices.
This example is designed for the CPU device; refer to the [Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md) page to read about more devices.

#### Step 4. Prepare input
```py

@@ -491,4 +489,4 @@ Congratulations, you have made your first Python application with OpenVINO™ to

[ie_api_flow_cpp]: img/BASIC_IE_API_workflow_Cpp.svg
[ie_api_use_cpp]: img/IMPLEMENT_PIPELINE_with_API_C.svg
[ie_api_flow_python]: img/BASIC_IE_API_workflow_Python.svg
[ie_api_use_python]: img/IMPLEMENT_PIPELINE_with_API_Python.svg
[ie_api_use_python]: img/IMPLEMENT_PIPELINE_with_API_Python.svg
@@ -43,8 +43,8 @@ If a model has a hard-coded batch dimension, use `InferenceEngine::CNNNetwork::s

Inference Engine accepts three kinds of model description as an input, which are converted into an `InferenceEngine::CNNNetwork` object:
1. [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) through `InferenceEngine::Core::ReadNetwork`
2. [ONNX model](../IE_DG/ONNX_Support.md) through `InferenceEngine::Core::ReadNetwork`
3. [nGraph function](../nGraph_DG/nGraph_dg.md) through the constructor of `InferenceEngine::CNNNetwork`
2. [ONNX model](../OV_Runtime_UG/ONNX_Support.md) through `InferenceEngine::Core::ReadNetwork`
3. [OpenVINO Model](../OV_Runtime_UG/model_representation.md) through the constructor of `InferenceEngine::CNNNetwork`
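The three entry points listed above can be sketched in a few lines of C++ (file names are placeholders, and the in-code model is a deliberately trivial example):

```cpp
#include <inference_engine.hpp>
#include <ngraph/ngraph.hpp>
#include <ngraph/opsets/opset8.hpp>

int main() {
    InferenceEngine::Core core;

    // 1. Intermediate Representation produced by the Model Optimizer
    InferenceEngine::CNNNetwork from_ir = core.ReadNetwork("model.xml", "model.bin");

    // 2. ONNX model read directly
    InferenceEngine::CNNNetwork from_onnx = core.ReadNetwork("model.onnx");

    // 3. nGraph function built in code
    auto param = std::make_shared<ngraph::opset8::Parameter>(
        ngraph::element::f32, ngraph::Shape{1, 3, 224, 224});
    auto relu = std::make_shared<ngraph::opset8::Relu>(param);
    auto function = std::make_shared<ngraph::Function>(
        ngraph::OutputVector{relu}, ngraph::ParameterVector{param}, "simple_model");
    InferenceEngine::CNNNetwork from_function(function);

    return 0;
}
```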
`InferenceEngine::CNNNetwork` keeps an `ngraph::Function` object with the model description internally.
The object should have fully-defined input shapes to be successfully loaded to Inference Engine plugins.

@@ -66,7 +66,7 @@ To feed input data of a shape that is different from the model input shape, resh

Once the input shape of `InferenceEngine::CNNNetwork` is set, call the `InferenceEngine::Core::LoadNetwork` method to get an `InferenceEngine::ExecutableNetwork` object for inference with updated shapes.
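A compact sketch of that flow with the classic API (the model path, target shape, and NCHW layout assumption are illustrative):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Update the first input: new batch and spatial resolution (assuming NCHW)
    auto shapes = network.getInputShapes();   // std::map<std::string, SizeVector>
    InferenceEngine::SizeVector& input_shape = shapes.begin()->second;
    input_shape[0] = 2;      // batch
    input_shape[2] = 320;    // height
    input_shape[3] = 320;    // width
    network.reshape(shapes);

    // Load the network with the updated shapes
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    return 0;
}
```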
There are other approaches to reshape the model during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when_to_specify_input_shapes">IR generation</a> or [nGraph::Function creation](../nGraph_DG/build_function.md).
There are other approaches to reshape the model during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when_to_specify_input_shapes">IR generation</a> or [ov::Model creation](../OV_Runtime_UG/model_representation.md).

Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `InferenceEngine::CNNNetwork::reshape` method.

@@ -223,4 +223,4 @@ The Inference Engine provides a special mechanism that allows adding support of

### See Also:

[Hello Reshape Python Sample](../../inference_engine/ie_bridges/python/sample/hello_reshape_ssd/README.html)
[Hello Reshape Python Sample](../../samples/python/hello_reshape_ssd/README.html)
Before Width: | Height: | Size: 49 KiB After Width: | Height: | Size: 49 KiB |
|
Before Width: | Height: | Size: 49 KiB After Width: | Height: | Size: 49 KiB |
|
Before Width: | Height: | Size: 42 KiB After Width: | Height: | Size: 42 KiB |
|
Before Width: | Height: | Size: 47 KiB After Width: | Height: | Size: 47 KiB |
|
Before Width: | Height: | Size: 32 KiB After Width: | Height: | Size: 32 KiB |
|
Before Width: | Height: | Size: 246 KiB After Width: | Height: | Size: 246 KiB |
|
Before Width: | Height: | Size: 96 KiB After Width: | Height: | Size: 96 KiB |
|
Before Width: | Height: | Size: 79 KiB After Width: | Height: | Size: 79 KiB |
|
Before Width: | Height: | Size: 118 KiB After Width: | Height: | Size: 118 KiB |
|
Before Width: | Height: | Size: 203 KiB After Width: | Height: | Size: 203 KiB |
|
Before Width: | Height: | Size: 162 KiB After Width: | Height: | Size: 162 KiB |