Compare commits

...

9 Commits

Author SHA1 Message Date
Andrey Zaytsev
ba35364a53 Feature/benchmarks 2021 3 ehl (#5191)
* Added EHL config

* Updated graphs

* improve table formatting

* Wrap <iframe> tag with \htmlonly \endhtmlonly to avoid build errors

* Updated graphs

* Fixed links to TDP and Price for 8380
2021-04-12 15:38:07 +03:00
Anton Chetverikov
1c84064e06 Add specification for ExperimentalDetectron* operations (#5128) 2021-04-07 14:01:08 +03:00
Nikolay Tyukaev
c370284bc4 fix doc iframe issue - 2021.3 (#5090)
* wrap with htmlonly

* wrap with htmlonly
2021-04-06 11:34:10 +03:00
Dmitry Kurtaev
81b0aec201 Enable mo.front.common.extractors module (#5038)
* Enable mo.front.common.extractors module (#5018)

* Enable mo.front.common.extractors module

* Update package_BOM.txt

* Test MO wheel content
2021-04-01 13:54:09 +03:00
Alexey Suhov
f37e14c614 update OpenCV version to 4.5.2 (#5069)
* update OpenCV version to 4.5.2
2021-04-01 13:30:03 +03:00
Sergey Lyubimtsev
7fe8264703 compression.configs.hardware config to package_data (#5066) 2021-04-01 11:34:59 +03:00
Andrey Zaytsev
a5cfe0ecb2 fixes for graphs (#5057) 2021-03-31 16:21:09 +03:00
Andrey Zaytsev
22cf9efcdc Feature/doc fixes 2021 3 (#4971)
* Made changes for CVS-50424

* Changes for CVS-49349

* Minor change for CVS-49349

* Changes for CVS-49343

* Cherry-pick #PR4254

* Replaced /opt/intel/openvino/ with /opt/intel/openvino_2021/ as the default target directory

* (CVS-50786) Added a new section Reference Implementations to keep Speech Library and Speech Recognition Demos

* Doc fixes

* Replaced links to inference_engine_intro.md with Deep_Learning_Inference_Engine_DevGuide.md, fixed links

* Fixed link

* Fixes

* Fixes

* Removed Intel® Xeon® processor E family
2021-03-25 21:31:29 +03:00
Alina Alborova
1fdc9e372f [49342] Update recommended CMake version on install guide in documentation (#4763)
* Inserted a disclaimer

* Another disclaimer

* Update installing-openvino-windows.md

* Update installing-openvino-windows.md

* Update installing-openvino-windows.md
2021-03-24 15:03:41 +03:00
41 changed files with 909 additions and 441 deletions

View File

@@ -38,7 +38,7 @@ jobs:
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.1\opencv\bin;$(IB_DIR);%PATH%
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
steps:
- script: |

View File

@@ -24,7 +24,7 @@ jobs:
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.1\opencv\bin;$(IB_DIR);%PATH%
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
steps:
- script: |

View File

@@ -80,7 +80,17 @@ jobs:
python3 setup.py sdist bdist_wheel
working-directory: model-optimizer
- name: Test
- name: Test package content
run: |
echo "src = open('openvino_mo.egg-info/SOURCES.txt', 'rt').read().split()" | tee -a test_wheel.py
echo "ref = open('automation/package_BOM.txt', 'rt').read().split()" | tee -a test_wheel.py
echo "for name in ref:" | tee -a test_wheel.py
echo " if name.endswith('.py'):" | tee -a test_wheel.py
echo " assert name in src or './' + name in src, name + ' file missed'" | tee -a test_wheel.py
python3 test_wheel.py
working-directory: model-optimizer
- name: Test conversion
run: |
wget -q http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224.tgz
tar -xf mobilenet_v1_1.0_224.tgz
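The five `echo` lines in the "Test package content" step above assemble a small Python script on the fly; the same check can be written as a standalone sketch (the listing strings below are illustrative stand-ins for `SOURCES.txt` and `package_BOM.txt`):

```python
# Sketch of the generated test_wheel.py: verify that every .py file listed
# in the package BOM also appears in the wheel's SOURCES.txt listing.
def check_bom(sources_text, bom_text):
    src = sources_text.split()
    for name in bom_text.split():
        if name.endswith('.py'):
            # SOURCES.txt entries may carry a leading './'
            assert name in src or './' + name in src, name + ' file missed'

# Illustrative listings standing in for the real files
sources = "mo/main.py ./mo/utils/error.py README.md"
bom = "mo/main.py mo/utils/error.py automation/package_BOM.txt"
check_bom(sources, bom)  # non-.py BOM entries are skipped
print("BOM check passed")
```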

View File

@@ -337,7 +337,7 @@ operation for the CPU plugin. The code of the library is described in the [Exte
In order to build the extension run the following:<br>
```bash
mkdir build && cd build
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
cmake .. -DCMAKE_BUILD_TYPE=Release
make --jobs=$(nproc)
```

View File

@@ -1,88 +1,122 @@
# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
## Introduction to the OpenVINO™ Toolkit
> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
The OpenVINO™ toolkit is a comprehensive toolkit that you can use to develop and deploy vision-oriented solutions on
Intel® platforms. Vision-oriented means the solutions use images or videos to perform specific tasks.
A few of the solutions use cases include autonomous navigation, digital surveillance cameras, robotics,
and mixed-reality headsets.
The OpenVINO™ toolkit:
* Enables CNN-based deep learning inference on the edge
* Supports heterogeneous execution across an Intel&reg; CPU, Intel&reg; Integrated Graphics, Intel&reg; Neural Compute Stick 2
* Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
* Includes optimized calls for computer vision standards including OpenCV\*, OpenCL&trade;, and OpenVX\*
The OpenVINO™ toolkit includes the following components:
* Intel® Deep Learning Deployment Toolkit (Intel® DLDT)
- [Deep Learning Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) — A cross-platform command-line tool for importing models and
preparing them for optimal execution with the Deep Learning Inference Engine. The Model Optimizer supports converting Caffe*,
TensorFlow*, MXNet*, Kaldi*, and ONNX* models.
- [Deep Learning Inference Engine](inference_engine_intro.md) — A unified API to allow high performance inference on many hardware types
including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Neural Compute Stick 2.
- [nGraph](../nGraph_DG/nGraph_dg.md) — a graph representation and manipulation engine that is used to represent a model inside the Inference Engine and allows run-time model construction without the Model Optimizer.
* [OpenCV](https://docs.opencv.org/) — OpenCV* community version compiled for Intel® hardware.
Includes PVL libraries for computer vision.
* Drivers and runtimes for OpenCL™ version 2.1
* [Intel® Media SDK](https://software.intel.com/en-us/media-sdk)
* [OpenVX*](https://software.intel.com/en-us/cvsdk-ovx-guide) — Intel's implementation of OpenVX*
optimized for running on Intel® hardware (CPU, GPU, IPU).
* [Demos and samples](Samples_Overview.md).
This Guide provides overview of the Inference Engine describing the typical workflow for performing
This Guide provides an overview of the Inference Engine describing the typical workflow for performing
inference of a pre-trained and optimized deep learning model and a set of sample applications.
> **NOTES:**
> - Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).
> - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.
## Table of Contents
Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries are the primary implementation, C libraries and Python bindings are also available.
* [Inference Engine API Changes History](API_Changes.md)
For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.
* [Introduction to Inference Engine](inference_engine_intro.md)
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
* [Understanding Inference Engine Memory Primitives](Memory_primitives.md)
To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
* [Introduction to Inference Engine Device Query API](InferenceEngine_QueryAPI.md)
For complete API Reference, see the [Inference Engine API References](./api_references.html) section.
* [Adding Your Own Layers to the Inference Engine](Extensibility_DG/Intro.md)
Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains a complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
* [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md)
## Modules in the Inference Engine component
### Core Inference Engine Libraries ###
* [[DEPRECATED] Migration from Inference Engine Plugin API to Core API](Migration_CoreAPI.md)
Your application must link to the core Inference Engine libraries:
* Linux* OS:
- `libinference_engine.so`, which depends on `libinference_engine_transformations.so`, `libtbb.so`, `libtbbmalloc.so` and `libngraph.so`
* Windows* OS:
- `inference_engine.dll`, which depends on `inference_engine_transformations.dll`, `tbb.dll`, `tbbmalloc.dll` and `ngraph.dll`
* macOS*:
- `libinference_engine.dylib`, which depends on `libinference_engine_transformations.dylib`, `libtbb.dylib`, `libtbbmalloc.dylib` and `libngraph.dylib`
* [Introduction to Performance Topics](Intro_to_Performance.md)
The required C++ header files are located in the `include` directory.
* [Inference Engine Python API Overview](../../inference-engine/ie_bridges/python/docs/api_overview.md)
This library contains the classes to:
* Create Inference Engine Core object to work with devices and read network (InferenceEngine::Core)
* Manipulate network information (InferenceEngine::CNNNetwork)
* Execute and pass inputs and outputs (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest)
* [Using Dynamic Batching feature](DynamicBatching.md)
### Plugin Libraries to Read a Network Object ###
* [Using Static Shape Infer feature](ShapeInference.md)
Starting with the 2020.4 release, the Inference Engine introduced the concept of `CNNNetwork` reader plugins. Such plugins are loaded dynamically by the Inference Engine at runtime, depending on the file format:
* Linux* OS:
- `libinference_engine_ir_reader.so` to read a network from IR
- `libinference_engine_onnx_reader.so` to read a network from ONNX model format
* Windows* OS:
- `inference_engine_ir_reader.dll` to read a network from IR
- `inference_engine_onnx_reader.dll` to read a network from ONNX model format
* [Using Low-Precision 8-bit Integer Inference](Int8Inference.md)
### Device-Specific Plugin Libraries ###
* [Using Bfloat16 Inference](Bfloat16Inference.md)
For each supported target device, the Inference Engine provides a plugin — a DLL/shared library that contains a complete implementation for inference on this particular device. The following plugins are available:
* Utilities to Validate Your Converted Model
* [Using Cross Check Tool for Per-Layer Comparison Between Plugins](../../inference-engine/tools/cross_check_tool/README.md)
| Plugin | Device Type |
| ------- | ----------------------------- |
|CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
|GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
|MYRIAD | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
|GNA | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, Intel&reg; Core&trade; i3-1000G4 Processor |
|HETERO | Automatic splitting of a network inference between several devices (for example, if a device doesn't support certain layers)|
|MULTI | Simultaneous inference of the same network on several devices in parallel|
* [Supported Devices](supported_plugins/Supported_Devices.md)
* [GPU](supported_plugins/CL_DNN.md)
* [CPU](supported_plugins/CPU.md)
* [VPU](supported_plugins/VPU.md)
* [MYRIAD](supported_plugins/MYRIAD.md)
* [HDDL](supported_plugins/HDDL.md)
* [Heterogeneous execution](supported_plugins/HETERO.md)
* [GNA](supported_plugins/GNA.md)
* [MULTI](supported_plugins/MULTI.md)
The table below shows the plugin libraries and additional dependencies for Linux, Windows and macOS platforms.
* [Pre-Trained Models](@ref omz_models_group_intel)
| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows | Library name for macOS | Dependency libraries for macOS |
|--------|-----------------------------|-------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|------------------------------|---------------------------------------------|
| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` | `libMKLDNNPlugin.so` | `inference_engine_lp_transformations.dylib` |
| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` | Is not supported | - |
| MYRIAD | `libmyriadPlugin.so` | `libusb.so`, | `myriadPlugin.dll` | `usb.dll` | `libmyriadPlugin.so` | `libusb.dylib` |
| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - |
| GNA | `libGNAPlugin.so` | `libgna.so`, | `GNAPlugin.dll` | `gna.dll` | Is not supported | - |
| HETERO | `libHeteroPlugin.so` | Same as for selected plugins | `HeteroPlugin.dll` | Same as for selected plugins | `libHeteroPlugin.so` | Same as for selected plugins |
| MULTI | `libMultiDevicePlugin.so` | Same as for selected plugins | `MultiDevicePlugin.dll` | Same as for selected plugins | `libMultiDevicePlugin.so` | Same as for selected plugins |
* [Known Issues](Known_Issues_Limitations.md)
> **NOTE**: All plugin libraries also depend on core Inference Engine libraries.
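The per-platform naming pattern in the table above can be summarized in a short lookup; a minimal sketch (illustrative only, not the actual Inference Engine plugin loader):

```python
# Map a device name to its plugin library filename per OS, following the
# naming pattern in the table above (illustrative; not the real IE loader).
PLUGIN_BASE = {"CPU": "MKLDNNPlugin", "GPU": "clDNNPlugin",
               "MYRIAD": "myriadPlugin", "HDDL": "HDDLPlugin",
               "GNA": "GNAPlugin", "HETERO": "HeteroPlugin",
               "MULTI": "MultiDevicePlugin"}
LIB_FORMAT = {"Linux": "lib{}.so", "Windows": "{}.dll", "macOS": "lib{}.so"}

def plugin_library(device, os_name):
    return LIB_FORMAT[os_name].format(PLUGIN_BASE[device])

print(plugin_library("CPU", "Linux"))    # libMKLDNNPlugin.so
print(plugin_library("GPU", "Windows"))  # clDNNPlugin.dll
```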
**Typical Next Step:** [Introduction to Inference Engine](inference_engine_intro.md)
Make sure those libraries are in your computer's path or in the place you pointed to in the plugin loader. Make sure each plugin's related dependencies are in the:
* Linux: `LD_LIBRARY_PATH`
* Windows: `PATH`
* macOS: `DYLD_LIBRARY_PATH`
On Linux and macOS, use the script `bin/setupvars.sh` to set the environment variables.
On Windows, run the `bin\setupvars.bat` batch file to set the environment variables.
To learn more about supported devices and corresponding plugins, see the [Supported Devices](supported_plugins/Supported_Devices.md) chapter.
## Common Workflow for Using the Inference Engine API
The common workflow contains the following steps:
1. **Create Inference Engine Core object** - Create an `InferenceEngine::Core` object to work with different devices; all device plugins are managed internally by the `Core` object. Register extensions with custom nGraph operations (`InferenceEngine::Core::AddExtension`).
2. **Read the Intermediate Representation** - Using the `InferenceEngine::Core` class, read an Intermediate Representation file into an object of the `InferenceEngine::CNNNetwork` class. This class represents the network in the host memory.
3. **Prepare inputs and outputs format** - After loading the network, specify input and output precision and the layout on the network. For these specifications, use `InferenceEngine::CNNNetwork::getInputsInfo()` and `InferenceEngine::CNNNetwork::getOutputsInfo()`.
4. Pass per device loading configurations specific to this device (`InferenceEngine::Core::SetConfig`), and register extensions to this device (`InferenceEngine::Core::AddExtension`).
5. **Compile and Load Network to device** - Use the `InferenceEngine::Core::LoadNetwork()` method with specific device (e.g. `CPU`, `GPU`, etc.) to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation.
6. **Set input data** - With the network loaded, you have an `InferenceEngine::ExecutableNetwork` object. Use this object to create an `InferenceEngine::InferRequest` in which you signal the input buffers to use for input and output. Specify a device-allocated memory and copy it into the device memory directly, or tell the device to use your application memory to save a copy.
7. **Execute** - With the input and output memory now defined, choose your execution mode:
* Synchronously - `InferenceEngine::InferRequest::Infer()` method. Blocks until inference is completed.
* Asynchronously - `InferenceEngine::InferRequest::StartAsync()` method. Check status with the `InferenceEngine::InferRequest::Wait()` method (0 timeout), wait, or specify a completion callback.
8. **Get the output** - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the `InferenceEngine::IInferRequest::GetBlob()` method.
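The steps above can be sketched with the Python bindings; a minimal illustration assuming an installed 2021.x package and an IR pair `model.xml`/`model.bin` (the file names and the zero-filled input data are placeholders):

```python
import numpy as np
from openvino.inference_engine import IECore

# 1. Create the Core object; device plugins are managed internally
ie = IECore()
# 2. Read the Intermediate Representation into a CNNNetwork
net = ie.read_network(model="model.xml", weights="model.bin")
# 3. Inspect input/output info to prepare formats
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
# 5. Compile and load the network on a device
exec_net = ie.load_network(network=net, device_name="CPU")
# 6-7. Set input data and execute synchronously (infer() blocks until done)
data = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: data})
# 8. Get the output blob
print(result[output_name].shape)
```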
## Video: Inference Engine Concept
[![](https://img.youtube.com/vi/e6R13V8nbak/0.jpg)](https://www.youtube.com/watch?v=e6R13V8nbak)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/e6R13V8nbak" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Further Reading
For more details on the Inference Engine API, refer to the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.

View File

@@ -205,7 +205,7 @@ vi <user_home_directory>/.bashrc
2. Add this line to the end of the file:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key.
@@ -242,4 +242,4 @@ sample, read the sample documentation by clicking the sample name in the samples
list above.
## See Also
* [Introduction to Inference Engine](inference_engine_intro.md)
* [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)

View File

@@ -66,8 +66,8 @@ Shape collision during shape propagation may be a sign that a new shape does not
Changing the model input shape may result in intermediate operations shape collision.
Examples of such operations:
- [`Reshape` operation](../ops/shape/Reshape_1.md) with a hard-coded output shape value
- [`MatMul` operation](../ops/matrix/MatMul_1.md) with the `Const` second input cannot be resized by spatial dimensions due to operation semantics
- [Reshape](../ops/shape/Reshape_1.md) operation with a hard-coded output shape value
- [MatMul](../ops/matrix/MatMul_1.md) operation with the `Const` second input cannot be resized by spatial dimensions due to operation semantics
Model structure and logic should not change significantly after model reshaping.
- The Global Pooling operation is commonly used to reduce the output feature map of classification models.
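The MatMul case can be illustrated with simple shape arithmetic; a sketch (the shapes are made up for illustration):

```python
# Sketch: why reshaping spatial dimensions collides with a Const second MatMul input.
def matmul_output_shape(a_shape, b_shape):
    # Inner dimensions must agree, per MatMul operation semantics
    assert a_shape[-1] == b_shape[0], (
        f"shape collision: {a_shape[-1]} (input) vs {b_shape[0]} (Const weight)")
    return a_shape[:-1] + b_shape[1:]

weights = (512, 10)                           # Const second input, fixed at conversion time
print(matmul_output_shape((1, 512), weights))  # (1, 10): original shape is fine

try:
    matmul_output_shape((1, 1024), weights)    # reshaped input no longer matches
except AssertionError as e:
    print("reshape rejected:", e)
```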

View File

@@ -1,5 +1,11 @@
Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro}
================================
# Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro}
> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
This Guide provides an overview of the Inference Engine describing the typical workflow for performing
inference of a pre-trained and optimized deep learning model and a set of sample applications.
> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_intel_index).
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.

View File

@@ -92,11 +92,20 @@ Notice that until R2 you had to calculate number of requests in your application
Notice that every OpenVINO sample that supports the "-d" (which stands for "device") command-line option transparently accepts the multi-device.
The [Benchmark Application](../../../inference-engine/samples/benchmark_app/README.md) is the best reference for the optimal usage of the multi-device. As discussed multiple times earlier, you don't need to set up the number of requests, CPU streams, or threads, as the application provides optimal out-of-the-box performance.
Below is an example command line to evaluate HDDL+GPU performance:
```bash
$ ./benchmark_app -d MULTI:HDDL,GPU -m <model> -i <input> -niter 1000
```sh
./benchmark_app -d MULTI:HDDL,GPU -m <model> -i <input> -niter 1000
```
Notice that you can use the FP16 IR to work with the multi-device, as the CPU automatically upconverts it to fp32 and the rest of the devices support it natively.
Also notice that no demos are (yet) fully optimized for the multi-device, in terms of supporting the OPTIMAL_NUMBER_OF_INFER_REQUESTS metric, using the GPU streams/throttling, and so on.
## Video: MULTI Plugin
[![](https://img.youtube.com/vi/xbORYFEmrqU/0.jpg)](https://www.youtube.com/watch?v=xbORYFEmrqU)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/xbORYFEmrqU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## See Also
* [Supported Devices](Supported_Devices.md)

View File

@@ -111,3 +111,22 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
* [Known Issues](Known_Issues_Limitations.md)
**Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
## Video: Model Optimizer Concept
[![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Video: Model Optimizer Basic Operation
[![](https://img.youtube.com/vi/BBt1rseDcy0/0.jpg)](https://www.youtube.com/watch?v=BBt1rseDcy0)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/BBt1rseDcy0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Video: Choosing the Right Precision
[![](https://img.youtube.com/vi/RF8ypHyiKrY/0.jpg)](https://www.youtube.com/watch?v=RF8ypHyiKrY)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/RF8ypHyiKrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

View File

@@ -367,6 +367,11 @@ Refer to [Supported Framework Layers ](../Supported_Frameworks_Layers.md) for th
The Model Optimizer provides explanatory messages if it is unable to run to completion due to issues like typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md). The FAQ has instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
## Video: Converting a TensorFlow Model
[![](https://img.youtube.com/vi/QW6532LtiTc/0.jpg)](https://www.youtube.com/watch?v=QW6532LtiTc)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/QW6532LtiTc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Summary
In this document, you learned:

View File

@@ -10,252 +10,3 @@ Use the links below to review the benchmarking results for each alternative:
* [OpenVINO™ Model Server Benchmark Results](performance_benchmarks_ovms.md)
Performance for a particular application can also be evaluated virtually using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/), a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. [Learn more](https://devcloud.intel.com/edge/get_started/devcloud/) or [Register here](https://inteliot.force.com/DevcloudForEdge/s/).
\htmlonly
<!-- these CDN links and scripts are required. Add them to the <head> of your website -->
<link href="https://fonts.googleapis.com/css2?family=Roboto:wght@100;300;400;500;600;700;900&display=swap" rel="stylesheet" type="text/css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" type="text/css">
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-datalabels"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/chartjs-plugin-annotation/0.5.7/chartjs-plugin-annotation.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-barchart-background@1.3.0/build/Plugin.Barchart.Background.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-deferred@1"></script>
<!-- download this file and place on your server (or include the styles inline) -->
<link rel="stylesheet" href="ovgraphs.css" type="text/css">
\endhtmlonly
\htmlonly
<script src="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-2-185.js" id="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="deeplabv3-tf-ov-2021-2-185.js" id="deeplabv3-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="densenet-121-tf-ov-2021-2-185.js" id="densenet-121-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="faster-rcnn-resnet50-coco-tf-ov-2021-2-185.js" id="faster-rcnn-resnet50-coco-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="googlenet-v1-tf-ov-2021-2-185.js" id="googlenet-v1-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="inception-v3-tf-ov-2021-2-185.js" id="inception-v3-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-ssd-cf-ov-2021-2-185.js" id="mobilenet-ssd-cf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v1-1-0-224-tf-ov-2021-2-185.js" id="mobilenet-v1-1-0-224-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v2-pytorch-ov-2021-2-185.js" id="mobilenet-v2-pytorch-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="resnet-18-pytorch-ov-2021-2-185.js" id="resnet-18-pytorch-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="resnet-50-tf-ov-2021-2-185.js" id="resnet-50-tf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="se-resnext-50-cf-ov-2021-2-185.js" id="se-resnext-50-cf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="squeezenet1-1-cf-ov-2021-2-185.js" id="squeezenet1-1-cf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="ssd300-cf-ov-2021-2-185.js" id="ssd300-cf-ov-2021-2-185"></script>
\endhtmlonly
\htmlonly
<script src="yolo-v3-tf-ov-2021-2-185.js" id="yolo-v3-tf-ov-2021-2-185"></script>
\endhtmlonly
## Platform Configurations
Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.2.
Intel technologies features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 9, 2020 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products.
Testing by Intel done on: see test date for each HW platform below.
**CPU Inference Engines**
| | Intel® Xeon® E-2124G | Intel® Xeon® W1290P | Intel® Xeon® Silver 4216R |
| ------------------------------- | ---------------------- | --------------------------- | ---------------------------- |
| Motherboard | ASUS* WS C246 PRO | ASUS* WS W480-ACE | Intel® Server Board S2600STB |
| CPU | Intel® Xeon® E-2124G CPU @ 3.40GHz | Intel® Xeon® W-1290P CPU @ 3.70GHz | Intel® Xeon® Silver 4216R CPU @ 2.20GHz |
| Hyper Threading | OFF | ON | ON |
| Turbo Setting | ON | ON | ON |
| Memory | 2 x 16 GB DDR4 2666MHz | 4 x 16 GB DDR4 @ 2666MHz |12 x 32 GB DDR4 2666MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc. | Intel Corporation |
| BIOS Version | 0904 | 607 | SE5C620.86B.02.01.<br>0009.092820190230 |
| BIOS Release | April 12, 2019 | May 29, 2020 | September 28, 2019 |
| BIOS Settings | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>change power policy <br>to "performance", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 5 | 32 |
| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) |
| CPU Price on September 29, 2020, USD<br>Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [539](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) |[1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) |
**CPU Inference Engines (continued)**
| | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 |
| ------------------------------- | ---------------------------- | ---------------------------- |
| Motherboard | Intel® Server Board S2600STB | Intel® Server Board S2600STB |
| CPU | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz |
| Hyper Threading | ON | ON |
| Turbo Setting | ON | ON |
| Memory | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | Intel Corporation | Intel Corporation |
| BIOS Version | SE5C620.86B.02.01.<br>0009.092820190230 | SE5C620.86B.02.01.<br>0009.092820190230 |
| BIOS Release | September 28, 2019 | September 28, 2019 |
| BIOS Settings | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests |32 | 52 |
| Test Date | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) |
| CPU Price on September 29, 2020, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) |
**CPU Inference Engines (continued)**
| | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | Intel® Core™ i9-10900TE<br>(iEi Flex BX210AI)| 11th Gen Intel® Core™ i7-1185G7 |
| -------------------- | ----------------------------------- |--------------------------------------| ---------------------------------------------|---------------------------------|
| Motherboard | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | iEi / B595 | Intel Corporation<br>internal/Reference<br>Validation Platform |
| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | Intel® Core™ i9-10900TE CPU @ 1.80GHz | 11th Gen Intel® Core™ i7-1185G7 @ 3.00GHz |
| Hyper Threading | ON | ON | ON | ON |
| Turbo Setting | ON | ON | ON | ON |
| Memory | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz | 2 x 8 GB DDR4 @ 2400MHz | 2 x 8 GB DDR4 3200MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.8.0-05-generic | 5.8.0-05-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* | Intel Corporation |
| BIOS Version | F11 | 505 | Z667AR10 | TGLSFWI1.R00.3425.<br>A00.2010162309 |
| BIOS Release | March 13, 2019 | December 17, 2019 | July 15, 2020 | October 16, 2020 |
| BIOS Settings | Select optimized default settings, <br>set OS type to "other", <br>save & exit | Default Settings | Default Settings | Default Settings |
| Batch size | 1 | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 | INT8 |
| Number of concurrent inference requests |4 | 24 | 5 | 4 |
| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [28](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-1) |
| CPU Price on September 29, 2020, USD<br>Prices may vary | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [444](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [426](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-0) |
**CPU Inference Engines (continued)**
| | Intel® Core™ i5-8500 | Intel® Core™ i5-10500TE | Intel® Core™ i5-10500TE<br>(iEi Flex-BX210AI)|
| -------------------- | ---------------------------------- | ----------------------------------- |-------------------------------------- |
| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z490 AORUS PRO AX | iEi / B595 |
| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz |
| Hyper Threading | OFF | ON | ON |
| Turbo Setting | ON | ON | ON |
| Memory | 2 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 @ 2666MHz | 1 x 8 GB DDR4 @ 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 2401 | F3 | Z667AR10 |
| BIOS Release | July 12, 2019 | March 25, 2020 | July 17, 2020 |
| BIOS Settings | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>set OS type to "other", <br>save & exit | Default Settings |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 3 | 4 | 4 |
| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1)| [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
| CPU Price on September 29, 2020, USD<br>Prices may vary | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
**CPU Inference Engines (continued)**
| | Intel Atom® x5-E3940 | Intel® Core™ i3-8100 |
| -------------------- | ---------------------------------- |----------------------------------- |
| Motherboard | | GIGABYTE* Z390 UD |
| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
| Hyper Threading | OFF | OFF |
| Turbo Setting | ON | OFF |
| Memory | 1 x 8 GB DDR3 1600MHz | 4 x 8 GB DDR4 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 5.12 | F8 |
| BIOS Release | September 6, 2017 | May 24, 2019 |
| BIOS Settings | Default settings | Select optimized default settings, <br> set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 |
| Test Date | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)|
| CPU Price on September 29, 2020, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) |
**Accelerator Inference Engines**
| | Intel® Neural Compute Stick 2 | Intel® Vision Accelerator Design<br>with Intel® Movidius™ VPUs (Mustang-V100-MX8) |
| --------------------------------------- | ------------------------------------- | ------------------------------------- |
| VPU | 1 X Intel® Movidius™ Myriad™ X MA2485 | 8 X Intel® Movidius™ Myriad™ X MA2485 |
| Connection | USB 2.0/3.0 | PCIe x4 |
| Batch size | 1 | 1 |
| Precision | FP16 | FP16 |
| Number of concurrent inference requests | 4 | 32 |
| Power dissipation, TDP in Watt | 2.5 | [30](https://www.mouser.com/ProductDetail/IEI/MUSTANG-V100-MX8-R10?qs=u16ybLDytRaZtiUUvsd36w%3D%3D) |
| Price, USD<br>Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) (from December 9, 2020) | [214](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) (from December 9, 2020) |
| Host Computer | Intel® Core™ i7 | Intel® Core™ i5 |
| Motherboard | ASUS* Z370-A II | Uzelinfo* / US-E1300 |
| CPU | Intel® Core™ i7-8700 CPU @ 3.20GHz | Intel® Core™ i5-6600 CPU @ 3.30GHz |
| Hyper Threading | ON | OFF |
| Turbo Setting | ON | ON |
| Memory | 4 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.0.0-23-generic | 5.0.0-23-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 411 | 5.12 |
| BIOS Release | September 21, 2018 | September 21, 2018 |
| Test Date | December 9, 2020 | December 9, 2020 |
For more detailed configuration descriptions, see [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.2.html).
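For context, throughput results like those above are typically collected with the OpenVINO Benchmark Tool using the batch size and number of concurrent (asynchronous) inference requests listed for each platform. The following is a rough sketch of an equivalent invocation; the model path is a placeholder, and flag values are taken from the Intel® Xeon® Silver 4216R row as an example:

```sh
# Hedged sketch: measure async throughput at batch size 1 with 32
# concurrent inference requests on CPU. The model is assumed to be an
# IR (.xml/.bin) already converted by the Model Optimizer; consult the
# Benchmark Tool documentation for the authoritative set of options.
python3 benchmark_app.py -m model.xml -d CPU -api async -b 1 -nireq 32
```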
\htmlonly
<style>
.footer {
display: none;
}
</style>
<div class="opt-notice-wrapper">
<p class="opt-notice">
\endhtmlonly
Results may vary. For workloads and configurations visit: [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex) and [Legal Information](../Legal_Information.md).
\htmlonly
</p>
</div>
\endhtmlonly


@@ -29,73 +29,73 @@ Measuring inference performance involves many variables and is extremely use-cas
\htmlonly
<script src="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-2.js" id="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-2"></script>
<script src="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-5.js" id="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="deeplabv3-tf-ov-2021-3-338-2.js" id="deeplabv3-tf-ov-2021-3-338-2"></script>
<script src="deeplabv3-tf-ov-2021-3-338-5.js" id="deeplabv3-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="densenet-121-tf-ov-2021-3-338-2.js" id="densenet-121-tf-ov-2021-3-338-2"></script>
<script src="densenet-121-tf-ov-2021-3-338-5.js" id="densenet-121-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-2.js" id="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-2"></script>
<script src="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-5.js" id="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="googlenet-v1-tf-ov-2021-3-338-2.js" id="googlenet-v1-tf-ov-2021-3-338-2"></script>
<script src="googlenet-v1-tf-ov-2021-3-338-5.js" id="googlenet-v1-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="inception-v3-tf-ov-2021-3-338-2.js" id="inception-v3-tf-ov-2021-3-338-2"></script>
<script src="inception-v3-tf-ov-2021-3-338-5.js" id="inception-v3-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-ssd-cf-ov-2021-3-338-2.js" id="mobilenet-ssd-cf-ov-2021-3-338-2"></script>
<script src="mobilenet-ssd-cf-ov-2021-3-338-5.js" id="mobilenet-ssd-cf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v1-1-0-224-tf-ov-2021-3-338-2.js" id="mobilenet-v1-1-0-224-tf-ov-2021-3-338-2"></script>
<script src="mobilenet-v1-1-0-224-tf-ov-2021-3-338-5.js" id="mobilenet-v1-1-0-224-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v2-pytorch-ov-2021-3-338-2.js" id="mobilenet-v2-pytorch-ov-2021-3-338-2"></script>
<script src="mobilenet-v2-pytorch-ov-2021-3-338-5.js" id="mobilenet-v2-pytorch-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="resnet-18-pytorch-ov-2021-3-338-2.js" id="resnet-18-pytorch-ov-2021-3-338-2"></script>
<script src="resnet-18-pytorch-ov-2021-3-338-5.js" id="resnet-18-pytorch-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="resnet-50-tf-ov-2021-3-338-2.js" id="resnet-50-tf-ov-2021-3-338-2"></script>
<script src="resnet-50-tf-ov-2021-3-338-5.js" id="resnet-50-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="se-resnext-50-cf-ov-2021-3-338-2.js" id="se-resnext-50-cf-ov-2021-3-338-2"></script>
<script src="se-resnext-50-cf-ov-2021-3-338-5.js" id="se-resnext-50-cf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="squeezenet1-1-cf-ov-2021-3-338-2.js" id="squeezenet1-1-cf-ov-2021-3-338-2"></script>
<script src="squeezenet1-1-cf-ov-2021-3-338-5.js" id="squeezenet1-1-cf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="ssd300-cf-ov-2021-3-338-2.js" id="ssd300-cf-ov-2021-3-338-2"></script>
<script src="ssd300-cf-ov-2021-3-338-5.js" id="ssd300-cf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="yolo-v3-tf-ov-2021-3-338-2.js" id="yolo-v3-tf-ov-2021-3-338-2"></script>
<script src="yolo-v3-tf-ov-2021-3-338-5.js" id="yolo-v3-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="yolo-v4-tf-ov-2021-3-338-2.js" id="yolo-v4-tf-ov-2021-3-338-2"></script>
<script src="yolo-v4-tf-ov-2021-3-338-5.js" id="yolo-v4-tf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="unet-camvid-onnx-0001-ov-2021-3-338-2.js" id="unet-camvid-onnx-0001-ov-2021-3-338-2"></script>
<script src="unet-camvid-onnx-0001-ov-2021-3-338-5.js" id="unet-camvid-onnx-0001-ov-2021-3-338-5"></script>
\endhtmlonly
@@ -139,25 +139,25 @@ Testing by Intel done on: see test date for each HW platform below.
**CPU Inference Engines (continued)**
| | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 |
| ------------------------------- | ---------------------------- | ---------------------------- |
| Motherboard | Intel® Server Board S2600STB | Intel® Server Board S2600STB |
| CPU | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz |
| Hyper Threading | ON | ON |
| Turbo Setting | ON | ON |
| Memory | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | Intel Corporation | Intel Corporation |
| BIOS Version | SE5C620.86B.02.01.<br>0009.092820190230 | SE5C620.86B.02.01.<br>0009.092820190230 |
| BIOS Release | September 28, 2019 | September 28, 2019 |
| BIOS Settings | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests |32 | 52 |
| Test Date | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) |
| CPU Price on March 15th, 2021, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) |
| | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 | Intel® Xeon® Platinum 8380 |
| ------------------------------- | ---------------------------- | ---------------------------- | -----------------------------------------|
| Motherboard | Intel® Server Board S2600STB | Intel® Server Board S2600STB | Intel Corporation / WilsonCity |
| CPU | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz | Intel® Xeon® Platinum 8380 CPU @ 2.30GHz |
| Hyper Threading | ON | ON | ON |
| Turbo Setting | ON | ON | ON |
| Memory | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz | 16 x 16 GB DDR4 3200MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | Intel Corporation | Intel Corporation | Intel Corporation |
| BIOS Version | SE5C620.86B.02.01.<br>0009.092820190230 | SE5C620.86B.02.01.<br>0009.092820190230 | WLYDCRB1.SYS.0020.<br>P86.2103050636 |
| BIOS Release | September 28, 2019 | September 28, 2019 | March 5, 2021 |
| BIOS Settings | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests |32 | 52 | 80 |
| Test Date | March 15, 2021 | March 15, 2021 | March 22, 2021 |
| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) | [270](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) |
| CPU Price, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) (on March 15th, 2021) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) (on March 15th, 2021) | [8,099](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) (on March 26th, 2021) |
**CPU Inference Engines (continued)**
@@ -208,25 +208,25 @@ Testing by Intel done on: see test date for each HW platform below.
**CPU Inference Engines (continued)**
| | Intel Atom® x5-E3940 | Intel® Core™ i3-8100 |
| -------------------- | ---------------------------------- |----------------------------------- |
| Motherboard | | GIGABYTE* Z390 UD |
| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
| Hyper Threading | OFF | OFF |
| Turbo Setting | ON | OFF |
| Memory | 1 x 8 GB DDR3 1600MHz | 4 x 8 GB DDR4 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 5.12 | F8 |
| BIOS Release | September 6, 2017 | May 24, 2019 |
| BIOS Settings | Default settings | Select optimized default settings, <br> set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 |
| Test Date | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)|
| CPU Price on March 15th, 2021, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) |
| | Intel Atom® x5-E3940 | Intel Atom® x6425RE | Intel® Core™ i3-8100 |
| -------------------- | --------------------------------------|------------------------------- |----------------------------------- |
| Motherboard | | Intel Corporation /<br>ElkhartLake LPDDR4x T3 CRB | GIGABYTE* Z390 UD |
| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel Atom® x6425RE<br>Processor @ 1.90GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
| Hyper Threading | OFF | OFF | OFF |
| Turbo Setting | ON | ON | OFF |
| Memory | 1 x 8 GB DDR3 1600MHz | 2 x 4GB DDR4 3200 MHz | 4 x 8 GB DDR4 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.8.0-050800-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | Intel Corporation | American Megatrends Inc.* |
| BIOS Version | 5.12 | EHLSFWI1.R00.2463.<br>A03.2011200425 | F8 |
| BIOS Release | September 6, 2017 | November 22, 2020 | May 24, 2019 |
| BIOS Settings | Default settings | Default settings | Select optimized default settings, <br> set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 | 4 |
| Test Date | March 15, 2021 | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [12](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)|
| CPU Price, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) (on March 15th, 2021) | [59](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) (on March 26th, 2021) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) (on March 15th, 2021) |


@@ -135,6 +135,11 @@ limitations under the License.
<tab type="user" title="Equal-1" url="@ref openvino_docs_ops_comparison_Equal_1"/>
<tab type="user" title="Erf-1" url="@ref openvino_docs_ops_arithmetic_Erf_1"/>
<tab type="user" title="Exp-1" url="@ref openvino_docs_ops_activation_Exp_1"/>
<tab type="user" title="ExperimentalDetectronDetectionOutput-6" url="@ref openvino_docs_ops_detection_ExperimentalDetectronDetectionOutput_6"/>
<tab type="user" title="ExperimentalDetectronGenerateProposalsSingleImage-6" url="@ref openvino_docs_ops_detection_ExperimentalDetectronGenerateProposalsSingleImage_6"/>
<tab type="user" title="ExperimentalDetectronPriorGridGenerator-6" url="@ref openvino_docs_ops_detection_ExperimentalDetectronPriorGridGenerator_6"/>
<tab type="user" title="ExperimentalDetectronROIFeatureExtractor-6" url="@ref openvino_docs_ops_detection_ExperimentalDetectronROIFeatureExtractor_6"/>
<tab type="user" title="ExperimentalDetectronTopKROIs-6" url="@ref openvino_docs_ops_sort_ExperimentalDetectronTopKROIs_6"/>
<tab type="user" title="ExtractImagePatches-3" url="@ref openvino_docs_ops_movement_ExtractImagePatches_3"/>
<tab type="user" title="FakeQuantize-1" url="@ref openvino_docs_ops_quantization_FakeQuantize_1"/>
<tab type="user" title="FloorMod-1" url="@ref openvino_docs_ops_arithmetic_FloorMod_1"/>
@@ -257,7 +262,6 @@ limitations under the License.
<tab id="deploying_inference" type="usergroup" title="Deploying Inference" url="@ref openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide">
<!-- Inference Engine Developer Guide -->
<tab type="usergroup" title="Inference Engine Developer Guide" url="@ref openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide">
<tab type="user" title="Introduction to Inference Engine" url="@ref openvino_docs_IE_DG_inference_engine_intro"/>
<tab type="user" title="Inference Engine API Changes History" url="@ref openvino_docs_IE_DG_API_Changes"/>
<tab type="user" title="Inference Engine Memory primitives" url="@ref openvino_docs_IE_DG_Memory_primitives"/>
<tab type="user" title="Inference Engine Device Query API" url="@ref openvino_docs_IE_DG_InferenceEngine_QueryAPI"/>


@@ -169,6 +169,7 @@ limitations under the License.
<tab type="user" title="Hello Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_classification_README"/>
<tab type="user" title="Image Classification Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_README"/>
<tab type="user" title="Hello Reshape SSD C++ Sample" url="@ref openvino_inference_engine_samples_hello_reshape_ssd_README"/>
<tab type="user" title="Hello Reshape SSD Python Sample" url="@ref openvino_inference_engine_samples_python_hello_reshape_ssd_README"/>
<tab type="user" title="Hello NV12 Input Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello NV12 Input Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello Query Device C++ Sample" url="@ref openvino_inference_engine_samples_hello_query_device_README"/>
@@ -184,7 +185,15 @@ limitations under the License.
<tab type="user" title="Benchmark C++ Tool" url="@ref openvino_inference_engine_samples_benchmark_app_README"/>
<tab type="user" title="Benchmark Python* Tool" url="@ref openvino_inference_engine_tools_benchmark_tool_README"/>
</tab>
<!-- Reference Implementations -->
<tab type="usergroup" title="Reference Implementations" url="">
<tab type="usergroup" title="Speech Library and Speech Recognition Demos" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Speech_libs_and_demos">
<tab type="user" title="Speech Library" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Speech_library"/>
<tab type="user" title="Offline Speech Recognition Demo" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Offline_speech_recognition_demo"/>
<tab type="user" title="Live Speech Recognition Demo" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Live_speech_recognition_demo"/>
<tab type="user" title="Kaldi* Statistical Language Model Conversion Tool" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Kaldi_SLM_conversion_tool"/>
</tab>
</tab>
<!-- DL Streamer Examples -->
<tab type="usergroup" title="DL Streamer Examples" url="@ref gst_samples_README">
<tab type="usergroup" title="Command Line Samples" url="">


@@ -71,9 +71,9 @@ The simplified OpenVINO™ DL Workbench workflow is:
## Run Baseline Inference
This section illustrates a sample use case of how to infer a pretrained model from the [Intel® Open Model Zoo](@ref omz_models_group_intel) with an autogenerated noise dataset on a CPU device.
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/9TRJwEmY0K4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
Once you log in to the DL Workbench, create a project, which is a combination of a model, a dataset, and a target device. Follow the steps below:


@@ -62,7 +62,7 @@ Follow the steps below to run pre-trained Face Detection network using Inference
```
2. Build the Object Detection Sample with the following command:
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
make -j2 object_detection_sample_ssd
```
3. Download the pre-trained Face Detection model with the [Model Downloader tool](@ref omz_tools_downloader):


@@ -44,7 +44,6 @@ To learn about what is *custom operation* and how to work with them in the Deep
[![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8)
<iframe width="560" height="315" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Computer Vision with Intel
[![](https://img.youtube.com/vi/FZZD4FCvO9c/0.jpg)](https://www.youtube.com/watch?v=FZZD4FCvO9c)


@@ -83,7 +83,7 @@ The Inference Engine's plug-in architecture can be extended to meet other specia
Intel® Distribution of OpenVINO™ toolkit includes the following components:
- [Deep Learning Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) - A cross-platform command-line tool for importing models and preparing them for optimal execution with the Inference Engine. The Model Optimizer imports, converts, and optimizes models, which were trained in popular frameworks, such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*.
- [Deep Learning Inference Engine](IE_DG/inference_engine_intro.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU).
- [Deep Learning Inference Engine](IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU).
- [Inference Engine Samples](IE_DG/Samples_Overview.md) - A set of simple console applications demonstrating how to use the Inference Engine in your applications.
- [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) - A web-based graphical environment that allows you to easily use various sophisticated OpenVINO™ toolkit components.
- [Post-Training Optimization tool](@ref pot_README) - A tool to calibrate a model and then execute it in the INT8 precision.

@@ -12,8 +12,8 @@ The following components are installed with the OpenVINO runtime package:
| Component | Description|
|-----------|------------|
| [Inference Engine](../IE_DG/inference_engine_intro.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV*](https://docs.opencv.org/master/ | OpenCV* community version compiled for Intel® hardware. |
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
## Included with Developer Package
@@ -23,7 +23,7 @@ The following components are installed with the OpenVINO developer package:
| Component | Description|
|-----------|------------|
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. <br>Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
| [Inference Engine](../IE_DG/inference_engine_intro.md) | The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications.|
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications.|
| [OpenCV*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. |
| [Demo Applications](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases. |

@@ -10,8 +10,8 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu
- Ubuntu\* 18.04 long-term support (LTS), 64-bit
- Ubuntu\* 20.04 long-term support (LTS), 64-bit
- CentOS\* 7
- RHEL\* 8
- CentOS\* 7.6
- Red Hat* Enterprise Linux* 8.2 (64 bit)
**Host Operating Systems**
@@ -143,7 +143,7 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
- **CentOS 7**:
@@ -174,11 +174,11 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
/bin/mkdir -p '/usr/local/include/libusb-1.0' && \
/usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
/bin/mkdir -p '/usr/local/lib/pkgconfig' && \
printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino/bin/setupvars.sh
printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino_2021/bin/setupvars.sh
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
2. Run the Docker* image:

@@ -11,9 +11,9 @@ For Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the followi
1. Set the environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `<openvino_install_dir>/deployment_tools/inference_engine/external/hddl`. If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino//deployment_tools/inference_engine/external/hddl`.
> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `<openvino_install_dir>/deployment_tools/inference_engine/external/hddl`. If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl`.
2. Install dependencies:
```sh
@@ -52,7 +52,7 @@ E: [ncAPI] [ 965618] [MainThread] ncDeviceOpen:677 Failed to find a device,
```sh
kill -9 $(pidof hddldaemon autoboot)
pidof hddldaemon autoboot # Make sure none of them is alive
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
${HDDL_INSTALL_DIR}/bin/bsl_reset
```

@@ -22,7 +22,7 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
| Component | Description |
|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. <br>Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| Intel® Media SDK | Offers access to hardware accelerated video codecs and frame processing |
| [OpenCV](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
| [Inference Engine Code Samples](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
@@ -49,7 +49,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Hardware**
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
@@ -67,6 +66,7 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Operating Systems**
- Ubuntu 18.04.x long-term support (LTS), 64-bit
- Ubuntu 20.04.0 long-term support (LTS), 64-bit
- CentOS 7.6, 64-bit (for target only)
- Yocto Project v3.0, 64-bit (for target only and requires modifications)

@@ -24,7 +24,7 @@ The following components are installed by default:
| Component | Description |
| :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models, which were trained in popular frameworks, to a format usable by Intel tools, especially the Inference Engine. <br> Popular frameworks include Caffe*, TensorFlow*, MXNet\*, and ONNX\*. |
| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. |
| [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
@@ -53,7 +53,6 @@ The development and target platforms have the same requirements, but you can sel
> **NOTE**: The current version of the Intel® Distribution of OpenVINO™ toolkit for macOS* supports inference on Intel CPUs and Intel® Neural Compute Sticks 2 only.
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel® Neural Compute Stick 2

@@ -18,7 +18,7 @@ The OpenVINO toolkit for Raspbian OS is an archive with pre-installed header fil
| Component | Description |
| :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware. |
| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. |
@@ -94,12 +94,12 @@ CMake is installed. Continue to the next section to set the environment variable
You must update several environment variables before you can compile and run OpenVINO toolkit applications. Run the following script to temporarily set the environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
**(Optional)** The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:
```sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
echo "source /opt/intel/openvino_2021/bin/setupvars.sh" >> ~/.bashrc
```
To test your change, open a new terminal. You will see the following:
@@ -118,11 +118,11 @@ Continue to the next section to add USB rules for Intel® Neural Compute Stick 2
Log out and log in for it to take effect.
2. If you didn't modify `.bashrc` to permanently set the environment variables, run `setupvars.sh` again after logging in:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
3. To perform inference on the Intel® Neural Compute Stick 2, install the USB rules running the `install_NCS_udev_rules.sh` script:
```sh
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
sh /opt/intel/openvino_2021/install_dependencies/install_NCS_udev_rules.sh
```
4. Plug in your Intel® Neural Compute Stick 2.
@@ -138,7 +138,7 @@ Follow the next steps to run pre-trained Face Detection network using Inference
```
2. Build the Object Detection Sample:
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
```
```sh
make -j2 object_detection_sample_ssd

@@ -16,12 +16,10 @@ Your installation is complete when these are all completed:
2. Install the dependencies:
- [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
> **NOTE**: Clicking this link will directly download Visual Studio 2019 for Windows that has been validated with OpenVINO™.
- [CMake **3.10 or higher** 64-bit](https://cmake.org/download/)
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
- [Microsoft Visual Studio* 2019 with MSBuild](http://visualstudio.microsoft.com/downloads/)
- [CMake 3.14 or higher 64-bit](https://cmake.org/download/)
- [Python **3.6** - **3.8** 64-bit](https://www.python.org/downloads/windows/)
> **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
> **IMPORTANT**: As part of this installation, make sure you click the option **[Add Python 3.x to PATH](https://docs.python.org/3/using/windows.html#installation-steps)** to add Python to your `PATH` environment variable.
3. <a href="#set-the-environment-variables">Set Environment Variables</a>
@@ -59,7 +57,7 @@ The following components are installed by default:
| Component | Description |
|:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) |This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine.<br><strong>NOTE</strong>: Popular frameworks include such frameworks as Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
|[Inference Engine](../IE_DG/inference_engine_intro.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
|[Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
|[OpenCV\*](https://docs.opencv.org/master/) |OpenCV* community version compiled for Intel® hardware |
|[Inference Engine Samples](../IE_DG/Samples_Overview.md) |A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. |
| [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
@@ -84,7 +82,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Hardware**
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
@@ -135,12 +132,9 @@ The screen example below indicates you are missing two dependencies:
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables:
```sh
cd C:\Program Files (x86)\Intel\openvino_2021\bin\
```
```sh
setupvars.bat
"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
```
> **IMPORTANT**: Windows PowerShell* is not recommended to run the configuration commands, please use the Command Prompt instead.
<strong>(Optional)</strong>: OpenVINO toolkit environment variables are removed when you close the Command Prompt window. As an option, you can permanently set the environment variables manually.
@@ -315,7 +309,7 @@ Use these steps to update your Windows `PATH` if a command you execute returns a
5. If you need to add CMake to the `PATH`, browse to the directory in which you installed CMake. The default directory is `C:\Program Files\CMake`.
6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\<USER_ID>\AppData\Local\Programs\Python\Python36\Python`.
6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\<USER_ID>\AppData\Local\Programs\Python\Python36\Python`. Note that the `AppData` folder is hidden by default. To view hidden files and folders, see the [Windows 10 instructions](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
7. Click **OK** repeatedly to close each screen.
@@ -351,7 +345,7 @@ To learn more about converting deep learning models, go to:
- [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
- [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
- [Introduction to Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)

@@ -14,7 +14,7 @@ The following components are installed with the OpenVINO runtime package:
| Component | Description|
|-----------|------------|
| [Inference Engine](../IE_DG/inference_engine_intro.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |

@@ -46,7 +46,7 @@ The `hddldaemon` is a system service, a binary executable that is run to manage
`<IE>` refers to the following default OpenVINO&trade; Inference Engine directories:
- **Linux:**
```
/opt/intel/openvino/inference_engine
/opt/intel/openvino_2021/inference_engine
```
- **Windows:**
```

@@ -0,0 +1,193 @@
## ExperimentalDetectronDetectionOutput <a name="ExperimentalDetectronDetectionOutput"></a> {#openvino_docs_ops_detection_ExperimentalDetectronDetectionOutput_6}
**Versioned name**: *ExperimentalDetectronDetectionOutput-6*
**Category**: Object detection
**Short description**: The *ExperimentalDetectronDetectionOutput* operation performs non-maximum suppression to generate
the detection output using information on location and score predictions.
**Detailed description**: The operation performs the following steps:
1. Applies deltas to box sizes [x<sub>0</sub>, y<sub>0</sub>, x<sub>1</sub>, y<sub>1</sub>] and computes coordinates of the
refined boxes according to the formulas:
`x0_new = ctr_x + (dx - 0.5 * exp(min(d_log_w, max_delta_log_wh))) * box_w`
`y0_new = ctr_y + (dy - 0.5 * exp(min(d_log_h, max_delta_log_wh))) * box_h`
`x1_new = ctr_x + (dx + 0.5 * exp(min(d_log_w, max_delta_log_wh))) * box_w - 1.0`
`y1_new = ctr_y + (dy + 0.5 * exp(min(d_log_h, max_delta_log_wh))) * box_h - 1.0`
* `box_w` and `box_h` are width and height of box, respectively:
`box_w = x1 - x0 + 1.0`
`box_h = y1 - y0 + 1.0`
* `ctr_x` and `ctr_y` are center location of a box:
`ctr_x = x0 + 0.5f * box_w`
`ctr_y = y0 + 0.5f * box_h`
* `dx`, `dy`, `d_log_w` and `d_log_h` are deltas calculated according to the formulas below, and `deltas_tensor` is a
second input:
`dx = deltas_tensor[roi_idx, 4 * class_idx + 0] / deltas_weights[0]`
`dy = deltas_tensor[roi_idx, 4 * class_idx + 1] / deltas_weights[1]`
`d_log_w = deltas_tensor[roi_idx, 4 * class_idx + 2] / deltas_weights[2]`
`d_log_h = deltas_tensor[roi_idx, 4 * class_idx + 3] / deltas_weights[3]`
2. If *class_agnostic_box_regression* is `true` removes predictions for background classes.
3. Clips boxes to the image.
4. Applies *score_threshold* on detection scores.
5. Applies non-maximum suppression class-wise with *nms_threshold* and returns *post_nms_count* or fewer detections per
class.
6. Returns *max_detections_per_image* detections if the total number of detections is more than *max_detections_per_image*;
otherwise, returns the total number of detections, and the remaining output tensor elements are filled with undefined
values.
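The box-refinement formulas in step 1 can be sketched in Python. This is an illustrative snippet only, not the plugin implementation; `refine_box` is a hypothetical helper name:

```python
import math

def refine_box(box, deltas, deltas_weights, max_delta_log_wh):
    """Apply deltas [dx, dy, d_log_w, d_log_h] to one box [x0, y0, x1, y1]."""
    x0, y0, x1, y1 = box
    dx, dy, d_log_w, d_log_h = (d / w for d, w in zip(deltas, deltas_weights))

    box_w = x1 - x0 + 1.0
    box_h = y1 - y0 + 1.0
    ctr_x = x0 + 0.5 * box_w
    ctr_y = y0 + 0.5 * box_h

    # Clamp log-space size deltas so that exp() cannot produce huge boxes.
    d_log_w = min(d_log_w, max_delta_log_wh)
    d_log_h = min(d_log_h, max_delta_log_wh)

    return [
        ctr_x + (dx - 0.5 * math.exp(d_log_w)) * box_w,        # x0_new
        ctr_y + (dy - 0.5 * math.exp(d_log_h)) * box_h,        # y0_new
        ctr_x + (dx + 0.5 * math.exp(d_log_w)) * box_w - 1.0,  # x1_new
        ctr_y + (dy + 0.5 * math.exp(d_log_h)) * box_h - 1.0,  # y1_new
    ]
```

With zero deltas and unit weights the box is returned unchanged, which is a quick sanity check of the formulas.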
**Attributes**:
* *score_threshold*
* **Description**: The *score_threshold* attribute specifies a threshold to consider only detections whose scores are
larger than the threshold.
* **Range of values**: non-negative floating point number
* **Type**: float
* **Default value**: None
* **Required**: *yes*
* *nms_threshold*
* **Description**: The *nms_threshold* attribute specifies a threshold to be used in the NMS stage.
* **Range of values**: non-negative floating point number
* **Type**: float
* **Default value**: None
* **Required**: *yes*
* *num_classes*
* **Description**: The *num_classes* attribute specifies the number of detected classes.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *post_nms_count*
* **Description**: The *post_nms_count* attribute specifies the maximal number of detections per class.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *max_detections_per_image*
* **Description**: The *max_detections_per_image* attribute specifies maximal number of detections per image.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *class_agnostic_box_regression*
* **Description**: The *class_agnostic_box_regression* attribute is a flag that specifies whether to delete background
classes.
* **Range of values**:
* `true` means background classes should be deleted
* `false` means background classes should not be deleted
* **Type**: boolean
* **Default value**: false
* **Required**: *no*
* *max_delta_log_wh*
* **Description**: The *max_delta_log_wh* attribute specifies maximal delta of logarithms for width and height.
* **Range of values**: floating point number
* **Type**: float
* **Default value**: None
* **Required**: *yes*
* *deltas_weights*
* **Description**: The *deltas_weights* attribute specifies weights for bounding box size deltas.
* **Range of values**: a list of non-negative floating point numbers
* **Type**: float[]
* **Default value**: None
* **Required**: *yes*
**Inputs**
* **1**: A 2D tensor of type *T* with shape `[number_of_ROIs, 4]` providing input ROIs as 4-tuples:
[x<sub>1</sub>, y<sub>1</sub>, x<sub>2</sub>, y<sub>2</sub>]. The batch dimension of the first, second, and third inputs
should be the same. **Required.**
* **2**: A 2D tensor of type *T* with shape `[number_of_ROIs, num_classes * 4]` providing deltas for input boxes.
**Required.**
* **3**: A 2D tensor of type *T* with shape `[number_of_ROIs, num_classes]` providing detection scores. **Required.**
* **4**: A 2D tensor of type *T* with shape `[1, 3]` containing three elements
`[image_height, image_width, scale_height_and_width]` that provide input image size info. **Required.**
**Outputs**
* **1**: A 2D tensor of type *T* with shape `[max_detections_per_image, 4]` providing coordinates of the detected boxes.
* **2**: A 1D tensor of type *T_IND* with shape `[max_detections_per_image]` providing class indices.
* **3**: A 1D tensor of type *T* with shape `[max_detections_per_image]` providing scores of the detected boxes.
**Types**
* *T*: any supported floating point type.
* *T_IND*: `int64` or `int32`.
**Example**
```xml
<layer ... type="ExperimentalDetectronDetectionOutput" version="opset6">
<data class_agnostic_box_regression="false" deltas_weights="10.0,10.0,5.0,5.0" max_delta_log_wh="4.135166645050049" max_detections_per_image="100" nms_threshold="0.5" num_classes="81" post_nms_count="2000" score_threshold="0.05000000074505806"/>
<input>
<port id="0">
<dim>1000</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>1000</dim>
<dim>324</dim>
</port>
<port id="2">
<dim>1000</dim>
<dim>81</dim>
</port>
<port id="3">
<dim>1</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="4" precision="FP32">
<dim>100</dim>
<dim>4</dim>
</port>
<port id="5" precision="I32">
<dim>100</dim>
</port>
<port id="6" precision="FP32">
<dim>100</dim>
</port>
<port id="7" precision="I32">
<dim>100</dim>
</port>
</output>
</layer>
```

@@ -0,0 +1,112 @@
## ExperimentalDetectronGenerateProposalsSingleImage <a name="ExperimentalDetectronGenerateProposalsSingleImage"></a> {#openvino_docs_ops_detection_ExperimentalDetectronGenerateProposalsSingleImage_6}
**Versioned name**: *ExperimentalDetectronGenerateProposalsSingleImage-6*
**Category**: Object detection
**Short description**: The *ExperimentalDetectronGenerateProposalsSingleImage* operation computes ROIs and their scores
based on input data.
**Detailed description**: The operation performs the following steps:
1. Transposes and reshapes predicted bounding box deltas and scores to get them into the same order as the anchors.
2. Transforms anchors into proposals using deltas and clips proposals to an image.
3. Removes predicted boxes with either height or width < *min_size*.
4. Sorts all `(proposal, score)` pairs by score from highest to lowest; order of pairs with equal scores is undefined.
5. Takes the top *pre_nms_count* proposals; if the total number of proposals is less than *pre_nms_count*, takes all of them.
6. Applies non-maximum suppression with *nms_threshold*.
7. Takes the top *post_nms_count* proposals and returns them with their scores. If the total number of proposals is less
than *post_nms_count*, the remaining output tensor elements are filled with zeroes.
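Steps 3-7 above can be sketched in plain Python. This is a simplified illustration with a naive greedy NMS; `generate_proposals` is a hypothetical name, and steps 1-2 (the delta transform and clipping) are omitted:

```python
def generate_proposals(proposals, scores, min_size, nms_threshold,
                       pre_nms_count, post_nms_count):
    """Steps 3-7: filter small boxes, sort by score, NMS, pad to post_nms_count."""
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # Step 3: remove boxes with width or height below min_size.
    kept = [(p, s) for p, s in zip(proposals, scores)
            if p[2] - p[0] >= min_size and p[3] - p[1] >= min_size]
    # Steps 4-5: sort by score (descending), keep the top pre_nms_count.
    kept = sorted(kept, key=lambda ps: ps[1], reverse=True)[:pre_nms_count]
    # Step 6: greedy non-maximum suppression.
    selected = []
    for p, s in kept:
        if all(iou(p, q) <= nms_threshold for q, _ in selected):
            selected.append((p, s))
    # Step 7: top post_nms_count; pad the outputs with zeroes when fewer remain.
    selected = selected[:post_nms_count]
    pad = post_nms_count - len(selected)
    rois = [p for p, _ in selected] + [[0.0] * 4] * pad
    out_scores = [s for _, s in selected] + [0.0] * pad
    return rois, out_scores
```

For example, with two heavily overlapping boxes and one distant box, the lower-scoring overlapping box is suppressed and the outputs are padded with zeroes up to *post_nms_count*.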
**Attributes**:
* *min_size*
* **Description**: The *min_size* attribute specifies minimum box width and height.
* **Range of values**: non-negative floating point number
* **Type**: float
* **Default value**: None
* **Required**: *yes*
* *nms_threshold*
* **Description**: The *nms_threshold* attribute specifies the threshold to be used in the NMS stage.
* **Range of values**: non-negative floating point number
* **Type**: float
* **Default value**: None
* **Required**: *yes*
* *pre_nms_count*
* **Description**: The *pre_nms_count* attribute specifies the number of top-scoring proposals kept before the NMS stage.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *post_nms_count*
* **Description**: The *post_nms_count* attribute specifies the number of top-scoring proposals kept after the NMS stage.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
**Inputs**
* **1**: A 1D tensor of type *T* with 3 elements `[image_height, image_width, scale_height_and_width]` providing input
image size info. **Required.**
* **2**: A 2D tensor of type *T* with shape `[height * width * number_of_channels, 4]` providing anchors. **Required.**
* **3**: A 3D tensor of type *T* with shape `[number_of_channels * 4, height, width]` providing deltas for anchors.
Height and width for third and fourth inputs should be equal. **Required.**
* **4**: A 3D tensor of type *T* with shape `[number_of_channels, height, width]` providing proposals scores.
**Required.**
**Outputs**
* **1**: A 2D tensor of type *T* with shape `[post_nms_count, 4]` providing ROIs.
* **2**: A 1D tensor of type *T* with shape `[post_nms_count]` providing ROIs scores.
**Types**
* *T*: any supported floating point type.
**Example**
```xml
<layer ... type="ExperimentalDetectronGenerateProposalsSingleImage" version="opset6">
<data min_size="0.0" nms_threshold="0.699999988079071" post_nms_count="1000" pre_nms_count="1000"/>
<input>
<port id="0">
<dim>3</dim>
</port>
<port id="1">
<dim>12600</dim>
<dim>4</dim>
</port>
<port id="2">
<dim>12</dim>
<dim>50</dim>
<dim>84</dim>
</port>
<port id="3">
<dim>3</dim>
<dim>50</dim>
<dim>84</dim>
</port>
</input>
<output>
<port id="4" precision="FP32">
<dim>1000</dim>
<dim>4</dim>
</port>
<port id="5" precision="FP32">
<dim>1000</dim>
</port>
</output>
</layer>
```

@@ -0,0 +1,116 @@
## ExperimentalDetectronPriorGridGenerator <a name="ExperimentalDetectronPriorGridGenerator"></a> {#openvino_docs_ops_detection_ExperimentalDetectronPriorGridGenerator_6}
**Versioned name**: *ExperimentalDetectronPriorGridGenerator-6*
**Category**: Object detection
**Short description**: The *ExperimentalDetectronPriorGridGenerator* operation generates prior grids of specified sizes.
**Detailed description**: The operation takes coordinates of centres of boxes and adds strides with offset `0.5` to them to
calculate coordinates of prior grids.
The number of generated cells is `featmap_height` and `featmap_width` if *h* and *w* are zeroes; otherwise, *h* and *w*,
respectively. The steps of the generated grid are `image_height` / `featmap_height` and `image_width` / `featmap_width` if
*stride_y* and *stride_x* are zeroes; otherwise, *stride_y* and *stride_x*, respectively.
`featmap_height`, `featmap_width`, `image_height` and `image_width` are the spatial dimension values of the second and
third inputs, respectively.
**Attributes**:
* *flatten*
* **Description**: The *flatten* attribute specifies whether the output tensor should be 2D or 4D.
* **Range of values**:
* `true` - the output tensor should be a 2D tensor
* `false` - the output tensor should be a 4D tensor
* **Type**: boolean
* **Default value**: true
* **Required**: *no*
* *h*
* **Description**: The *h* attribute specifies the number of cells of the generated grid with respect to height.
* **Range of values**: a non-negative integer less than or equal to `featmap_height`
* **Type**: int
* **Default value**: 0
* **Required**: *no*
* *w*
* **Description**: The *w* attribute specifies the number of cells of the generated grid with respect to width.
* **Range of values**: a non-negative integer less than or equal to `featmap_width`
* **Type**: int
* **Default value**: 0
* **Required**: *no*
* *stride_x*
* **Description**: The *stride_x* attribute specifies the step of the generated grid with respect to the x coordinate.
* **Range of values**: non-negative float number
* **Type**: float
* **Default value**: 0.0
* **Required**: *no*
* *stride_y*
* **Description**: The *stride_y* attribute specifies the step of the generated grid with respect to the y coordinate.
* **Range of values**: non-negative float number
* **Type**: float
* **Default value**: 0.0
* **Required**: *no*
**Inputs**
* **1**: A 2D tensor of type *T* with shape `[number_of_priors, 4]` containing the priors. **Required.**
* **2**: A 4D tensor of type *T* with the input feature map `[1, number_of_channels, featmap_height, featmap_width]`. This
operation uses only the size of this input tensor, not its data. **Required.**
* **3**: A 4D tensor of type *T* with the input image `[1, number_of_channels, image_height, image_width]`. The number of
channels of both the feature map and input image tensors must match. This operation uses only the size of this input
tensor, not its data. **Required.**
**Outputs**
* **1**: A tensor of type *T* containing the prior grid, with shape `[featmap_height * featmap_width * number_of_priors, 4]`
if *flatten* is `true`, or `[featmap_height, featmap_width, number_of_priors, 4]` otherwise.
If 0 < *h* < `featmap_height` and/or 0 < *w* < `featmap_width`, the output data size is less than
`featmap_height` * `featmap_width` * `number_of_priors` * 4, and the rest of the output tensor elements are filled with
undefined values.
**Types**
* *T*: any supported floating point type.
**Example**
```xml
<layer ... type="ExperimentalDetectronPriorGridGenerator" version="opset6">
<data flatten="true" h="0" stride_x="32.0" stride_y="32.0" w="0"/>
<input>
<port id="0">
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
<dim>25</dim>
<dim>42</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>3</dim>
<dim>800</dim>
<dim>1344</dim>
</port>
</input>
<output>
<port id="3" precision="FP32">
<dim>3150</dim>
<dim>4</dim>
</port>
</output>
</layer>
```
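A minimal NumPy sketch of the grid generation described above, sized to match the XML example (3 priors, a 25 × 42 feature map, strides of 32, so 25 * 42 * 3 = 3150 output rows); the (height, width, prior) flattening order is an assumption:

```python
import numpy as np

def prior_grid(priors, featmap_h, featmap_w, stride_x, stride_y, flatten=True):
    # shift each prior box to every cell centre: centre = (index + 0.5) * stride
    shift_x = (np.arange(featmap_w) + 0.5) * stride_x
    shift_y = (np.arange(featmap_h) + 0.5) * stride_y
    sx, sy = np.meshgrid(shift_x, shift_y)            # each [featmap_h, featmap_w]
    shifts = np.stack([sx, sy, sx, sy], axis=-1)      # [h, w, 4]
    grid = shifts[:, :, None, :] + priors[None, None, :, :]  # [h, w, num_priors, 4]
    return grid.reshape(-1, 4) if flatten else grid
```

With the example's parameters, the first output row is the first prior centred at (16, 16), the centre of the top-left grid cell.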


@@ -0,0 +1,139 @@
## ExperimentalDetectronROIFeatureExtractor <a name="ExperimentalDetectronROIFeatureExtractor"></a> {#openvino_docs_ops_detection_ExperimentalDetectronROIFeatureExtractor_6}
**Versioned name**: *ExperimentalDetectronROIFeatureExtractor-6*
**Category**: Object detection
**Short description**: *ExperimentalDetectronROIFeatureExtractor* is the [ROIAlign](ROIAlign_3.md) operation applied
over a feature pyramid.
**Detailed description**: *ExperimentalDetectronROIFeatureExtractor* maps input ROIs to the levels of the pyramid
depending on the sizes of ROIs and parameters of the operation, and then extracts features via ROIAlign from
corresponding pyramid levels.
The operation applies the *ROIAlign* algorithm to the pyramid layers:
`output[i, :, :, :] = ROIAlign(inputPyramid[j], rois[i])`
`j = PyramidLevelMapper(rois[i])`
PyramidLevelMapper maps the ROI to the pyramid level using the following formula:
`j = floor(2 + log2(sqrt(w * h) / 224))`
Here 224 is the canonical ImageNet pre-training size, 2 is the pyramid starting level, and `w`, `h` are the ROI width and height.
For more details please see the following source:
[Feature Pyramid Networks for Object Detection](https://arxiv.org/pdf/1612.03144.pdf).
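The level-mapping formula can be written out directly; the clamping of `j` to the available pyramid levels (`k_min`, `k_max` below) is an assumption not stated in the formula itself:

```python
import math

def pyramid_level(w, h, canonical_size=224, canonical_level=2, k_min=0, k_max=3):
    # j = floor(2 + log2(sqrt(w * h) / 224)), then clamped to valid level indices
    j = math.floor(canonical_level + math.log2(math.sqrt(w * h) / canonical_size))
    return max(k_min, min(k_max, j))
```

A 224 × 224 ROI lands on the canonical level 2; halving both sides moves it one level down.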
**Attributes**:
* *output_size*
* **Description**: The *output_size* attribute specifies the width and height of the output tensor.
* **Range of values**: a positive integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *sampling_ratio*
* **Description**: The *sampling_ratio* attribute specifies the number of sampling points per output value. If 0,
an adaptive number is used, computed as `ceil(roi_width / output_width)`, and likewise for height.
* **Range of values**: a non-negative integer number
* **Type**: int
* **Default value**: None
* **Required**: *yes*
* *pyramid_scales*
* **Description**: The *pyramid_scales* attribute lists the `image_size / layer_size[l]` ratios for pyramid layers `l=1,...,L`,
where `L` is the number of pyramid layers, and `image_size` refers to the network's input image. Note that the pyramid's
largest layer may be smaller than the input image, e.g. `image_size` is `800 x 1344` in the XML example below.
* **Range of values**: a list of positive integer numbers
* **Type**: int[]
* **Default value**: None
* **Required**: *yes*
* *aligned*
* **Description**: The *aligned* attribute specifies whether to add an offset (`-0.5`) to the ROI sizes.
* **Range of values**:
* `true` - add the offset to the ROI sizes
* `false` - do not add the offset to the ROI sizes
* **Type**: boolean
* **Default value**: false
* **Required**: *no*
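The adaptive *sampling_ratio* rule described above reduces to a small helper; the function name is hypothetical, for illustration only:

```python
import math

def sampling_points(roi_size, output_size, sampling_ratio):
    # number of sampling points per output bin along one axis;
    # 0 selects the adaptive ceil(roi_size / output_size) rule
    return sampling_ratio if sampling_ratio > 0 else math.ceil(roi_size / output_size)
```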
**Inputs**:
* **1**: 2D input tensor of type *T* with shape `[number_of_ROIs, 4]` providing the ROIs as 4-tuples:
[x<sub>1</sub>, y<sub>1</sub>, x<sub>2</sub>, y<sub>2</sub>]. The *x* and *y* coordinates refer to the network's input
*image_size*. **Required**.
* **2**, ..., **L**: Pyramid of 4D input tensors with feature maps. Shape must be
`[1, number_of_channels, layer_size[l], layer_size[l]]`. The number of channels must be the same for all layers of the
pyramid. The layer width and height must be equal to `layer_size[l] = image_size / pyramid_scales[l]`. **Required**.
**Outputs**:
* **1**: 4D output tensor of type *T* with ROI features. Shape must be
`[number_of_ROIs, number_of_channels, output_size, output_size]`. The number of channels is the same as for all layers
of the input pyramid.
* **2**: 2D output tensor of type *T* with the ROIs reordered according to their mapping to the pyramid levels. Shape
must be the same as for input 1: `[number_of_ROIs, 4]`.
**Types**
* *T*: any supported floating point type.
**Example**
```xml
<layer ... type="ExperimentalDetectronROIFeatureExtractor" version="opset6">
<data aligned="false" output_size="7" pyramid_scales="4,8,16,32,64" sampling_ratio="2"/>
<input>
<port id="0">
<dim>1000</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
<dim>200</dim>
<dim>336</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>256</dim>
<dim>100</dim>
<dim>168</dim>
</port>
<port id="3">
<dim>1</dim>
<dim>256</dim>
<dim>50</dim>
<dim>84</dim>
</port>
<port id="4">
<dim>1</dim>
<dim>256</dim>
<dim>25</dim>
<dim>42</dim>
</port>
</input>
<output>
<port id="5" precision="FP32">
<dim>1000</dim>
<dim>256</dim>
<dim>7</dim>
<dim>7</dim>
</port>
<port id="6" precision="FP32">
<dim>1000</dim>
<dim>4</dim>
</port>
</output>
</layer>
```


@@ -50,6 +50,11 @@ declared in `namespace opset6`.
* [Equal](comparison/Equal_1.md)
* [Erf](arithmetic/Erf_1.md)
* [Exp](activation/Exp_1.md)
* [ExperimentalDetectronDetectionOutput_6](detection/ExperimentalDetectronDetectionOutput_6.md)
* [ExperimentalDetectronGenerateProposalsSingleImage_6](detection/ExperimentalDetectronGenerateProposalsSingleImage_6.md)
* [ExperimentalDetectronPriorGridGenerator_6](detection/ExperimentalDetectronPriorGridGenerator_6.md)
* [ExperimentalDetectronROIFeatureExtractor_6](detection/ExperimentalDetectronROIFeatureExtractor_6.md)
* [ExperimentalDetectronTopKROIs_6](sort/ExperimentalDetectronTopKROIs_6.md)
* [ExtractImagePatches](movement/ExtractImagePatches_3.md)
* [FakeQuantize](quantization/FakeQuantize_1.md)
* [Floor](arithmetic/Floor_1.md)


@@ -0,0 +1,61 @@
## ExperimentalDetectronTopKROIs <a name="ExperimentalDetectronTopKROIs"></a> {#openvino_docs_ops_sort_ExperimentalDetectronTopKROIs_6}
**Versioned name**: *ExperimentalDetectronTopKROIs-6*
**Category**: Sort
**Short description**: The *ExperimentalDetectronTopKROIs* operation is a TopK operation applied to the probabilities of
the input ROIs.
**Detailed description**: The operation sorts the input ROIs by probability in descending order and returns the top
*max_rois* of them. The order of sorted ROIs with equal probabilities is undefined. If the number of input ROIs is less
than *max_rois*, the operation returns all ROIs sorted in descending order, and the rest of the output tensor is filled
with undefined values.
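A NumPy sketch of this selection behaviour (using NaN to stand in for the undefined tail values is an assumption, not part of the specification):

```python
import numpy as np

def top_k_rois(rois, probs, max_rois):
    # sort ROIs by probability, descending; ties land in unspecified order
    order = np.argsort(-probs)
    out = np.full((max_rois, 4), np.nan, dtype=float)  # undefined tail modelled as NaN
    k = min(max_rois, rois.shape[0])
    out[:k] = rois[order[:k]]
    return out
```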
**Attributes**:
* *max_rois*
* **Description**: The *max_rois* attribute specifies the maximal number of output ROIs.
* **Range of values**: non-negative integer number
* **Type**: int
* **Default value**: 0
* **Required**: *no*
**Inputs**
* **1**: A 2D tensor of type *T* with shape `[number_of_ROIs, 4]` describing the ROIs as 4-tuples:
[x<sub>1</sub>, y<sub>1</sub>, x<sub>2</sub>, y<sub>2</sub>]. **Required.**
* **2**: A 1D tensor of type *T* with shape `[number_of_input_ROIs]` containing probabilities for the input ROIs. **Required.**
**Outputs**
* **1**: A 2D tensor of type *T* with shape `[max_rois, 4]` describing the *max_rois* ROIs with the highest probabilities.
**Types**
* *T*: any supported floating point type.
**Example**
```xml
<layer ... type="ExperimentalDetectronTopKROIs" version="opset6">
<data max_rois="1000"/>
<input>
<port id="0">
<dim>5000</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>5000</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1000</dim>
<dim>4</dim>
</port>
</output>
</layer>
```


@@ -589,7 +589,7 @@ The Model Hosting components install the OpenVINO™ Security Add-on Runtime Doc
This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable <a href="#setup-host">set up steps</a> and <a href="#ovsa-install">installation steps</a> before beginning this section.
-This document uses the [face-detection-retail-0004](@ref omz_models_intel_face_detection_retail_0004_description_face_detection_retail_0004) model as an example.
+This document uses the [face-detection-retail-0004](@ref omz_models_model_face_detection_retail_0044) model as an example.
The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User.


@@ -186,9 +186,9 @@ endif ()
if (ENABLE_OPENCV)
reset_deps_cache(OpenCV_DIR)
-set(OPENCV_VERSION "4.5.1")
-set(OPENCV_BUILD "044")
-set(OPENCV_BUILD_YOCTO "337")
+set(OPENCV_VERSION "4.5.2")
+set(OPENCV_BUILD "076")
+set(OPENCV_BUILD_YOCTO "708")
if (AARCH64)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
@@ -208,7 +208,7 @@ if (ENABLE_OPENCV)
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}_${OPENCV_SUFFIX}/opencv"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*"
-SHA256 "b5239e0e50b9009f95a29cb11f0840ec085fa07f6c4d3349adf090f1e51b0787")
+SHA256 "ee3e5255f381b8de5e6fffe4e43dae8c99035377d0380f9183bd7341f1d0f204")
unset(IE_PATH_TO_DEPS)
endif()
@@ -219,37 +219,37 @@ if (ENABLE_OPENCV)
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}/opencv"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*"
-SHA256 "5250bfe5860c15eb1b31963c78804ee9b301a19d8d6e920c06ef41de681cb99e")
+SHA256 "a14f872e6b63b6ac12c7ff47fa49e578d14c14433b57f5d85ab5dd48a079938c")
elseif(APPLE AND X86_64)
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_MAC "opencv/opencv_${OPENCV_VERSION}-${OPENCV_BUILD}_osx.txz"
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}_osx/opencv"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*"
-SHA256 "f3ebc5cc72c86106c30cc711ac689e02281556bb43c09a89cd45cb99b6bef9a8")
+SHA256 "3e162f96e86cba8836618134831d9cf76df0438778b3e27e261dedad9254c514")
elseif(LINUX)
if (AARCH64)
set(OPENCV_SUFFIX "yocto_kmb")
set(OPENCV_BUILD "${OPENCV_BUILD_YOCTO}")
elseif (ARM)
set(OPENCV_SUFFIX "debian9arm")
-set(OPENCV_HASH "0e787d6738092993bc92bb55975f52caabae45dc73473b5196d15e65e87d6b9d")
+set(OPENCV_HASH "4274f8c40b17215f4049096b524e4a330519f3e76813c5a3639b69c48633d34e")
elseif ((LINUX_OS_NAME STREQUAL "CentOS 7" OR
CMAKE_CXX_COMPILER_VERSION VERSION_LESS "4.9") AND X86_64)
set(OPENCV_SUFFIX "centos7")
-set(OPENCV_HASH "9b813af064d463b31fa1603b11b6559532a031d59bb0782d234380955fd397e0")
+set(OPENCV_HASH "5fa76985c84fe7c64531682ef0b272510c51ac0d0565622514edf1c88b33404a")
elseif (LINUX_OS_NAME MATCHES "CentOS 8" AND X86_64)
set(OPENCV_SUFFIX "centos8")
-set(OPENCV_HASH "8ec3e3552500dee334162386b98cc54a5608de1f1a18f283523fc0cc13ee2f83")
+set(OPENCV_HASH "db087dfd412eedb8161636ec083ada85ff278109948d1d62a06b0f52e1f04202")
elseif (LINUX_OS_NAME STREQUAL "Ubuntu 16.04" AND X86_64)
set(OPENCV_SUFFIX "ubuntu16")
set(OPENCV_HASH "cd46831b4d8d1c0891d8d22ff5b2670d0a465a8a8285243059659a50ceeae2c3")
elseif (LINUX_OS_NAME STREQUAL "Ubuntu 18.04" AND X86_64)
set(OPENCV_SUFFIX "ubuntu18")
-set(OPENCV_HASH "8ec3e3552500dee334162386b98cc54a5608de1f1a18f283523fc0cc13ee2f83")
+set(OPENCV_HASH "db087dfd412eedb8161636ec083ada85ff278109948d1d62a06b0f52e1f04202")
elseif ((LINUX_OS_NAME STREQUAL "Ubuntu 20.04" OR LINUX_OS_NAME STREQUAL "LinuxMint 20.1") AND X86_64)
set(OPENCV_SUFFIX "ubuntu20")
-set(OPENCV_HASH "2b7808d002864acdc5fc0b19cd30dadc31a37cc267931cad605f23f2383bfc21")
+set(OPENCV_HASH "2fe7bbc40e1186eb8d099822038cae2821abf617ac7a16fadf98f377c723e268")
elseif(NOT DEFINED OpenCV_DIR AND NOT DEFINED ENV{OpenCV_DIR})
message(FATAL_ERROR "OpenCV is not available on current platform (${LINUX_OS_NAME})")
endif()


@@ -1,15 +1,15 @@
-# Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README}
+# Hello Reshape SSD Python Sample {#openvino_inference_engine_samples_python_hello_reshape_ssd_README}
This topic demonstrates how to run the Hello Reshape SSD application, which does inference using object detection
-networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../../../docs/IE_DG/ShapeInference.md).
+networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../../../../../docs/IE_DG/ShapeInference.md).
-> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
## Running
To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader).
-> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
@@ -25,6 +25,6 @@ of the detected objects along with the respective confidence values and the coor
rectangles to the standard output stream.
## See Also
-* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
+* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader)
-* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)


@@ -9,6 +9,7 @@ py_modules =
[options.package_data]
mo = *.txt
compression.configs.hardware = *.json
[options.entry_points]
console_scripts =


@@ -792,6 +792,7 @@ mo/front/caffe/python_layer_extractor.py
mo/front/caffe/register_custom_ops.py
mo/front/common/__init__.py
mo/front/common/custom_replacement_registry.py
mo/front/common/extractors/__init__.py
mo/front/common/extractors/utils.py
mo/front/common/find_unsupported_ops.py
mo/front/common/layout.py
@@ -987,6 +988,7 @@ mo/utils/ir_engine/compare_graphs.py
mo/utils/ir_engine/ir_engine.py
mo/utils/ir_reader/__init__.py
mo/utils/ir_reader/extender.py
mo/utils/ir_reader/extenders/__init__.py
mo/utils/ir_reader/extenders/binary_convolution_extender.py
mo/utils/ir_reader/extenders/bucketize_extender.py
mo/utils/ir_reader/extenders/conv_extender.py