Compare commits

...

30 Commits

Author SHA1 Message Date
Alexey Suhov
0e55117a42 Fix license header in Movidius sources 2021-06-02 21:23:09 +03:00
Alina Alborova
62601251c7 GNA Plugin doc review (#2922)
* Doc review

* Addressed comments

* Removed an inexistent link
2020-12-07 19:25:03 +03:00
Vitaliy Urusovskij
bff33818bb Fix paths for squeezenet1.1 in time_tests config (#3416) 2020-11-30 18:45:11 +03:00
Nikolay Tyukaev
f5e2fff67d ops math formula fix (#3333)
Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
2020-11-30 12:14:46 +03:00
Alina Alborova
f2a3d6b497 Fix a typo in DL Workbench Get Started (#3338)
* Fixed a typo

* Update openvino_docs.xml

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2020-11-25 16:41:18 +03:00
Vitaliy Urusovskij
6adaad64d9 Add several new models to tgl_test_config.yml in time_tests (#3269)
* Fix wrong path for `yolo-v2-tiny-ava-0001` for time_tests

* Add several new models to `tgl_test_config.yml` in time_tests
2020-11-24 11:06:40 +03:00
Andrey Zaytsev
20fd0bc738 Feature/azaytsev/change layout (#3295)
* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
2020-11-23 20:46:26 +03:00
Alina Alborova
9d5b2002d2 [41545] Add links to DL Workbench from components that are available in the DL WB (#2801)
* Added links to MO and Benchmark App

* Changed wording

* Fixes a link

* fixed a link

* Changed the wording
2020-11-20 20:37:52 +03:00
Alina Alborova
57eee6a583 Links to DL Workbench Installation Guide (#2861)
* Links to WB

* Changed wording

* Changed wording

* Fixes

* Changes the wording

* Minor corrections

* Removed an extra point
2020-11-20 20:37:20 +03:00
Alina Alborova
751ef42424 [40929] DL Workbench in Get Started (#2740)
* Initial commit

* Added the doc

* More instructions and images

* Added slide

* Borders for screenshots

* fixes

* Fixes

* Added link to Benchmark app

* Replaced the image

* tiny fix

* tiny fix
2020-11-20 20:36:26 +03:00
Rafal Blaczkowski
43a6e4cfa0 Fix onnx tests versions (#3240) 2020-11-20 11:15:25 +03:00
Vitaliy Urusovskij
38892b24fc Align time_tests with master (#3238)
* Align time_tests with master

* Fix "results" uploading to DB in time_tests

* Add new model to `tgl_test_config.yml`
2020-11-20 11:13:49 +03:00
Kate Generalova
bd3ba38e96 [DOC] Update Docker install guide (#3055) (#3200)
* [DOC] Update Docker install guide

* [DOC] Add proxy for Windows Docker install guide

* [DOC] move up prebuilt images section

* Update installing-openvino-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

Formatting fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* [DOC] update text with CPU image, remove proxy for win

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

(cherry picked from commit 4a09888ef4)
2020-11-18 20:25:16 +03:00
Alina Alborova
78f8b6a36c Renamed Benchmark App into Benchmark Tool in the menu (#3032) 2020-11-16 11:48:48 +03:00
Alina Alborova
d2dc54fc37 Fixes (#3105) 2020-11-16 11:48:31 +03:00
Alina Alborova
14aa83f4d9 See Also sections in MO Guide (#2770)
* convert to doxygen comments

* layouts and code comments

* separate layout

* Changed layouts

* Removed FPGA from the documentation

* Updated according to CVS-38225

* some changes

* Made changes to benchmarks according to review comments

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Fixed table formatting

* update api layouts

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* some layout changes

* some layout changes

* some layout changes

* COnverted svg images to png

* layouts

* update layout

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image

* removed links to ../IE_DG/Introduction.md

* Removed links to tools overview page as removed

* some changes

* Remove link to Integrate_your_kernels_into_IE.md

* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed

* update layouts

* Post-release fixes and installation path changes

* Added PIP installation and Build from Source to the layout

* Fixed formatting issue, removed broken link

* Renamed section EXAMPLES to RESOURCES according to review comments

* add mo faq navigation by url param

* Removed DLDT description

* Pt 1

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Extra file

* Update IR_and_opsets.md

* Update Known_Issues_Limitations.md

* Update Config_Model_Optimizer.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_ONNX.md

* Update Convert_Model_From_TensorFlow.md

* Update Converting_Model_General.md

* Update Cutting_Model.md

* Update IR_suitable_for_INT8_inference.md

* Update Aspire_Tdnn_Model.md

* Update Convert_Model_From_Caffe.md

* Update Convert_Model_From_TensorFlow.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_Kaldi.md

* Added references to other fws from each fw

* Fixed broken links

* Fixed broken links

* fixes

* fixes

* Fixed wrong links

Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Tyukaev <nikolay.tyukaev@intel.com>
2020-11-06 17:24:07 +03:00
Andrey Zaytsev
313d88931a Feature/azaytsev/cherry pick pr2541 to 2021 1 (#2960)
* added OpenVINO Model Server to docs (#2541)

* added OpenVINO Model Server

* updated documentation to include valid links

* minor fixes

* Fixed links and style

* Update README.md

fixed links to model_server

* more corrections

* dropped reference in ie_docs and minor fixes

* Update README.md

Fixed links to Inference Engine pages

Co-authored-by: Alina Alborova <alina.alborova@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Added Model Server docs to 2021/1

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: Alina Alborova <alina.alborova@intel.com>
2020-11-03 22:21:37 +03:00
Andrey Zaytsev
a7ab76e78e Added info on DockerHub CI Framework (#2919) 2020-11-03 16:14:32 +03:00
Vitaliy Urusovskij
a081dfea0f Align time_tests with master branch from 4021e144 (#2881) 2020-10-30 21:08:25 +03:00
Nikolay Tyukaev
2cf8999d23 add animation (#2865) 2020-10-30 17:04:37 +03:00
Alina Alborova
eec2fd8a8b [DOCS] [41549] Fix broken code block in Install OpenVINO from PyPI Repository (#2800)
* Turned list into headings

* fixes

* fix
2020-10-26 14:12:10 +03:00
Alexey Suhov
4a46be7631 [install_dependencies.sh] install latest cmake if current version is lower 3.13 (#2695) (#2701)
* [install_dependencies.sh] install latest cmake if current version is lower 3.13

* add shellcheck for Ubuntu

* install python 2.7 for Ubuntu
2020-10-16 21:20:06 +03:00
Andrey Zaytsev
c112547a50 Fixed CVS-35316 (#2072) (#2670)
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2020-10-15 12:29:12 +03:00
Andrey Zaytsev
41e7475731 Feature/ntyukaev/separate layout (#2629)
* convert to doxygen comments

* layouts and code comments

* separate layout

* Changed layouts

* Removed FPGA from the documentation

* Updated according to CVS-38225

* some changes

* Made changes to benchmarks according to review comments

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Fixed table formatting

* update api layouts

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* some layout changes

* some layout changes

* some layout changes

* COnverted svg images to png

* layouts

* update layout

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image

* removed links to ../IE_DG/Introduction.md

* Removed links to tools overview page as removed

* some changes

* Remove link to Integrate_your_kernels_into_IE.md

* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed

* update layouts

* Post-release fixes and installation path changes

* Added PIP installation and Build from Source to the layout

* Fixed formatting issue, removed broken link

* Renamed section EXAMPLES to RESOURCES according to review comments

* add mo faq navigation by url param

* Removed DLDT description

* Replaced wrong links

* MInor fix for path to the cpp samples

* fixes

* Update ops.py

* Fix style

Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
Co-authored-by: Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: aalborov <alina.alborova@intel.com>
Co-authored-by: Rafal Blaczkowski <rafal.blaczkowski@intel.com>
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2020-10-14 20:13:04 +03:00
Anton Romanov
f050de86dd Improve pip installation guide (#2644)
* Improve pip installation guide

* Updated after comments
2020-10-14 12:18:39 +03:00
Rafal Blaczkowski
3c4b116895 Skip hanging test case of OpenVino ONNX CI (#2608)
* Update OpenVino ONNX CI

* Change parallel execution to single

* Enlarge timeout

* Remove timeout

* Add timeout to test execution

* Skip hanging test

* Add description to skip issue
2020-10-12 07:00:15 +03:00
Rafal Blaczkowski
a5f538462d Update OpenVino ONNX CI check (#2599)
* Update OpenVino ONNX CI

* Change parallel execution to single

* Enlarge timeout

* Remove timeout

* Add timeout to test execution
2020-10-09 15:14:10 +03:00
Anton Romanov
0731f67e9f Added pip install documentation (#2465)
* Added pip install documentation

* Change references

* tiny fixes of links

* Update installing-openvino-pip.md

Co-authored-by: Alina Alborova <alina.alborova@intel.com>
2020-10-09 12:24:04 +03:00
Gleb Kazantaev
4793774d18 Added deprecation note for PassConfig class (#2593) 2020-10-08 18:11:19 +03:00
Ilya Churaev
ea06196afb Fixed links to images (#2569) 2020-10-07 13:32:47 +03:00
133 changed files with 4651 additions and 2716 deletions

View File

@@ -68,7 +68,7 @@ def buildDockerImage() {
def runTests() {
sh """
docker run --rm --name ${DOCKER_CONTAINER_NAME} \
docker run --name ${DOCKER_CONTAINER_NAME} \
--volume ${HOME}/ONNX_CI/onnx_models/.onnx:/root/.onnx ${DOCKER_IMAGE_TAG}
"""
}
@@ -101,6 +101,9 @@ pipeline {
}
}
stage("Run tests") {
options {
timeout(time: 10, unit: 'MINUTES')
}
steps{
runTests()
}
@@ -118,6 +121,7 @@ pipeline {
deleteDir()
sh """
docker image prune -f
docker rm -f ${DOCKER_CONTAINER_NAME}
"""
}
}

View File

@@ -66,13 +66,13 @@ The software was validated on:
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
2. Install build dependencies using the `install_build_dependencies.sh` script in the
project root folder.
```sh
chmod +x install_dependencies.sh
chmod +x install_build_dependencies.sh
```
```sh
./install_dependencies.sh
./install_build_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to

View File

@@ -195,7 +195,7 @@ For a step-by-step walk-through creating and executing a custom layer, see [Cust
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Kernel Extensivility in the Inference Engine Developer Guide](../IE_DG/Integrate_your_kernels_into_IE.md)
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)

View File

@@ -42,8 +42,6 @@ inference of a pre-trained and optimized deep learning model and a set of sample
## Table of Contents
* [Introduction to Intel® Deep Learning Deployment Toolkit](Introduction.md)
* [Inference Engine API Changes History](API_Changes.md)
* [Introduction to Inference Engine](inference_engine_intro.md)
@@ -87,4 +85,4 @@ inference of a pre-trained and optimized deep learning model and a set of sample
* [Known Issues](Known_Issues_Limitations.md)
**Typical Next Step:** [Introduction to Intel® Deep Learning Deployment Toolkit](Introduction.md)
**Typical Next Step:** [Introduction to Inference Engine](inference_engine_intro.md)

View File

@@ -116,7 +116,7 @@ For Intel® Distribution of OpenVINO™ toolkit, the Inference Engine package co
[sample console applications](Samples_Overview.md) demonstrating how you can use
the Inference Engine in your applications.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">Inference Engine Build Instructions</a>.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
## See Also
- [Inference Engine Samples](Samples_Overview.md)
- [Intel&reg; Deep Learning Deployment Toolkit Web Page](https://software.intel.com/en-us/computer-vision-sdk)

View File

@@ -12,4 +12,4 @@ The OpenVINO™ Python\* package includes the following sub-packages:
- `openvino.tools.benchmark` - Measure latency and throughput.
## See Also
* [Introduction to Intel's Deep Learning Inference Engine](Introduction.md)
* [Introduction to Inference Engine](inference_engine_intro.md)

View File

@@ -53,7 +53,7 @@ The officially supported Linux* build environment is the following:
* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
* CMake* version 3.10 or higher
> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md).
> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
```sh
@@ -183,4 +183,4 @@ sample, read the sample documentation by clicking the sample name in the samples
list above.
## See Also
* [Introduction to Intel's Deep Learning Inference Engine](Introduction.md)
* [Introduction to Inference Engine](inference_engine_intro.md)

View File

@@ -14,4 +14,4 @@ The OpenVINO™ toolkit installation includes the following tools:
## See Also
* [Introduction to Deep Learning Inference Engine](Introduction.md)
* [Introduction to Inference Engine](inference_engine_intro.md)

View File

@@ -7,11 +7,11 @@ Inference Engine is a set of C++ libraries providing a common API to deliver inf
For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">Inference Engine Build Instructions</a>.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
For complete API Reference, see the [API Reference](usergroup29.html) section.
For complete API Reference, see the [Inference Engine API References](./api_references.html) section.
Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.

View File

@@ -65,7 +65,7 @@ CNNNetwork network = core.ReadNetwork(strModel, make_shared_blob<uint8_t>({Preci
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- Model Optimizer Developer Guide: [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.html)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).

View File

@@ -2,98 +2,98 @@
## Introducing the GNA Plugin
Intel&reg; Gaussian & Neural Accelerator is a low-power neural coprocessor for continuous inference at the edge.
Intel® Gaussian & Neural Accelerator is a low-power neural coprocessor for continuous inference at the edge.
Intel&reg; GNA is not intended to replace classic inference devices such as
CPU, graphics processing unit (GPU), or vision processing unit (VPU) . It is designed for offloading
Intel® GNA is not intended to replace classic inference devices such as
CPU, graphics processing unit (GPU), or vision processing unit (VPU). It is designed for offloading
continuous inference workloads including but not limited to noise reduction or speech recognition
to save power and free CPU resources.
The GNA plugin provides a way to run inference on Intel&reg; GNA, as well as in the software execution mode on CPU.
The GNA plugin provides a way to run inference on Intel® GNA, as well as in the software execution mode on CPU.
## Devices with Intel&reg; GNA
## Devices with Intel® GNA
Devices with Intel&reg; GNA support:
Devices with Intel® GNA support:
* [Intel&reg; Speech Enabling Developer Kit](https://www.intel.com/content/www/us/en/support/articles/000026156/boards-and-kits/smart-home.html)
* [Intel® Speech Enabling Developer Kit](https://www.intel.com/content/www/us/en/support/articles/000026156/boards-and-kits/smart-home.html)
* [Amazon Alexa* Premium Far-Field Developer Kit](https://developer.amazon.com/en-US/alexa/alexa-voice-service/dev-kits/amazon-premium-voice)
* [Amazon Alexa\* Premium Far-Field Developer Kit](https://developer.amazon.com/en-US/alexa/alexa-voice-service/dev-kits/amazon-premium-voice)
* [Intel&reg; Pentium&reg; Silver Processors N5xxx, J5xxx and Intel&reg; Celeron&reg; Processors N4xxx, J4xxx](https://ark.intel.com/content/www/us/en/ark/products/codename/83915/gemini-lake.html):
- Intel&reg; Pentium&reg; Silver J5005 Processor
- Intel&reg; Pentium&reg; Silver N5000 Processor
- Intel&reg; Celeron&reg; J4005 Processor
- Intel&reg; Celeron&reg; J4105 Processor
- Intel&reg; Celeron&reg; Processor N4100
- Intel&reg; Celeron&reg; Processor N4000
* [Intel® Pentium® Silver Processors N5xxx, J5xxx and Intel® Celeron® Processors N4xxx, J4xxx](https://ark.intel.com/content/www/us/en/ark/products/codename/83915/gemini-lake.html):
- Intel® Pentium® Silver J5005 Processor
- Intel® Pentium® Silver N5000 Processor
- Intel® Celeron® J4005 Processor
- Intel® Celeron® J4105 Processor
- Intel® Celeron® Processor N4100
- Intel® Celeron® Processor N4000
* [Intel&reg; Core&trade; Processors (formerly codenamed Cannon Lake)](https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html):
Intel&reg; Core&trade; i3-8121U Processor
* [Intel® Core™ Processors (formerly codenamed Cannon Lake)](https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html):
Intel® Core™ i3-8121U Processor
* [10th Generation Intel&reg; Core&trade; Processors (formerly codenamed Ice Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/74979/ice-lake.html):
- Intel&reg; Core&trade; i7-1065G7 Processor
- Intel&reg; Core&trade; i7-1060G7 Processor
- Intel&reg; Core&trade; i5-1035G4 Processor
- Intel&reg; Core&trade; i5-1035G7 Processor
- Intel&reg; Core&trade; i5-1035G1 Processor
- Intel&reg; Core&trade; i5-1030G7 Processor
- Intel&reg; Core&trade; i5-1030G4 Processor
- Intel&reg; Core&trade; i3-1005G1 Processor
- Intel&reg; Core&trade; i3-1000G1 Processor
- Intel&reg; Core&trade; i3-1000G4 Processor
* [10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/74979/ice-lake.html):
- Intel® Core™ i7-1065G7 Processor
- Intel® Core™ i7-1060G7 Processor
- Intel® Core™ i5-1035G4 Processor
- Intel® Core™ i5-1035G7 Processor
- Intel® Core™ i5-1035G1 Processor
- Intel® Core™ i5-1030G7 Processor
- Intel® Core™ i5-1030G4 Processor
- Intel® Core™ i3-1005G1 Processor
- Intel® Core™ i3-1000G1 Processor
- Intel® Core™ i3-1000G4 Processor
* All [11th Generation Intel&reg; Core&trade; Processors (formerly codenamed Tiger Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html).
* All [11th Generation Intel® Core™ Processors (formerly codenamed Tiger Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html).
> **NOTE**: On platforms where Intel&reg; GNA is not enabled in the BIOS, the driver cannot be installed, so the GNA plugin uses the software emulation mode only.
> **NOTE**: On platforms where Intel® GNA is not enabled in the BIOS, the driver cannot be installed, so the GNA plugin uses the software emulation mode only.
## Drivers and Dependencies
Intel&reg; GNA hardware requires a driver to be installed on the system.
Intel® GNA hardware requires a driver to be installed on the system.
* Linux\* OS:
[Download Intel&reg; GNA driver for Ubuntu Linux 18.04.3 LTS (with HWE Kernel version 5.0+)](https://download.01.org/opencv/drivers/gna/)
[Download Intel® GNA driver for Ubuntu Linux 18.04.3 LTS (with HWE Kernel version 5.0+)](https://download.01.org/opencv/drivers/gna/)
* Windows\* OS:
Intel&reg; GNA driver for Windows is available through Windows Update\*
Intel® GNA driver for Windows is available through Windows Update\*
## Models and Layers Limitations
Because of specifics of hardware architecture, Intel&reg; GNA supports a limited set of layers, their kinds and combinations.
For example, you should not expect the GNA Plugin to be able to run computer vision models, except those specifically adapted for the GNA Plugin, because the plugin does not fully support
2D convolutions.
Because of specifics of hardware architecture, Intel® GNA supports a limited set of layers, their kinds and combinations.
For example, you should not expect the GNA Plugin to be able to run computer vision models, except those specifically adapted
for the GNA Plugin, because the plugin does not fully support 2D convolutions.
For the list of supported layers, see the **GNA** column of the **Supported Layers** section in [Supported Devices](Supported_Devices.md).
The list of supported layers can be found
[here](Supported_Devices.md) (see the GNA column of Supported Layers section).
Limitations include:
- Only 1D convolutions are natively supported in the models converted from:
- [Kaldi](../../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md) framework;
- [TensorFlow](../../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md) framework; note that for TensorFlow models, the option `--disable_nhwc_to_nchw` must be used when running the Model Optimizer.
- The number of output channels for convolutions must be a multiple of 4
- Permute layer support is limited to the cases where no data reordering is needed, or when reordering is happening for 2 dimensions, at least one of which is not greater than 8
- [Kaldi](../../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md) framework
- [TensorFlow](../../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md) framework. For TensorFlow models, use the `--disable_nhwc_to_nchw` option when running the Model Optimizer.
- The number of output channels for convolutions must be a multiple of 4.
- Permute layer support is limited to the cases where no data reordering is needed or when reordering is happening for two dimensions, at least one of which is not greater than 8.
#### Experimental Support for 2D Convolutions
The Intel&reg; GNA hardware natively supports only 1D convolution.
The Intel® GNA hardware natively supports only 1D convolution.
However, 2D convolutions can be mapped to 1D when a convolution kernel moves in a single direction. Such a transformation is performed by the GNA Plugin for Kaldi `nnet1` convolution. From this perspective, the Intel&reg; GNA hardware convolution operation accepts a `NHWC` input and produces `NHWC` output. Because OpenVINO&trade; only supports the `NCHW` layout, it may be necessary to insert `Permute` layers before or after convolutions.
However, 2D convolutions can be mapped to 1D when a convolution kernel moves in a single direction. GNA Plugin performs such a transformation for Kaldi `nnet1` convolution. From this perspective, the Intel® GNA hardware convolution operation accepts an `NHWC` input and produces an `NHWC` output. Because OpenVINO only supports the `NCHW` layout, you may need to insert `Permute` layers before or after convolutions.
For example, the Kaldi model optimizer inserts such a permute after convolution for the [rm_cnn4a network](https://download.01.org/openvinotoolkit/models_contrib/speech/kaldi/rm_cnn4a_smbr/). This `Permute` layer is automatically removed by the GNA Plugin, because the Intel&reg; GNA hardware convolution layer already produces the required `NHWC` result.
For example, the Kaldi model optimizer inserts such a permute after convolution for the [rm_cnn4a network](https://download.01.org/openvinotoolkit/models_contrib/speech/kaldi/rm_cnn4a_smbr/). This `Permute` layer is automatically removed by the GNA Plugin, because the Intel® GNA hardware convolution layer already produces the required `NHWC` result.
## Operation Precision
Intel&reg; GNA essentially operates in the low-precision mode, which represents a mix of 8-bit (`I8`), 16-bit (`I16`), and 32-bit (`I32`) integer computations, so compared to 32-bit floating point (`FP32`) results for example, calculated on CPU using Inference Engine [CPU Plugin](CPU.md) outputs calculated using reduced integer precision are different from the scores calculated using floating point.
Intel® GNA essentially operates in the low-precision mode, which represents a mix of 8-bit (`I8`), 16-bit (`I16`), and 32-bit (`I32`) integer computations. Outputs calculated using a reduced integer precision are different from the scores calculated using the floating point format, for example, `FP32` outputs calculated on CPU using the Inference Engine [CPU Plugin](CPU.md).
Unlike other plugins supporting low-precision execution, the GNA plugin calculates quantization factors at the model loading time, so a model can run without calibration.
Unlike other plugins supporting low-precision execution, the GNA plugin calculates quantization factors at the model loading time, so you can run a model without calibration.
## <a name="execution-models">Execution Modes</a>
## <a name="execution-modes">Execution Modes</a>
| Mode | Description |
| :---------------------------------| :---------------------------------------------------------|
| `GNA_AUTO` | Uses Intel&reg; GNA if available, otherwise uses software execution mode on CPU. |
| `GNA_HW` | Uses Intel&reg; GNA if available, otherwise raises an error. |
| `GNA_SW` | *Deprecated*. Executes the GNA-compiled graph on CPU performing calculations in the same precision as the Intel&reg; GNA, but not in the bit-exact mode. |
| `GNA_SW_EXACT` | Executes the GNA-compiled graph on CPU performing calculations in the same precision as the Intel&reg; GNA in the bit-exact mode. |
| `GNA_AUTO` | Uses Intel® GNA if available, otherwise uses software execution mode on CPU. |
| `GNA_HW` | Uses Intel® GNA if available, otherwise raises an error. |
| `GNA_SW` | *Deprecated*. Executes the GNA-compiled graph on CPU performing calculations in the same precision as the Intel® GNA, but not in the bit-exact mode. |
| `GNA_SW_EXACT` | Executes the GNA-compiled graph on CPU performing calculations in the same precision as the Intel® GNA in the bit-exact mode. |
| `GNA_SW_FP32` | Executes the GNA-compiled graph on CPU but substitutes parameters and calculations from low precision to floating point (`FP32`). |
## Supported Configuration Parameters
@@ -101,42 +101,42 @@ Unlike other plugins supporting low-precision execution, the GNA plugin calculat
The plugin supports the configuration parameters listed below.
The parameters are passed as `std::map<std::string, std::string>` on `InferenceEngine::Core::LoadNetwork` or `InferenceEngine::SetConfig`.
The parameter `KEY_GNA_DEVICE_MODE` can also be changed at run time using `InferenceEngine::ExecutableNetwork::SetConfig` (for any values excluding `GNA_SW_FP32`). This allows switching the
You can change the `KEY_GNA_DEVICE_MODE` parameter at run time using `InferenceEngine::ExecutableNetwork::SetConfig`, which works for any value excluding `GNA_SW_FP32`. This enables you to switch the
execution between software emulation mode and hardware emulation mode after the model is loaded.
The parameter names below correspond to their usage through API keys, such as `GNAConfigParams::KEY_GNA_DEVICE_MODE` or `PluginConfigParams::KEY_PERF_COUNT`.
When specifying key values as raw strings (that is, when using Python API), omit the `KEY_` prefix.
When specifying key values as raw strings, that is, when using Python API, omit the `KEY_` prefix.
| Parameter Name | Parameter Values | Default Value | Description |
| :---------------------------------| :---------------------------------------------------------| :-----------| :------------------------------------------------------------------------|
| `KEY_GNA_COMPACT_MODE` | `YES`/`NO` | `YES` | Reuse I/O buffers to save space (makes debugging harder) |
| `KEY_GNA_SCALE_FACTOR` | `FP32` number | 1.0 | Scale factor to use for input quantization |
| `KEY_GNA_DEVICE_MODE` | `GNA_AUTO`/`GNA_HW`/`GNA_SW_EXACT`/`GNA_SW_FP32` | `GNA_AUTO` | One of the modes described <a name="execution-models">Execution Models</a> |
| `KEY_GNA_FIRMWARE_MODEL_IMAGE` | `std::string` | `""` | Name for embedded model binary dump file |
| `KEY_GNA_PRECISION` | `I16`/`I8` | `I16` | Hint to GNA plugin: preferred integer weight resolution for quantization |
| `KEY_PERF_COUNT` | `YES`/`NO` | `NO` | Turn on performance counters reporting |
| `KEY_GNA_LIB_N_THREADS` | 1-127 integer number | 1 | Sets the number of GNA accelerator library worker threads used for inference computation in software modes
| `KEY_GNA_COMPACT_MODE` | `YES`/`NO` | `YES` | Enables I/O buffers reuse to save space. Makes debugging harder. |
| `KEY_GNA_SCALE_FACTOR` | `FP32` number | 1.0 | Sets the scale factor to use for input quantization. |
| `KEY_GNA_DEVICE_MODE` | `GNA_AUTO`/`GNA_HW`/`GNA_SW_EXACT`/`GNA_SW_FP32` | `GNA_AUTO` | One of the modes described in <a href="#execution-modes">Execution Modes</a> |
| `KEY_GNA_FIRMWARE_MODEL_IMAGE` | `std::string` | `""` | Sets the name for the embedded model binary dump file. |
| `KEY_GNA_PRECISION` | `I16`/`I8` | `I16` | Sets the preferred integer weight resolution for quantization. |
| `KEY_PERF_COUNT` | `YES`/`NO` | `NO` | Turns on performance counters reporting. |
| `KEY_GNA_LIB_N_THREADS` | 1-127 integer number | 1 | Sets the number of GNA accelerator library worker threads used for inference computation in software modes.
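
To illustrate how these parameters are supplied, the sketch below loads a network on the GNA device with a configuration map. It is only a minimal example: the model path, the chosen device mode, and the decision to enable performance counters are assumptions rather than requirements.

```cpp
#include <map>
#include <string>
#include <inference_engine.hpp>
#include <gna/gna_config.hpp>

using namespace InferenceEngine;

int main() {
    Core core;
    CNNNetwork network = core.ReadNetwork("model.xml");  // placeholder model path

    // Keys correspond to the GNAConfigParams / PluginConfigParams entries described above.
    std::map<std::string, std::string> config = {
        {GNAConfigParams::KEY_GNA_DEVICE_MODE, "GNA_AUTO"},            // fall back to CPU emulation if no GNA hardware
        {PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES}  // report performance counters
    };

    ExecutableNetwork executableNet = core.LoadNetwork(network, "GNA", config);
    return 0;
}
```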
## How to Interpret Performance Counters
As a result of collecting performance counters using `InferenceEngine::InferRequest::GetPerformanceCounts`, you can find various performance data about execution on GNA.
Returned map stores a counter description as a key, counter value is stored in the `realTime_uSec` field of the `InferenceEngineProfileInfo` structure. Current GNA implementation calculates counters for the whole utterance scoring and does not provide per-layer information. API allows to retrieve counter units in cycles, but they can be converted to seconds as follows:
Returned map stores a counter description as a key, and a counter value in the `realTime_uSec` field of the `InferenceEngineProfileInfo` structure. Current GNA implementation calculates counters for the whole utterance scoring and does not provide per-layer information. The API enables you to retrieve counter units in cycles, you can convert cycles to seconds as follows:
```
seconds = cycles / frequency
```
Refer to the table below to learn about the frequency of Intel&reg; GNA inside a particular processor.
Processor | Frequency of Intel&reg; GNA
Refer to the table below to learn about the frequency of Intel® GNA inside a particular processor.
Processor | Frequency of Intel® GNA
---|---
Intel&reg; Ice Lake processors| 400MHz
Intel&reg; Core&trade; i3-8121U processor| 400MHz
Intel&reg; Gemini Lake processors | 200MHz
Intel® Ice Lake processors| 400MHz
Intel® Core™ i3-8121U processor| 400MHz
Intel® Gemini Lake processors | 200MHz
Performance counters provided for the time being:
* Scoring request performance results
* Number of total cycles spent on scoring in hardware (including compute and memory stall cycles)
* Number of total cycles spent on scoring in hardware including compute and memory stall cycles
* Number of stall cycles spent in hardware
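
As a worked illustration of the conversion above, the sketch below prints each counter in seconds. The 400 MHz frequency (an Ice Lake value from the table) and the already-created infer request are assumptions to adjust for your platform.

```cpp
#include <iostream>
#include <inference_engine.hpp>

// Assumes a 400 MHz GNA frequency (see the table above) and an existing infer request.
void printGnaCounters(InferenceEngine::InferRequest &request, double gnaFrequencyHz = 400e6) {
    auto counters = request.GetPerformanceCounts();  // map of counter description to InferenceEngineProfileInfo
    for (const auto &counter : counters) {
        // realTime_uSec carries the raw cycle count for whole-utterance scoring
        double seconds = static_cast<double>(counter.second.realTime_uSec) / gnaFrequencyHz;
        std::cout << counter.first << ": " << seconds << " s" << std::endl;
    }
}
```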
## Multithreading Support in GNA Plugin
@@ -151,40 +151,40 @@ The GNA plugin supports the following configuration parameters for multithreadin
## Network Batch Size
Intel&reg; GNA plugin supports the processing of context-windowed speech frames in batches of 1-8 frames in one
Intel® GNA plugin supports the processing of context-windowed speech frames in batches of 1-8 frames in one
input blob using `InferenceEngine::ICNNNetwork::setBatchSize`. Increasing batch size only improves efficiency of `Fully Connected` layers.
> **NOTE**: For networks with `Convolutional`, `LSTM`, or `Memory` layers, the only supported batch size is 1.
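
For example, batching can be requested on the network before it is loaded, as in the short sketch below; the batch size of 8 and the model path are placeholders.

```cpp
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");  // placeholder path
network.setBatchSize(8);  // batch 8 context-windowed frames; benefits Fully Connected layers only
InferenceEngine::ExecutableNetwork executableNet = core.LoadNetwork(network, "GNA");
```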
## Compatibility with Heterogeneous Plugin
Heterogeneous plugin was tested with the Intel&reg; GNA as a primary device and CPU as a secondary device. To run inference of networks with layers unsupported by the GNA plugin (for example, Softmax), use the Heterogeneous plugin with the `HETERO:GNA,CPU` configuration. For the list of supported networks, see the [Supported Frameworks](#supported-frameworks).
Heterogeneous plugin was tested with the Intel® GNA as a primary device and CPU as a secondary device. To run inference of networks with layers unsupported by the GNA plugin, such as Softmax, use the Heterogeneous plugin with the `HETERO:GNA,CPU` configuration.
> **NOTE:** Due to limitation of the Intel&reg; GNA backend library, heterogenous support is limited to cases where in the resulted sliced graph, only one subgraph is scheduled to run on GNA\_HW or GNA\_SW devices.
> **NOTE:** Due to limitation of the Intel® GNA backend library, heterogenous support is limited to cases where in the resulted sliced graph, only one subgraph is scheduled to run on GNA\_HW or GNA\_SW devices.
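
A hedged sketch of that configuration: loading the network through the HETERO device so that layers unsupported by GNA fall back to the CPU. The `core` and `network` objects are assumed to exist as in the earlier examples.

```cpp
// Layers the GNA plugin cannot run (for example, Softmax) are scheduled on the CPU.
InferenceEngine::ExecutableNetwork executableNet = core.LoadNetwork(network, "HETERO:GNA,CPU");
```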
## Recovery from interruption by high-priority Windows audio processes\*
## Recovery from Interruption by High-Priority Windows Audio Processes\*
As noted in the introduction, GNA is designed for real-time workloads such as noise reduction.
GNA is designed for real-time workloads such as noise reduction.
For such workloads, processing should be time constrained, otherwise extra delays may cause undesired effects such as
audio "glitches". To make sure that processing can satisfy real time requirements, the GNA driver provides a QoS
(Quality of Service) mechanism which interrupts requests that might cause high-priority Windows audio processes to miss
schedule, thereby causing long running GNA tasks to terminate early.
*audio glitches*. To make sure that processing can satisfy real-time requirements, the GNA driver provides a Quality of Service
(QoS) mechanism, which interrupts requests that might cause high-priority Windows audio processes to miss
the schedule, thereby causing long running GNA tasks to terminate early.
Applications should be prepared for this situation.
If an inference (in `GNA_HW` mode) cannot be executed because of such an interruption, then `InferRequest::Wait()` will return status code
`StatusCode::INFER_NOT_STARTED` (note that it will be changed to a more meaningful status code in future releases).
If an inference in the `GNA_HW` mode cannot be executed because of such an interruption, then `InferRequest::Wait()` returns status code
`StatusCode::INFER_NOT_STARTED`. In future releases, it will be changed to a more meaningful status code.
Any application working with GNA must properly react if it receives this code. Various strategies are possible.
One of the options is to immediately switch to GNA SW emulation mode:
Any application working with GNA must properly react to this code.
One of the strategies to adapt an application:
1. Immediately switch to the GNA_SW emulation mode:
```cpp
std::map<std::string, Parameter> newConfig;
newConfig[GNAConfigParams::KEY_GNA_DEVICE_MODE] = Parameter("GNA_SW_EXACT");
executableNet.SetConfig(newConfig);
```
then resubmit and switch back to GNA_HW after some time hoping that the competing application has finished.
2. Resubmit and switch back to GNA_HW expecting that the competing application has finished.
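
A sketch of how an application might combine both steps is shown below; the request and executable network names are placeholders, and the exact recovery policy (when to retry on hardware) is left to the application.

```cpp
InferenceEngine::StatusCode status =
    inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
if (status == InferenceEngine::StatusCode::INFER_NOT_STARTED) {
    // The request was pre-empted by a high-priority audio process: switch to software emulation.
    std::map<std::string, InferenceEngine::Parameter> newConfig;
    newConfig[InferenceEngine::GNAConfigParams::KEY_GNA_DEVICE_MODE] = InferenceEngine::Parameter("GNA_SW_EXACT");
    executableNet.SetConfig(newConfig);
    inferRequest.StartAsync();  // resubmit; the application may later switch back to GNA_HW
}
```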
## See Also

View File

@@ -2,15 +2,15 @@
## Introducing HDDL Plugin
The Inference Engine HDDL plugin is developed for inference of neural networks on Intel&reg; Vision Accelerator Design with Intel&reg; Movidius&trade; VPUs which is designed for use cases those require large throughput of deep learning inference. It provides dozens amount of throughput as MYRIAD Plugin.
The Inference Engine HDDL plugin is developed for inference of neural networks on the Intel&reg; Vision Accelerator Design with Intel&reg; Movidius&trade; VPUs. It is designed for use cases which require large throughputs of deep learning inference. It provides dozens of times the throughput as the MYRIAD Plugin does.
## Installation on Linux* OS
For installation instructions, refer to the [Installation Guide for Linux\*](VPU.md).
For installation instructions, refer to the [Installation Guide for Linux*](VPU.md).
## Installation on Windows* OS
For installation instructions, refer to the [Installation Guide for Windows\*](Supported_Devices.md).
For installation instructions, refer to the [Installation Guide for Windows*](Supported_Devices.md).
## Supported networks
@@ -30,7 +30,7 @@ In addition to common parameters for Myriad plugin and HDDL plugin, HDDL plugin
| KEY_VPU_HDDL_STREAM_ID | string | empty string | Allows to execute inference on a specified device. |
| KEY_VPU_HDDL_DEVICE_TAG | string | empty string | Allows to allocate/deallocate networks on specified devices. |
| KEY_VPU_HDDL_BIND_DEVICE | YES/NO | NO | Whether the network should bind to a device. Refer to vpu_plugin_config.hpp. |
| KEY_VPU_HDDL_RUNTIME_PRIORITY | singed int | 0 | Specify the runtime priority of a device among all devices that running a same network Refer to vpu_plugin_config.hpp. |
| KEY_VPU_HDDL_RUNTIME_PRIORITY | singed int | 0 | Specify the runtime priority of a device among all devices that are running the same network. Refer to vpu_plugin_config.hpp. |
## See Also

View File

@@ -6,11 +6,12 @@ The Inference Engine MYRIAD plugin is developed for inference of neural networks
## Installation on Linux* OS
For installation instructions, refer to the [Installation Guide for Linux*](../../../inference-engine/samples/benchmark_app/README.md).
For installation instructions, refer to the [Installation Guide for Linux*](../../install_guides/installing-openvino-linux.md).
## Installation on Windows* OS
For installation instructions, refer to the [Installation Guide for Windows*](../../../inference-engine/samples/benchmark_app/README.md).
For installation instructions, refer to the [Installation Guide for Windows*](../../install_guides/installing-openvino-windows.md).
## Supported networks

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05eb8600d2c905975674f3a0a5dc676107d22f65f2a1f78ee1cfabc1771721ea
size 41307

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17cd470c6d04d7aabbdb4a08e31f9c97eab960cf7ef5bbd3a541df92db38f26b
size 40458

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80297287c81a2f27b7e74895738afd90844354a8dd745757e8321e2fb6ed547e
size 31246

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b206c602626f17ba5787810b9a28f9cde511448c3e63a5c7ba976cee7868bdb
size 14907

View File

@@ -12,6 +12,13 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
* <code>.bin</code> - Contains the weights and biases binary data.
> **TIP**: You also can work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
> performance of deep learning models on various Intel® architecture
> configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components.
> <br>
> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## What's New in the Model Optimizer in this Release?
* Common changes:
@@ -63,8 +70,6 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
## Table of Content
* [Introduction to OpenVINO™ Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
* [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
* [Configuring Model Optimizer](prepare_model/Config_Model_Optimizer.md)
* [Converting a Model to Intermediate Representation (IR)](prepare_model/convert_model/Converting_Model.md)
@@ -107,4 +112,4 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
* [Known Issues](Known_Issues_Limitations.md)
**Typical Next Step:** [Introduction to Intel® Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
**Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)

View File

@@ -242,4 +242,8 @@ To differentiate versions of the same operation type, like `ReLU`, the suffix `-
`N` usually refers to the first `opsetN` where this version of the operation is introduced.
It is not guaranteed that new operations will be named according to that rule, the naming convention might be changed, but not for old operations which are frozen completely.
---
## See Also
* [Cut Off Parts of a Model](prepare_model/convert_model/Cutting_Model.md)

View File

@@ -45,3 +45,8 @@ Possible workaround is to upgrade default protobuf compiler (libprotoc 2.5.0) to
libprotoc 2.6.1.
[protobuf_issue]: https://github.com/google/protobuf/issues/4272
---
## See Also
* [Known Issues and Limitations in the Inference Engine](../IE_DG/Known_Issues_Limitations.md)

View File

@@ -260,6 +260,14 @@ python3 -m easy_install dist/protobuf-3.6.1-py3.6-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```
---
## See Also
docs\MO_DG\prepare_model\Config_Model_Optimizer.md
docs\install_guides\installing-openvino-raspbian.md
* [Converting a Model to Intermediate Representation (IR)](convert_model/Converting_Model.md)
* [Install OpenVINO™ toolkit for Raspbian* OS](../../install_guides/installing-openvino-raspbian.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10](../../install_guides/installing-openvino-windows.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* with FPGA Support](../../install_guides/installing-openvino-windows-fpga.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for macOS*](../../install_guides/installing-openvino-macos.md)
* [Configuration Guide for the Intel® Distribution of OpenVINO™ toolkit 2020.4 and the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA SG2 (IEI's Mustang-F100-A10) on Linux* ](../../install_guides/VisionAcceleratorFPGA_Configure.md)

View File

@@ -615,3 +615,16 @@ You need to specify values for each input of the model. For more information, re
#### 102. What does the message "Operation _contrib_box_nms is not supported ..." mean? <a name="question-102"></a>
It means that you trying to convert the topology which contains '_contrib_box_nms' operation which is not supported directly. However the sub-graph of operations including the '_contrib_box_nms' could be replaced with DetectionOutput layer if your topology is one of the gluoncv topologies. Specify '--enable_ssd_gluoncv' command line parameter for the Model Optimizer to enable this transformation.
\htmlonly
<script>
window.addEventListener('load', function(){
var questionID = getURLParameter('question'); /* this function is defined in openvino-layout.js */
if (questionID) {
window.location = window.location.pathname + '#' + encodeURI(questionID);
}
});
</script>
\endhtmlonly

View File

@@ -144,3 +144,13 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe\* models
* Which Caffe\* models are supported
* How to convert a trained Caffe\* model using the Model Optimizer with both framework-agnostic and Caffe-specific command-line options
---
## See Also
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
* [Custom Layers in the Model Optimizer ](../customize_model_optimizer/Customize_Model_Optimizer.md)

View File

@@ -106,3 +106,12 @@ must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r
## Supported Kaldi\* Layers
Refer to [Supported Framework Layers ](../Supported_Frameworks_Layers.md) for the list of supported standard layers.
---
## See Also
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers Guide](../../../HOWTO/Custom_Layers_Guide.md)

View File

@@ -103,3 +103,12 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with MXNet\* models
* Which MXNet\* models are supported
* How to convert a trained MXNet\* model using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options
---
## See Also
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers in the Model Optimizer](../customize_model_optimizer/Customize_Model_Optimizer.md)

View File

@@ -78,3 +78,12 @@ There are no ONNX\* specific parameters, so only [framework-agnostic parameters]
## Supported ONNX\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.
---
## See Also
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Convert TensorFlow* BERT Model to the Intermediate Representation ](tf_specific/Convert_BERT_From_Tensorflow.md)

View File

@@ -375,3 +375,12 @@ In this document, you learned:
* Which TensorFlow models are supported
* How to freeze a TensorFlow model
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options
---
## See Also
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)

View File

@@ -233,3 +233,13 @@ Otherwise, it will be casted to data type passed to `--data_type` parameter (by
```sh
python3 mo.py --input_model FaceNet.pb --input "placeholder_layer_name->[0.1 1.2 2.3]"
```
---
## See Also
* [Converting a Cafee* Model](Convert_Model_From_Caffe.md)
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Using Shape Inference](../../../IE_DG/ShapeInference.md)

View File

@@ -389,4 +389,11 @@ In this case, when `--input_shape` is specified and the node contains multiple i
The correct command line is:
```sh
python3 mo.py --input_model=inception_v1.pb --input=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3]
```
```
---
## See Also
* [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
* [Extending the Model Optimizer with New Primitives](../customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)

View File

@@ -34,4 +34,11 @@ Weights compression leaves `FakeQuantize` output arithmetically the same and wei
See the visualization of `Convolution` with the compressed weights:
![](../../img/compressed_int8_Convolution_weights.png)
Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default. To generate an expanded INT8 IR, use `--disable_weights_compression`.
Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default. To generate an expanded INT8 IR, use `--disable_weights_compression`.
---
## See Also
* [Quantization](@ref pot_compression_algorithms_quantization_README)
* [Optimization Guide](../../../optimization_guide/dldt_optimization_guide.md)
* [Low Precision Optimization Guide](@ref pot_docs_LowPrecisionOptimizationGuide)

View File

@@ -110,3 +110,8 @@ speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o predicti
Results can be decoded as described in "Use of Sample in Kaldi* Speech Recognition Pipeline" chapter
in [the Speech Recognition Sample description](../../../../../inference-engine/samples/speech_sample/README.md).
---
## See Also
* [Converting a Kaldi Model](../Convert_Model_From_Kaldi.md)

View File

@@ -19,6 +19,7 @@ Measuring inference performance involves many variables and is extremely use-cas
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-datalabels"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/chartjs-plugin-annotation/0.5.7/chartjs-plugin-annotation.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-barchart-background@1.3.0/build/Plugin.Barchart.Background.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-deferred@1"></script>
<!-- download this file and place on your server (or include the styles inline) -->
<link rel="stylesheet" href="ovgraphs.css" type="text/css">
\endhtmlonly
@@ -129,7 +130,7 @@ Testing by Intel done on: see test date for each HW platform below.
| | Intel® Core™ i5-8500 | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | 11th Gen Intel® Core™ i5-1145G7E |
| -------------------- | ---------------------------------- | ----------------------------------- |--------------------------------------|-----------------------------------|
| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | Intel Corporation /<br>TigerLake U DDR4 SODIMM RVP |
| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | Intel Corporation<br>internal/Reference Validation Platform |
| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | 11th Gen Intel® Core™ i5-1145G7E @ 2.60GHz |
| Hyper Threading | OFF | ON | ON | ON |
| Turbo Setting | ON | ON | ON | ON |

View File

@@ -51,19 +51,16 @@ We published a set of guidelines and recommendations to optimize your models ava
#### 9. Why are INT8 optimized models used for benchmarking on CPUs with no VNNI support?
The benefit of low-precision optimization using the OpenVINO™ toolkit model optimizer extends beyond processors supporting VNNI through Intel® DL Boost. The reduced bit width of INT8 compared to FP32 allows Intel® CPU to process the data faster and thus offers better throughput on any converted model agnostic of the intrinsically supported low-precision optimizations within Intel® hardware. Please refer to [INT8 vs. FP32 Comparison on Select Networks and Platforms](./performance_int8_vs_fp32.html) for comparison on boost factors for different network models and a selection of Intel® CPU architectures, including AVX-2 with Intel® Core™ i7-8700T, and AVX-512 (VNNI) with Intel® Xeon® 5218T and Intel® Xeon® 8270.
#### 10. Previous releases included benchmarks on googlenet-v1. Why is there no longer benchmarks on this neural network model?
We replaced googlenet-v1 to [resnet-18-pytorch](https://github.com/opencv/open_model_zoo/blob/master/models/public/resnet-18-pytorch/resnet-18-pytorch.md) due to changes in developer usage. The public model resnet-18 is used by many developers as an Image Classification model. This pre-optimized model was also trained on the ImageNet database, similar to googlenet-v1. Both googlenet-v1 and resnet-18 will remain part of the Open Model Zoo. Developers are encouraged to utilize resnet-18-pytorch for Image Classification use cases.
#### 11. Previous releases included benchmarks on googlenet-v1-CF (Caffe). Why is there no longer benchmarks on this neural network model?
#### 10. Previous releases included benchmarks on googlenet-v1-CF (Caffe). Why is there no longer benchmarks on this neural network model?
We replaced googlenet-v1-CF to resnet-18-pytorch due to changes in developer usage. The public model resnet-18 is used by many developers as an Image Classification model. This pre-optimized model was also trained on the ImageNet database, similar to googlenet-v1-CF. Both googlenet-v1-CF and resnet-18 will remain part of the Open Model Zoo. Developers are encouraged to utilize resnet-18-pytorch for Image Classification use cases.
#### 12. Why have resnet-50-CF, mobilenet-v1-1.0-224-CF, mobilenet-v2-CF and resnet-101-CF been removed?
#### 11. Why have resnet-50-CF, mobilenet-v1-1.0-224-CF, mobilenet-v2-CF and resnet-101-CF been removed?
The Caffe* versions of resnet-50, mobilenet-v1-1.0-224 and mobilenet-v2 have been replaced with their TensorFlow and PyTorch counterparts. Resnet-50-CF is replaced by resnet-50-TF, mobilenet-v1-1.0-224-CF is replaced by mobilenet-v1-1.0-224-TF, and mobilenet-v2-CF is replaced by mobilenetv2-PyTorch. Resnet-50-CF and resnet-101-CF are no longer maintained at their public source repos.
#### 13. Where can I search for OpenVINO™ performance results based on HW-platforms?
#### 12. Where can I search for OpenVINO™ performance results based on HW-platforms?
The website format has changed to support the more common search approach of looking for the performance of a given neural network model on different HW platforms, as opposed to reviewing a given HW platform's performance on different neural network models.
#### 14. How is Latency measured?
#### 13. How is Latency measured?
Latency is measured by running the OpenVINO™ Inference Engine in synchronous mode. In synchronous mode, each frame or image is processed through the entire set of stages (pre-processing, inference, post-processing) before the next frame or image is processed. This KPI is relevant for applications where inference on a single image is required, for example, the analysis of an ultrasound image in a medical application or the analysis of a seismic image in the oil & gas industry. Other use cases include real-time or near real-time applications, such as an industrial robot's response to changes in its environment and obstacle avoidance for autonomous vehicles, where a quick response to the result of the inference is required.
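
The synchronous measurement described above can be sketched with the 2021-era Inference Engine Python API, as below. This is a minimal, illustrative sketch: the `model.xml`/`model.bin` paths and the random input are placeholders, and it times only the blocking `infer()` call, whereas the full KPI definition above also covers per-frame pre- and post-processing.

```python
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR paths
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
shape = net.input_info[input_blob].input_data.shape            # e.g. [1, 3, 224, 224]
frame = np.random.rand(*shape).astype(np.float32)              # stand-in for a real, pre-processed image

latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    exec_net.infer(inputs={input_blob: frame})                 # synchronous: blocks until this frame is done
    latencies_ms.append((time.perf_counter() - start) * 1000)

print("median latency: %.2f ms" % np.median(latencies_ms))
```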
\htmlonly

View File

@@ -10,7 +10,7 @@ The table below illustrates the speed-up factor for the performance gain by swit
<th>Intel® Xeon® <br>Gold <br>5218T</th>
<th>Intel® Xeon® <br>Platinum <br>8270</th>
<th>Intel® Core™ <br>i7-1065G7</th>
<th>Intel® Core™ <br>i7-1145G7E</th>
<th>Intel® Core™ <br>i5-1145G7E</th>
</tr>
<tr align="left">
<th>OpenVINO <br>benchmark <br>model name</th>

View File

@@ -72,9 +72,9 @@ def process(docs_folder):
md_folder = os.path.dirname(md_file)
with open(md_file, 'r', encoding='utf-8') as f:
content = f.read()
inline_links = set(re.findall(r'!?\[.*?\]\(([\w\/\-\.]+\.(md|png|jpg|gif))\)', content, flags=re.IGNORECASE))
inline_links = set(re.findall(r'!?\[.*?\]\(([\w\/\-\.]+\.(md|png|jpg|gif|svg))\)', content, flags=re.IGNORECASE))
github_md_links = set(re.findall(r'(\[(.+?)\]\((https:[\w\.\/-]+?\.md)\))', content, flags=re.IGNORECASE))
reference_links = set(re.findall(r'\[.+\]\:\s*?([\w\/\-\.]+\.(md|png|jpg|gif))', content, flags=re.IGNORECASE))
reference_links = set(re.findall(r'\[.+\]\:\s*?([\w\/\-\.]+\.(md|png|jpg|gif|svg))', content, flags=re.IGNORECASE))
content = replace_links(content, inline_links, md_folder, labels, docs_folder)
content = replace_links(content, reference_links, md_folder, labels, docs_folder)
content = process_github_md_links(content, github_md_links)
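
The only change in this hunk is that both regular expressions now also accept `.svg` link targets. A quick, self-contained check of the updated inline-link pattern (the sample Markdown string below is made up purely for illustration):

```python
import re

pattern = r'!?\[.*?\]\(([\w\/\-\.]+\.(md|png|jpg|gif|svg))\)'
sample = "See ![flow](img/pipeline.svg) and [guide](docs/intro.md)."
links = set(re.findall(pattern, sample, flags=re.IGNORECASE))
print(links)  # contains ('img/pipeline.svg', 'svg') and ('docs/intro.md', 'md')
```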

View File

@@ -3,12 +3,10 @@
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="../index.html"/>
<tab type="user" title="GETTING STARTED" url="../index.html"/>
<tab type="user" title="HOW TOs" url="../openvino_docs_how_tos_how_to_links.html"/>
<tab type="user" title="GUIDES" url="../openvino_docs_IE_DG_Introduction.html"/>
<tab type="user" title="RESOURCES" url="../openvino_docs_resources_introduction.html"/>
<tab type="user" title="PERFORMANCE BENCHMARKS" url="../openvino_docs_performance_benchmarks.html"/>
<tab type="usergroup" title="API REFERENCES" url="../usergroup14.html">
<tab type="user" title="Get Started" url="../index.html"/>
<tab type="user" title="Documentation" url="../documentation.html"/>
<tab type="user" title="Examples" url="../examples.html"/>
<tab type="usergroup" title="API REFERENCES" url="../api_references.html">
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
@@ -31,6 +29,10 @@
<tab type="user" title="Inference Engine Python API Reference" url="../ie_python_api/annotated.html"/>
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
<!-- nGraph C++ API -->
<tab type="user" title="nGraph C++ API Reference" url="../ngraph_cpp_api/annotated.html"/>
<!-- nGraph Python API -->
<tab type="user" title="nGraph Python API Reference" url="../ngraph_python_api/files.html"/>
</tab>
<!-- Chinese docs -->
<tab type="user" title="中文文件" url="https://docs.openvinotoolkit.org/cn/index.html"/>

View File

@@ -1,54 +1,7 @@
<doxygenlayout xmlns:xi="http://www.w3.org/2001/XInclude" version="1.0">
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="@ref index"/>
<!-- GET STARTED category -->
<tab type="usergroup" title="GET STARTED" url="@ref index">
<tab type="user" title="OpenVINO Toolkit Overview" url="@ref index"/>
<!-- Install Directly -->
<tab type="usergroup" title="Install Directly" url=""><!--automatically generated-->
<tab type="user" title="Linux" url="@ref openvino_docs_install_guides_installing_openvino_linux"/>
<tab type="user" title="Windows" url="@ref openvino_docs_install_guides_installing_openvino_windows"/>
<tab type="user" title="macOS" url="@ref openvino_docs_install_guides_installing_openvino_macos"/>
<tab type="user" title="Raspbian OS" url="@ref openvino_docs_install_guides_installing_openvino_raspbian"/>
</tab>
<!-- Install From Images and Repositories -->
<tab type="usergroup" title="Install From Images and Repositories" url=""><!--automatically generated-->
<tab type="usergroup" title="Docker" url="@ref openvino_docs_install_guides_installing_openvino_docker_linux">
<tab type="user" title="Install Intel&#174; Distribution of OpenVINO&#8482; toolkit for Linux* from a Docker* Image" url="@ref openvino_docs_install_guides_installing_openvino_docker_linux"/>
<tab type="user" title="Install Intel&#174; Distribution of OpenVINO&#8482; toolkit for Windows* from a Docker* Image" url="@ref openvino_docs_install_guides_installing_openvino_docker_windows"/>
</tab>
<tab type="user" title="APT" url="@ref openvino_docs_install_guides_installing_openvino_apt"/>
<tab type="user" title="YUM" url="@ref openvino_docs_install_guides_installing_openvino_yum"/>
<tab type="user" title="Anaconda Cloud" url="@ref openvino_docs_install_guides_installing_openvino_conda"/>
<tab type="user" title="PIP" url="@ref openvino_docs_install_guides_installing_pip"/>
<tab type="user" title="Yocto" url="@ref openvino_docs_install_guides_installing_openvino_yocto"/>
</tab>
<!-- Get Started Guides-->
<tab type="usergroup" title="Get Started Guides" url=""><!--automatically generated-->
<tab type="user" title="Get Started with OpenVINO&#8482; toolkit on Linux*" url="@ref openvino_docs_get_started_get_started_linux"/>
<tab type="user" title="Get Started with OpenVINO&#8482; toolkit on Windows*" url="@ref openvino_docs_get_started_get_started_windows"/>
<tab type="user" title="Get Started with OpenVINO&#8482; toolkit on macOS*" url="@ref openvino_docs_get_started_get_started_macos"/>
</tab>
<!-- Configuration for Hardware -->
<tab type="usergroup" title="Configuration for Hardware" url=""><!--automatically generated-->
<tab type="user" title="Configuration Guide for the Intel&#174; Distribution of OpenVINO&#8482; toolkit and the Intel&#174; Vision Accelerator Design with Intel&#174; Movidius&#8482; VPUs on Linux*" url="@ref openvino_docs_install_guides_installing_openvino_linux_ivad_vpu"/>
<tab type="user" title="Intel&#174; Movidius&#8482; VPUs Setup Guide for Use with Intel&#174; Distribution of OpenVINO&#8482; toolkit" url="@ref openvino_docs_install_guides_movidius_setup_guide"/>
<tab type="user" title="Intel&#174; Movidius&#8482; VPUs Programming Guide for Use with Intel&#174; Distribution of OpenVINO&#8482; toolkit" url="@ref openvino_docs_install_guides_movidius_programming_guide"/>
</tab>
<!-- Security -->
<tab type="usergroup" title="Security" url="@ref openvino_docs_security_guide_introduction"><!--automatically generated-->
<tab type="user" title="Introduction" url="@ref openvino_docs_security_guide_introduction"/>
<tab type="user" title="Using DL Workbench Securely" url="@ref openvino_docs_security_guide_workbench"/>
<tab type="user" title="Using Encrypted Models" url="@ref openvino_docs_IE_DG_protecting_model_guide"/>
</tab>
</tab>
<!-- DOCUMENTATION category -->
<tab type="usergroup" title="DOCUMENTATION"><!--automatically generated-->
<!-- DLDT Documentation-->
<tab id="converting_and_preparing_models" type="usergroup" title="Converting and Preparing Models" url="">
<tab id="converting_and_preparing_models" type="usergroup" title="Converting and Preparing Models" url="">
<!-- Model Optimizer Developer Guide-->
<tab type="usergroup" title="Model Optimizer Developer Guide" url="@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide">
<tab type="usergroup" title="Preparing and Optimizing Your Trained Model" url="@ref openvino_docs_MO_DG_prepare_model_Prepare_Trained_Model">
@@ -103,8 +56,12 @@
</tab>
<!-- Model Downloader -->
<tab type="user" title="Model Downloader" url="@ref omz_tools_downloader_README"/>
</tab>
<tab id="intermediate_representaton_and_operations_sets" type="usergroup" title="Intermediate Representation and Operations Sets" url="@ref openvino_docs_MO_DG_IR_and_opsets">
<!-- Custom Layers Guide -->
<tab type="usergroup" title="Custom Layers Guide" url="@ref openvino_docs_HOWTO_Custom_Layers_Guide"></tab>
</tab>
<!-- Intermediate Representation and Operations Sets -->
<tab id="intermediate_representaton_and_operations_sets" type="usergroup" title="Intermediate Representation and Operations Sets" url="@ref openvino_docs_MO_DG_IR_and_opsets">
<tab type="usergroup" title="Available Operations Sets" url="@ref openvino_docs_ops_opset">
<tab type="user" title="opset4 Specification" url="@ref openvino_docs_ops_opset4"/>
<tab type="user" title="opset3 Specification" url="@ref openvino_docs_ops_opset3"/>
@@ -261,7 +218,8 @@
<tab type="user" title="VariadicSplit-1" url="@ref openvino_docs_ops_movement_VariadicSplit_1"/>
</tab>
</tab>
<tab id="deploying_inference" type="usergroup" title="Deploying Inference" url="@ref openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide">
<tab id="deploying_inference" type="usergroup" title="Deploying Inference" url="@ref openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide">
<!-- Inference Engine Developer Guide -->
<tab type="usergroup" title="Inference Engine Developer Guide" url="@ref openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide">
<tab type="user" title="Introduction to Inference Engine" url="@ref openvino_docs_IE_DG_inference_engine_intro"/>
@@ -282,7 +240,6 @@
<tab type="user" title="Inference Engine Python* API Overview" url="@ref openvino_inference_engine_ie_bridges_python_docs_api_overview"/>
<tab type="user" title="Read an ONNX model" url="@ref openvino_docs_IE_DG_ONNX_Support"/>
<tab type="user" title="[DEPRECATED] Import an ONNX model" url="@ref openvino_docs_IE_DG_OnnxImporterTutorial"/>
<tab type="user" title="Graph Debug Capabilities" url="@ref openvino_docs_IE_DG_Graph_debug_capabilities"/>
<tab type="user" title="Using Dynamic Batching Feature" url="@ref openvino_docs_IE_DG_DynamicBatching"/>
<tab type="user" title="Using Static Shape Infer Feature" url="@ref openvino_docs_IE_DG_ShapeInference"/>
<tab type="user" title="Using GPU kernels tuning" url="@ref openvino_docs_IE_DG_GPU_Kernels_Tuning"/>
@@ -298,7 +255,7 @@
<tab type="user" title="RemoteBlob API of GPU Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_GPU_RemoteBlob_API"/>
</tab>
<tab type="user" title="CPU Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_CPU"/>
<tab type="user" title="FPGA Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_FPGA"/>
<tab type="user" title="[DEPRECATED] FPGA Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_FPGA"/>
<tab type="usergroup" title="VPU Plugins" url="@ref openvino_docs_IE_DG_supported_plugins_VPU">
<tab type="user" title="MYRIAD Plugin " url="@ref openvino_docs_IE_DG_supported_plugins_MYRIAD"/>
<tab type="user" title="HDDL Plugin " url="@ref openvino_docs_IE_DG_supported_plugins_HDDL"/>
@@ -328,669 +285,30 @@
<!-- Compile Tool -->
<tab type="user" title="Compile Tool" url="@ref openvino_inference_engine_tools_compile_tool_README"/>
<!-- IE C -->
<tab type="user" title="Inference Engine C API Reference" url="ie_c_api/modules.html"/>
<!-- IE C++-->
<tab type="classes" visible="yes" title="Inference Engine &#1057;++ API Reference">
<tab type="classlist" visible="yes" title=""/>
<tab type="hierarchy" visible="yes" title=""/>
<tab type="namespacemembers" visible="yes" title="" intro=""/>
<tab type="pages" visible="no"/>
<tab type="files" visible="no"/>
<tab type="filelist" visible="no"/>
<tab type="globals" visible="no"/>
</tab>
<!-- IE Python -->
<tab type="user" title="Inference Engine Python API Reference" url="ie_python_api/annotated.html"/>
<!-- API References -->
<tab id="api_references" type="usergroup" title="API References">
<!-- IE C -->
<tab type="user" title="Inference Engine C API Reference" url="ie_c_api/modules.html"/>
<!-- IE C++-->
<tab type="classes" visible="yes" title="Inference Engine C++ API Reference" url="annotated.html">
<tab type="classlist" visible="yes" title=""/>
<tab type="hierarchy" visible="yes" title=""/>
<tab type="namespacemembers" visible="yes" title="" intro=""/>
<tab type="pages" visible="no"/>
<tab type="files" visible="no"/>
<tab type="filelist" visible="no"/>
<tab type="globals" visible="no"/>
</tab>
<!-- IE Python -->
<tab type="user" title="Inference Engine Python API Reference" url="ie_python_api/annotated.html"/>
<!-- nGraph C++ -->
<tab type="user" title="nGraph C++ API Reference" url="ngraph_cpp_api/annotated.html"/>
<!-- nGraph Python -->
<tab type="user" title="nGraph Python API Reference" url="ngraph_python_api/files.html"/>
</tab>
<!-- Inference Engine Plugin Development Guide-->
<tab type="user" title="Inference Engine Plugin Development Guide" url="ie_plugin_api/index.html"/>
</tab>
<tab id="custom_layers_guide" type="usergroup" title="Custom Layers Guide" url="@ref openvino_docs_HOWTO_Custom_Layers_Guide"></tab>
<tab id="legal_information" type="usergroup" title="Legal Information" url="@ref openvino_docs_Legal_Information"></tab>
<tab id="api_references" type="usergroup" title="API REFERENCES">
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
<tab type="user" title="OpenCV Developer Guide" url="https://docs.opencv.org/master/"/>
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
</tab>
<!-- Workbench -->
<tab id="deep_learning_workbench" type="usergroup" title="Deep Learning Workbench" url="@ref workbench_docs_Workbench_DG_Introduction">
<tab type="user" title="Introduction to DL Workbench" url="@ref workbench_docs_Workbench_DG_Introduction"/>
<tab type="usergroup" title="DL Workbench Installation Guide" url="@ref workbench_docs_Workbench_DG_Install_Workbench">
<tab type="user" title="Install from Docker Hub*" url="@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub"/>
<tab type="user" title="Install from the Intel&#174; Distribution of OpenVINO&#8482; Toolkit Package" url="@ref workbench_docs_Workbench_DG_Install_from_Package"/>
<tab type="user" title="Enter DL Workbench" url="@ref workbench_docs_Workbench_DG_Authentication"/>
</tab>
<tab type="usergroup" title="DL Workbench Get Started Guide" url="@ref workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets">
<tab type="usergroup" title="Select Models" url="@ref workbench_docs_Workbench_DG_Select_Model">
<tab type="user" title="Import Models" url="@ref workbench_docs_Workbench_DG_Select_Models"/>
<tab type="user" title="Import Frozen TensorFlow* SSD MobileNet v2 COCO Tutorial" url="@ref workbench_docs_Workbench_DG_Import_TensorFlow"/>
<tab type="user" title="Import MXNet* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_MXNet"/>
<tab type="user" title="Import ONNX* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_ONNX"/>
</tab>
<tab type="usergroup" title="Select Datasets" url="@ref workbench_docs_Workbench_DG_Select_Datasets">
<tab type="user" title="Import Datasets" url="@ref workbench_docs_Workbench_DG_Import_Datasets"/>
<tab type="user" title="Generate Datasets" url="@ref workbench_docs_Workbench_DG_Generate_Datasets"/>
<tab type="user" title="Dataset Types" url="@ref workbench_docs_Workbench_DG_Dataset_Types"/>
<tab type="user" title="Download and Cut Datasets" url="@ref workbench_docs_Workbench_DG_Download_and_Cut_Datasets"/>
</tab>
<tab type="user" title="Select Environment" url="@ref workbench_docs_Workbench_DG_Select_Environment"/>
<tab type="user" title="Run Baseline Inference" url="@ref workbench_docs_Workbench_DG_Run_Baseline_Inference"/>
</tab>
<tab type="usergroup" title="DL Workbench Developer Guide" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
<tab type="usergroup" title="Measure and Interpret Model Performance" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
<tab type="user" title="Run Single Inference" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference"/>
<tab type="user" title="Run Group Inference" url="@ref workbench_docs_Workbench_DG_Run_Range_of_Inferences"/>
<tab type="usergroup" title="View Inference Results" url="@ref workbench_docs_Workbench_DG_View_Inference_Results">
<tab type="user" title="Visualize Model" url="@ref workbench_docs_Workbench_DG_Visualize_Model"/>
</tab>
<tab type="user" title="Compare Performance between Two Versions of a Model" url="@ref workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models"/>
</tab>
<tab type="usergroup" title="Tune Model for Enhanced Performance" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization">
<tab type="user" title="INT8 Calibration" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization"/>
<tab type="user" title="Winograd Algorithmic Tuning" url="@ref workbench_docs_Workbench_DG_Winograd_Algorithmic_Tuning"/>
</tab>
<tab type="usergroup" title="Accuracy Measurements" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy">
<tab type="user" title="Measure Accuracy" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy"/>
<tab type="user" title="Configure Accuracy Settings" url="@ref workbench_docs_Workbench_DG_Configure_Accuracy_Settings"/>
</tab>
<tab type="usergroup" title="Remote Profiling" url="@ref workbench_docs_Workbench_DG_Remote_Profiling">
<tab type="user" title="Profile on Remote Machine" url="@ref workbench_docs_Workbench_DG_Profile_on_Remote_Machine"/>
<tab type="user" title="Set Up Target for Remote Profiling" url="@ref workbench_docs_Workbench_DG_Setup_Remote_Target"/>
<tab type="user" title="Register Remote Target in DL Workbench" url="@ref workbench_docs_Workbench_DG_Add_Remote_Target"/>
<tab type="user" title="Remote Machines" url="@ref workbench_docs_Workbench_DG_Remote_Machines"/>
</tab>
<tab type="user" title="Build Application with Deployment Package" url="@ref workbench_docs_Workbench_DG_Deployment_Package"/>
<tab type="user" title="Deploy and Integrate Performance Criteria into Application" url="@ref workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application"/>
<tab type="user" title="Persist Database State" url="@ref workbench_docs_Workbench_DG_Persist_Database"/>
<tab type="user" title="Work with Docker Container" url="@ref workbench_docs_Workbench_DG_Docker_Container"/>
</tab>
<tab type="usergroup" title="DL Workbench Security Guide" url="@ref workbench_docs_Workbench_DG_Configure_TLS">
<tab type="user" title="Configure Transport Layer Security (TLS)" url="@ref workbench_docs_Workbench_DG_Configure_TLS"/>
<tab type="user" title="Configure Authentication Token Saving" url="@ref workbench_docs_Workbench_DG_Configure_Token_Saving"/>
</tab>
<tab type="user" title="Troubleshooting" url="@ref workbench_docs_Workbench_DG_Troubleshooting"/>
</tab>
<!-- Optimization docs -->
<tab id="tuning_for_performance" type="usergroup" title="Tuning for Performance" url="">
<!-- Performance Benchmarks -->
<tab type="usergroup" title="Performance Measures" url="@ref openvino_docs_performance_benchmarks">
<tab type="user" title="Performance Information Frequently Asked Questions" url="@ref openvino_docs_performance_benchmarks_faq"/>
<tab type="user" title="Download Performance Data Spreadsheet in MS Excel* Format" url="https://docs.openvinotoolkit.org/downloads/benchmark_files/OV-2021.1-Download-Excel.xlsx"/>
<tab type="user" title="INT8 vs. FP32 Comparison on Select Networks and Platforms" url="@ref openvino_docs_performance_int8_vs_fp32"/>
</tab>
<tab type="user" title="Performance Optimization Guide" url="@ref openvino_docs_optimization_guide_dldt_optimization_guide"/>
<tab type="usergroup" title="Post-Training Optimization Toolkit" url="@ref pot_README">
<tab type="usergroup" title="Quantization" url="@ref pot_compression_algorithms_quantization_README">
<tab type="user" title="DefaultQuantization Algorithm" url="@ref pot_compression_algorithms_quantization_default_README"/>
<tab type="user" title="AccuracyAwareQuantization Algorithm" url="@ref pot_compression_algorithms_quantization_accuracy_aware_README"/>
<tab type="user" title="TunableQuantization Algorithm" url="@ref pot_compression_algorithms_quantization_tunable_quantization_README"/>
</tab>
<tab type="usergroup" title="Global Optimization" url="@ref pot_compression_optimization_README">
<tab type="usergroup" title="Tree-Structured Parzen Estimator (TPE)" url="@ref pot_compression_optimization_tpe_README">
<tab type="user" title="TPE Multiple Node Configuration Based on MongoDB Database" url="@ref pot_compression_optimization_tpe_multinode"/>
</tab>
</tab>
<tab type="user" title="Low Precision Optimization Guide" url="@ref pot_docs_LowPrecisionOptimizationGuide"/>
<tab type="user" title="Post-Training Optimization Best Practices" url="@ref pot_docs_BestPractices"/>
<tab type="user" title="Use Post-Training Optimization Toolkit API" url="@ref pot_compression_api_README"/>
<tab type="usergroup" title="Configuration File Description" url="@ref pot_configs_README">
<tab type="user" title="How to Run Examples" url="@ref pot_configs_examples_README"/>
</tab>
</tab>
<tab type="usergroup" title="Tuning Utilities" url="">
<tab type="usergroup" title="Accuracy Checker Tool" url="@ref omz_tools_accuracy_checker_README">
<tab type="user" title="Accuracy Checker Sample" url="@ref omz_tools_accuracy_checker_sample_README"/>
<tab type="user" title="Configure Caffe* Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_caffe_launcher_readme"/>
<tab type="user" title="Configure OpenVINO Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_dlsdk_launcher_readme"/>
<tab type="user" title="Configure OpenCV* Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_opencv_launcher_readme"/>
<tab type="user" title="Configure MxNet* Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_mxnet_launcher_readme"/>
<tab type="user" title="Configure TensorFlow* Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_tf_launcher_readme"/>
<tab type="user" title="Configure TensorFlow* Lite Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_tf_lite_launcher_readme"/>
<tab type="user" title="Configure ONNX* Runtime Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_onnx_runtime_launcher_readme"/>
<tab type="user" title="Configure *PyTorch Launcher" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_pytorch_launcher_readme"/>
<tab type="user" title="Adapters" url="@ref omz_tools_accuracy_checker_accuracy_checker_adapters_README"/>
<tab type="user" title="Annotation Converters" url="@ref omz_tools_accuracy_checker_accuracy_checker_annotation_converters_README"/>
<tab type="user" title="Preprocessors" url="@ref omz_tools_accuracy_checker_accuracy_checker_preprocessor_README"/>
<tab type="user" title="Postprocessors" url="@ref omz_tools_accuracy_checker_accuracy_checker_postprocessor_README"/>
<tab type="user" title="Metrics" url="@ref omz_tools_accuracy_checker_accuracy_checker_metrics_README"/>
<tab type="user" title="Custom Evaluators for Accuracy Checker" url="@ref omz_tools_accuracy_checker_accuracy_checker_evaluators_custom_evaluators_README"/>
<tab type="user" title="Readers" url="@ref omz_tools_accuracy_checker_accuracy_checker_data_readers_README"/>
<tab type="user" title="Caffe* Installation Tips" url="@ref omz_tools_accuracy_checker_accuracy_checker_launcher_caffe_installation_readme"/>
</tab>
<tab type="user" title="Using Cross Check Tool for Per-Layer Comparison Between Plugins" url="@ref openvino_inference_engine_tools_cross_check_tool_README"/>
</tab>
<tab type="user" title="Case Studies" url="https://www.intel.com/openvino-success-stories"/>
</tab>
<tab type="usergroup" title="Media Processing">
<!-- OpenVX -->
<tab type="user" title="OpenVX* Developer Guide" url="https://software.intel.com/en-us/openvino-ovx-guide"/>
<!-- OpenCV -->
<tab type="user" title="OpenCV* Developer Guide" url="https://docs.opencv.org/master/"/>
<!-- OpenCL -->
<tab type="user" title="OpenCL&#8482; Developer Guide" url="https://software.intel.com/en-us/openclsdk-devguide"/>
</tab>
</tab>
<!-- EXAMPLES category -->
<tab type="usergroup" title="EXAMPLES" url="examples.html">
<!-- Models and Demos Documentation-->
<tab id="trained_models" type="usergroup" title="Trained Models" url="@ref omz_models_intel_index">
<tab type="usergroup" title="Intel Pre-Trained Models" url="@ref omz_models_intel_index">
<tab type="usergroup" title="Object Detection Models" url="">
<tab type="user" title="faster-rcnn-resnet101-coco-sparse-60-0001" url="@ref omz_models_intel_faster_rcnn_resnet101_coco_sparse_60_0001_description_faster_rcnn_resnet101_coco_sparse_60_0001"/>
<tab type="user" title="face-detection-adas-0001" url="@ref omz_models_intel_face_detection_adas_0001_description_face_detection_adas_0001"/>
<tab type="user" title="face-detection-adas-binary-0001" url="@ref omz_models_intel_face_detection_adas_binary_0001_description_face_detection_adas_binary_0001"/>
<tab type="user" title="face-detection-retail-0004" url="@ref omz_models_intel_face_detection_retail_0004_description_face_detection_retail_0004"/>
<tab type="user" title="face-detection-retail-0005" url="@ref omz_models_intel_face_detection_retail_0005_description_face_detection_retail_0005"/>
<tab type="user" title="face-detection-0100" url="@ref omz_models_intel_face_detection_0100_description_face_detection_0100"/>
<tab type="user" title="face-detection-0102" url="@ref omz_models_intel_face_detection_0102_description_face_detection_0102"/>
<tab type="user" title="face-detection-0104" url="@ref omz_models_intel_face_detection_0104_description_face_detection_0104"/>
<tab type="user" title="face-detection-0105" url="@ref omz_models_intel_face_detection_0105_description_face_detection_0105"/>
<tab type="user" title="face-detection-0106" url="@ref omz_models_intel_face_detection_0106_description_face_detection_0106"/>
<tab type="user" title="person-detection-retail-0002" url="@ref omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002"/>
<tab type="user" title="person-detection-retail-0013" url="@ref omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013"/>
<tab type="user" title="person-detection-action-recognition-0005" url="@ref omz_models_intel_person_detection_action_recognition_0005_description_person_detection_action_recognition_0005"/>
<tab type="user" title="person-detection-action-recognition-0006" url="@ref omz_models_intel_person_detection_action_recognition_0006_description_person_detection_action_recognition_0006"/>
<tab type="user" title="person-detection-action-recognition-teacher-0002" url="@ref omz_models_intel_person_detection_action_recognition_teacher_0002_description_person_detection_action_recognition_teacher_0002"/>
<tab type="user" title="person-detection-raisinghand-recognition-0001" url="@ref omz_models_intel_person_detection_raisinghand_recognition_0001_description_person_detection_raisinghand_recognition_0001"/>
<tab type="user" title="person-detection-0100" url="@ref omz_models_intel_person_detection_0100_description_person_detection_0100"/>
<tab type="user" title="person-detection-0101" url="@ref omz_models_intel_person_detection_0101_description_person_detection_0101"/>
<tab type="user" title="person-detection-0102" url="@ref omz_models_intel_person_detection_0102_description_person_detection_0102"/>
<tab type="user" title="person-detection-0106" url="@ref omz_models_intel_person_detection_0106_description_person_detection_0106"/>
<tab type="user" title="person-detection-asl-0001" url="@ref omz_models_intel_person_detection_asl_0001_description_person_detection_asl_0001"/>
<tab type="user" title="pedestrian-detection-adas-0002" url="@ref omz_models_intel_pedestrian_detection_adas_0002_description_pedestrian_detection_adas_0002"/>
<tab type="user" title="pedestrian-detection-adas-binary-0001" url="@ref omz_models_intel_pedestrian_detection_adas_binary_0001_description_pedestrian_detection_adas_binary_0001"/>
<tab type="user" title="pedestrian-and-vehicle-detector-adas-0001" url="@ref omz_models_intel_pedestrian_and_vehicle_detector_adas_0001_description_pedestrian_and_vehicle_detector_adas_0001"/>
<tab type="user" title="vehicle-detection-adas-0002" url="@ref omz_models_intel_vehicle_detection_adas_0002_description_vehicle_detection_adas_0002"/>
<tab type="user" title="vehicle-detection-adas-binary-0001" url="@ref omz_models_intel_vehicle_detection_adas_binary_0001_description_vehicle_detection_adas_binary_0001"/>
<tab type="user" title="person-vehicle-bike-detection-crossroad-0078" url="@ref omz_models_intel_person_vehicle_bike_detection_crossroad_0078_description_person_vehicle_bike_detection_crossroad_0078"/>
<tab type="user" title="person-vehicle-bike-detection-crossroad-1016" url="@ref omz_models_intel_person_vehicle_bike_detection_crossroad_1016_description_person_vehicle_bike_detection_crossroad_1016"/>
<tab type="user" title="product-detection-0001" url="@ref omz_models_intel_product_detection_0001_description_product_detection_0001"/>
<tab type="user" title="vehicle-license-plate-detection-barrier-0106" url="@ref omz_models_intel_vehicle_license_plate_detection_barrier_0106_description_vehicle_license_plate_detection_barrier_0106"/>
<tab type="user" title="yolo-v2-ava-0001" url="@ref omz_models_intel_yolo_v2_ava_0001_description_yolo_v2_ava_0001"/>
<tab type="user" title="yolo-v2-ava-sparse-35-0001" url="@ref omz_models_intel_yolo_v2_ava_sparse_35_0001_description_yolo_v2_ava_sparse_35_0001"/>
<tab type="user" title="yolo-v2-ava-sparse-70-0001" url="@ref omz_models_intel_yolo_v2_ava_sparse_70_0001_description_yolo_v2_ava_sparse_70_0001"/>
<tab type="user" title="yolo-v2-tiny-ava-0001" url="@ref omz_models_intel_yolo_v2_tiny_ava_0001_description_yolo_v2_tiny_ava_0001"/>
<tab type="user" title="yolo-v2-tiny-ava-sparse-30-0001" url="@ref omz_models_intel_yolo_v2_tiny_ava_sparse_30_0001_description_yolo_v2_tiny_ava_sparse_30_0001"/>
<tab type="user" title="yolo-v2-tiny-ava-sparse-60-0001" url="@ref omz_models_intel_yolo_v2_tiny_ava_sparse_60_0001_description_yolo_v2_tiny_ava_sparse_60_0001"/>
<tab type="user" title="yolo-v2-tiny-vehicle-detection-0001" url="@ref omz_models_intel_yolo_v2_tiny_vehicle_detection_0001_description_yolo_v2_tiny_vehicle_detection_0001"/>
</tab>
<tab type="usergroup" title="Object Recognition Models" url="">
<tab type="user" title="age-gender-recognition-retail-0013" url="@ref omz_models_intel_age_gender_recognition_retail_0013_description_age_gender_recognition_retail_0013"/>
<tab type="user" title="head-pose-estimation-adas-0001" url="@ref omz_models_intel_head_pose_estimation_adas_0001_description_head_pose_estimation_adas_0001"/>
<tab type="user" title="license-plate-recognition-barrier-0001" url="@ref omz_models_intel_license_plate_recognition_barrier_0001_description_license_plate_recognition_barrier_0001"/>
<tab type="user" title="vehicle-attributes-recognition-barrier-0039" url="@ref omz_models_intel_vehicle_attributes_recognition_barrier_0039_description_vehicle_attributes_recognition_barrier_0039"/>
<tab type="user" title="vehicle-attributes-recognition-barrier-0042" url="@ref omz_models_intel_vehicle_attributes_recognition_barrier_0042_description_vehicle_attributes_recognition_barrier_0042"/>
<tab type="user" title="emotions-recognition-retail-0003" url="@ref omz_models_intel_emotions_recognition_retail_0003_description_emotions_recognition_retail_0003"/>
<tab type="user" title="landmarks-regression-retail-0009" url="@ref omz_models_intel_landmarks_regression_retail_0009_description_landmarks_regression_retail_0009"/>
<tab type="user" title="facial-landmarks-35-adas-0002" url="@ref omz_models_intel_facial_landmarks_35_adas_0002_description_facial_landmarks_35_adas_0002"/>
<tab type="user" title="person-attributes-recognition-crossroad-0230" url="@ref omz_models_intel_person_attributes_recognition_crossroad_0230_description_person_attributes_recognition_crossroad_0230"/>
<tab type="user" title="gaze-estimation-adas-0002" url="@ref omz_models_intel_gaze_estimation_adas_0002_description_gaze_estimation_adas_0002"/>
</tab>
<tab type="usergroup" title="Reidentification Models" url="">
<tab type="user" title="person-reidentification-retail-0248" url="@ref omz_models_intel_person_reidentification_retail_0248_description_person_reidentification_retail_0248"/>
<tab type="user" title="person-reidentification-retail-0265" url="@ref omz_models_intel_person_reidentification_retail_0265_description_person_reidentification_retail_0265"/>
<tab type="user" title="person-reidentification-retail-0267" url="@ref omz_models_intel_person_reidentification_retail_0267_description_person_reidentification_retail_0267"/>
<tab type="user" title="person-reidentification-retail-0270" url="@ref omz_models_intel_person_reidentification_retail_0270_description_person_reidentification_retail_0270"/>
</tab>
<tab type="usergroup" title="Semantic Segmentation Models" url="">
<tab type="user" title="road-segmentation-adas-0001" url="@ref omz_models_intel_road_segmentation_adas_0001_description_road_segmentation_adas_0001"/>
<tab type="user" title="semantic-segmentation-adas-0001" url="@ref omz_models_intel_semantic_segmentation_adas_0001_description_semantic_segmentation_adas_0001"/>
<tab type="user" title="icnet-camvid-ava-0001" url="@ref omz_models_intel_icnet_camvid_ava_0001_description_icnet_camvid_ava_0001"/>
<tab type="user" title="icnet-camvid-ava-sparse-30-0001" url="@ref omz_models_intel_icnet_camvid_ava_sparse_30_0001_description_icnet_camvid_ava_sparse_30_0001"/>
<tab type="user" title="icnet-camvid-ava-sparse-60-0001" url="@ref omz_models_intel_icnet_camvid_ava_sparse_60_0001_description_icnet_camvid_ava_sparse_60_0001"/>
<tab type="user" title="unet-camvid-onnx-0001" url="@ref omz_models_intel_unet_camvid_onnx_0001_description_unet_camvid_onnx_0001"/>
</tab>
<tab type="usergroup" title="Instance Segmentation Models" url="">
<tab type="user" title="instance-segmentation-security-0050" url="@ref omz_models_intel_instance_segmentation_security_0050_description_instance_segmentation_security_0050"/>
<tab type="user" title="instance-segmentation-security-0083" url="@ref omz_models_intel_instance_segmentation_security_0083_description_instance_segmentation_security_0083"/>
<tab type="user" title="instance-segmentation-security-0010" url="@ref omz_models_intel_instance_segmentation_security_0010_description_instance_segmentation_security_0010"/>
<tab type="user" title="instance-segmentation-security-1025" url="@ref omz_models_intel_instance_segmentation_security_1025_description_instance_segmentation_security_1025"/>
</tab>
<tab type="usergroup" title="Human Pose Estimation Models" url="">
<tab type="user" title="human-pose-estimation-0001" url="@ref omz_models_intel_human_pose_estimation_0001_description_human_pose_estimation_0001"/>
</tab>
<tab type="usergroup" title="Image Processing" url="">
<tab type="user" title="single-image-super-resolution-1032" url="@ref omz_models_intel_single_image_super_resolution_1032_description_single_image_super_resolution_1032"/>
<tab type="user" title="single-image-super-resolution-1033" url="@ref omz_models_intel_single_image_super_resolution_1033_description_single_image_super_resolution_1033"/>
<tab type="user" title="text-image-super-resolution-0001" url="@ref omz_models_intel_text_image_super_resolution_0001_description_text_image_super_resolution_0001"/>
</tab>
<tab type="usergroup" title="Text Detection" url="">
<tab type="user" title="text-detection-0003" url="@ref omz_models_intel_text_detection_0003_description_text_detection_0003"/>
<tab type="user" title="text-detection-0004" url="@ref omz_models_intel_text_detection_0004_description_text_detection_0004"/>
</tab>
<tab type="usergroup" title="Text Recognition" url="">
<tab type="user" title="text-recognition-0012" url="@ref omz_models_intel_text_recognition_0012_description_text_recognition_0012"/>
<tab type="user" title="handwritten-score-recognition-0003" url="@ref omz_models_intel_handwritten_score_recognition_0003_description_handwritten_score_recognition_0003"/>
<tab type="user" title="handwritten-japanese-recognition-0001" url="@ref omz_models_intel_handwritten_japanese_recognition_0001_description_handwritten_japanese_recognition_0001"/>
</tab>
<tab type="usergroup" title="Text Spotting" url="">
<tab type="user" title="text-spotting-0002" url="@ref omz_models_intel_text_spotting_0002_description_text_spotting_0002"/>
</tab>
<tab type="usergroup" title="Action Recognition Models" url="">
<tab type="user" title="driver-action-recognition-adas-0002" url="@ref omz_models_intel_driver_action_recognition_adas_0002_description_driver_action_recognition_adas_0002"/>
<tab type="user" title="action-recognition-0001" url="@ref omz_models_intel_action_recognition_0001_description_action_recognition_0001"/>
<tab type="user" title="asl-recognition-0004" url="@ref omz_models_intel_asl_recognition_0004_description_asl_recognition_0004"/>
<tab type="user" title="weld-porosity-detection-0001" url="@ref omz_models_intel_weld_porosity_detection_0001_description_weld_porosity_detection_0001"/>
</tab>
<tab type="usergroup" title="Image Retrieval" url="">
<tab type="user" title="image-retrieval-0001" url="@ref omz_models_intel_image_retrieval_0001_description_image_retrieval_0001"/>
</tab>
<tab type="usergroup" title="Compressed Models" url="">
<tab type="user" title="resnet50-binary-0001" url="@ref omz_models_intel_resnet50_binary_0001_description_resnet50_binary_0001"/>
<tab type="user" title="resnet18-xnor-binary-onnx-0001" url="@ref omz_models_intel_resnet18_xnor_binary_onnx_0001_description_resnet18_xnor_binary_onnx_0001"/>
</tab>
<tab type="usergroup" title="Question Answering" url="">
<tab type="user" title="bert-large-uncased-whole-word-masking-squad-fp32-0001" url="@ref omz_models_intel_bert_large_uncased_whole_word_masking_squad_fp32_0001_description_bert_large_uncased_whole_word_masking_squad_fp32_0001"/>
<tab type="user" title="bert-large-uncased-whole-word-masking-squad-int8-0001" url="@ref omz_models_intel_bert_large_uncased_whole_word_masking_squad_int8_0001_description_bert_large_uncased_whole_word_masking_squad_int8_0001"/>
<tab type="user" title="bert-small-uncased-whole-word-masking-squad-0001" url="@ref omz_models_intel_bert_small_uncased_whole_word_masking_squad_0001_description_bert_small_uncased_whole_word_masking_squad_0001"/>
</tab>
</tab>
<tab type="usergroup" title="Public Pre-trained Models Available with OpenVINO&#8482; from Open Model Zoo" url="@ref omz_models_public_index">
<tab type="usergroup" title="Classification" url="">
<tab type="user" title="AlexNet" url="@ref omz_models_public_alexnet_alexnet"/>
<tab type="user" title="CaffeNet" url="@ref omz_models_public_caffenet_caffenet"/>
<tab type="user" title="DenseNet 121" url="@ref omz_models_public_densenet_121_densenet_121"/>
<tab type="user" title="densenet-121-tf" url="@ref omz_models_public_densenet_121_tf_densenet_121_tf"/>
<tab type="user" title="densenet-121-caffe2" url="@ref omz_models_public_densenet_121_caffe2_densenet_121_caffe2"/>
<tab type="user" title="DenseNet 161" url="@ref omz_models_public_densenet_161_densenet_161"/>
<tab type="user" title="densenet-161-tf" url="@ref omz_models_public_densenet_161_tf_densenet_161_tf"/>
<tab type="user" title="DenseNet 169" url="@ref omz_models_public_densenet_169_densenet_169"/>
<tab type="user" title="densenet-169-tf" url="@ref omz_models_public_densenet_169_tf_densenet_169_tf"/>
<tab type="user" title="DenseNet 201" url="@ref omz_models_public_densenet_201_densenet_201"/>
<tab type="user" title="EfficientNet B0" url="@ref omz_models_public_efficientnet_b0_efficientnet_b0"/>
<tab type="user" title="efficientnet-b0-pytorch" url="@ref omz_models_public_efficientnet_b0_pytorch_efficientnet_b0_pytorch"/>
<tab type="user" title="EfficientNet B0 AutoAugment" url="@ref omz_models_public_efficientnet_b0_auto_aug_efficientnet_b0_auto_aug"/>
<tab type="user" title="EfficientNet B5" url="@ref omz_models_public_efficientnet_b5_efficientnet_b5"/>
<tab type="user" title="efficientnet-b5-pytorch" url="@ref omz_models_public_efficientnet_b5_pytorch_efficientnet_b5_pytorch"/>
<tab type="user" title="EfficientNet B7" url="@ref omz_models_public_efficientnet_b7_pytorch_efficientnet_b7_pytorch"/>
<tab type="user" title="EfficientNet B7 AutoAugment" url="@ref omz_models_public_efficientnet_b7_auto_aug_efficientnet_b7_auto_aug"/>
<tab type="user" title="HBONet 1.0" url="@ref omz_models_public_hbonet_1_0_hbonet_1_0"/>
<tab type="user" title="HBONet 0.5" url="@ref omz_models_public_hbonet_0_5_hbonet_0_5"/>
<tab type="user" title="HBONet 0.25" url="@ref omz_models_public_hbonet_0_25_hbonet_0_25"/>
<tab type="user" title="Inception (GoogleNet) V1" url="@ref omz_models_public_googlenet_v1_googlenet_v1"/>
<tab type="user" title="googlenet-v1-tf" url="@ref omz_models_public_googlenet_v1_tf_googlenet_v1_tf"/>
<tab type="user" title="Inception (GoogleNet) V2" url="@ref omz_models_public_googlenet_v2_googlenet_v2"/>
<tab type="user" title="googlenet-v2-tf" url="@ref omz_models_public_googlenet_v2_tf_googlenet_v2_tf"/>
<tab type="user" title="Inception (GoogleNet) V3" url="@ref omz_models_public_googlenet_v3_googlenet_v3"/>
<tab type="user" title="googlenet-v3-pytorch" url="@ref omz_models_public_googlenet_v3_pytorch_googlenet_v3_pytorch"/>
<tab type="user" title="Inception (GoogleNet) V4" url="@ref omz_models_public_googlenet_v4_tf_googlenet_v4_tf"/>
<tab type="user" title="Inception-ResNet V2" url="@ref omz_models_public_inception_resnet_v2_tf_inception_resnet_v2_tf"/>
<tab type="user" title="MobileNet V1 0.25 128" url="@ref omz_models_public_mobilenet_v1_0_25_128_mobilenet_v1_0_25_128"/>
<tab type="user" title="MobileNet V1 0.5 160" url="@ref omz_models_public_mobilenet_v1_0_50_160_mobilenet_v1_0_50_160"/>
<tab type="user" title="MobileNet V1 0.5 224" url="@ref omz_models_public_mobilenet_v1_0_50_224_mobilenet_v1_0_50_224"/>
<tab type="user" title="MobileNet V1 1.0 224" url="@ref omz_models_public_mobilenet_v1_1_0_224_mobilenet_v1_1_0_224"/>
<tab type="user" title="mobilenet-v1-1.0-224-tf" url="@ref omz_models_public_mobilenet_v1_1_0_224_tf_mobilenet_v1_1_0_224_tf"/>
<tab type="user" title="MobileNet V2 1.0 224" url="@ref omz_models_public_mobilenet_v2_mobilenet_v2"/>
<tab type="user" title="mobilenet-v2-1.0-224" url="@ref omz_models_public_mobilenet_v2_1_0_224_mobilenet_v2_1_0_224"/>
<tab type="user" title="mobilenet-v2-pytorch" url="@ref omz_models_public_mobilenet_v2_pytorch_mobilenet_v2_pytorch"/>
<tab type="user" title="MobileNet V2 1.4 224" url="@ref omz_models_public_mobilenet_v2_1_4_224_mobilenet_v2_1_4_224"/>
<tab type="user" title="MobileNet V3 Small 1.0" url="@ref omz_models_public_mobilenet_v3_small_1_0_224_tf_mobilenet_v3_small_1_0_224_tf"/>
<tab type="user" title="MobileNet V3 Large 1.0" url="@ref omz_models_public_mobilenet_v3_large_1_0_224_tf_mobilenet_v3_large_1_0_224_tf"/>
<tab type="user" title="DenseNet 121, alpha=0.125" url="@ref omz_models_public_octave_densenet_121_0_125_octave_densenet_121_0_125"/>
<tab type="user" title="ResNet 26, alpha=0.25" url="@ref omz_models_public_octave_resnet_26_0_25_octave_resnet_26_0_25"/>
<tab type="user" title="ResNet 50, alpha=0.125" url="@ref omz_models_public_octave_resnet_50_0_125_octave_resnet_50_0_125"/>
<tab type="user" title="ResNet 101, alpha=0.125" url="@ref omz_models_public_octave_resnet_101_0_125_octave_resnet_101_0_125"/>
<tab type="user" title="ResNet 200, alpha=0.125" url="@ref omz_models_public_octave_resnet_200_0_125_octave_resnet_200_0_125"/>
<tab type="user" title="ResNeXt 50, alpha=0.25" url="@ref omz_models_public_octave_resnext_50_0_25_octave_resnext_50_0_25"/>
<tab type="user" title="ResNeXt 101, alpha=0.25" url="@ref omz_models_public_octave_resnext_101_0_25_octave_resnext_101_0_25"/>
<tab type="user" title="SE-ResNet 50, alpha=0.125" url="@ref omz_models_public_octave_se_resnet_50_0_125_octave_se_resnet_50_0_125"/>
<tab type="user" title="open-closed-eye-0001" url="@ref omz_models_public_open_closed_eye_0001_description_open_closed_eye_0001"/>
<tab type="user" title="ResNet 18" url="@ref omz_models_public_resnet_18_pytorch_resnet_18_pytorch"/>
<tab type="user" title="ResNet 34" url="@ref omz_models_public_resnet_34_pytorch_resnet_34_pytorch"/>
<tab type="user" title="ResNet 50" url="@ref omz_models_public_resnet_50_resnet_50"/>
<tab type="user" title="resnet-50-pytorch" url="@ref omz_models_public_resnet_50_pytorch_resnet_50_pytorch"/>
<tab type="user" title="resnet-50-caffe2" url="@ref omz_models_public_resnet_50_caffe2_resnet_50_caffe2"/>
<tab type="user" title="resnet-50-tf" url="@ref omz_models_public_resnet_50_tf_resnet_50_tf"/>
<tab type="user" title="ResNet 101" url="@ref omz_models_public_resnet_101_resnet_101"/>
<tab type="user" title="ResNet 152" url="@ref omz_models_public_resnet_152_resnet_152"/>
<tab type="user" title="SE-Inception" url="@ref omz_models_public_se_inception_se_inception"/>
<tab type="user" title="SE-ResNet 50" url="@ref omz_models_public_se_resnet_50_se_resnet_50"/>
<tab type="user" title="SE-ResNet 101" url="@ref omz_models_public_se_resnet_101_se_resnet_101"/>
<tab type="user" title="SE-ResNet 152" url="@ref omz_models_public_se_resnet_152_se_resnet_152"/>
<tab type="user" title="SE-ResNeXt 50" url="@ref omz_models_public_se_resnext_50_se_resnext_50"/>
<tab type="user" title="SE-ResNeXt 101" url="@ref omz_models_public_se_resnext_101_se_resnext_101"/>
<tab type="user" title="SqueezeNet v1.0" url="@ref omz_models_public_squeezenet1_0_squeezenet1_0"/>
<tab type="user" title="SqueezeNet v1.1" url="@ref omz_models_public_squeezenet1_1_squeezenet1_1"/>
<tab type="user" title="squeezenet1.1-caffe2" url="@ref omz_models_public_squeezenet1_1_caffe2_squeezenet1_1_caffe2"/>
<tab type="user" title="VGG 16" url="@ref omz_models_public_vgg16_vgg16"/>
<tab type="user" title="VGG 19" url="@ref omz_models_public_vgg19_vgg19"/>
<tab type="user" title="vgg19-caffe2" url="@ref omz_models_public_vgg19_caffe2_vgg19_caffe2"/>
</tab>
<tab type="usergroup" title="Segmentation" url="">
<tab type="usergroup" title="Semantic Segmentation" url="">
<tab type="user" title="DeepLab V3" url="@ref omz_models_public_deeplabv3_deeplabv3"/>
</tab>
<tab type="usergroup" title="Instance Segmentation" url="">
<tab type="user" title="Mask R-CNN Inception ResNet V2" url="@ref omz_models_public_mask_rcnn_inception_resnet_v2_atrous_coco_mask_rcnn_inception_resnet_v2_atrous_coco"/>
<tab type="user" title="Mask R-CNN Inception V2" url="@ref omz_models_public_mask_rcnn_inception_v2_coco_mask_rcnn_inception_v2_coco"/>
<tab type="user" title="Mask R-CNN ResNet 50" url="@ref omz_models_public_mask_rcnn_resnet50_atrous_coco_mask_rcnn_resnet50_atrous_coco"/>
<tab type="user" title="Mask R-CNN ResNet 101" url="@ref omz_models_public_mask_rcnn_resnet101_atrous_coco_mask_rcnn_resnet101_atrous_coco"/>
</tab>
<tab type="usergroup" title="3D Semantic Segmentation" url="">
<tab type="user" title="Brain Tumor Segmentation" url="@ref omz_models_public_brain_tumor_segmentation_0001_brain_tumor_segmentation_0001"/>
<tab type="user" title="Brain Tumor Segmentation 2" url="@ref omz_models_public_brain_tumor_segmentation_0002_brain_tumor_segmentation_0002"/>
</tab>
</tab>
<tab type="usergroup" title="Object Detection" url="">
<tab type="user" title="CTPN" url="@ref omz_models_public_ctpn_ctpn"/>
<tab type="user" title="CenterNet (CTDET with DLAV0) 384x384" url="@ref omz_models_public_ctdet_coco_dlav0_384_ctdet_coco_dlav0_384"/>
<tab type="user" title="CenterNet (CTDET with DLAV0) 512x512" url="@ref omz_models_public_ctdet_coco_dlav0_512_ctdet_coco_dlav0_512"/>
<tab type="user" title="FaceBoxes" url="@ref omz_models_public_faceboxes_pytorch_faceboxes_pytorch"/>
<tab type="user" title="Faster R-CNN with Inception-ResNet v2" url="@ref omz_models_public_faster_rcnn_inception_resnet_v2_atrous_coco_faster_rcnn_inception_resnet_v2_atrous_coco"/>
<tab type="user" title="Faster R-CNN with Inception v2" url="@ref omz_models_public_faster_rcnn_inception_v2_coco_faster_rcnn_inception_v2_coco"/>
<tab type="user" title="Faster R-CNN with ResNet 50" url="@ref omz_models_public_faster_rcnn_resnet50_coco_faster_rcnn_resnet50_coco"/>
<tab type="user" title="Faster R-CNN with ResNet 101" url="@ref omz_models_public_faster_rcnn_resnet101_coco_faster_rcnn_resnet101_coco"/>
<tab type="user" title="MobileFace Detection V1" url="@ref omz_models_public_mobilefacedet_v1_mxnet_mobilefacedet_v1_mxnet"/>
<tab type="user" title="MTCNN" url="@ref omz_models_public_mtcnn_mtcnn"/>
<tab type="user" title="Pelee" url="@ref omz_models_public_pelee_coco_pelee_coco"/>
<tab type="user" title="RetinaNet with Resnet 50" url="@ref omz_models_public_retinanet_tf_retinanet_tf"/>
<tab type="user" title="R-FCN with Resnet-101" url="@ref omz_models_public_rfcn_resnet101_coco_tf_rfcn_resnet101_coco_tf"/>
<tab type="user" title="SSD 300" url="@ref omz_models_public_ssd300_ssd300"/>
<tab type="user" title="SSD 512" url="@ref omz_models_public_ssd512_ssd512"/>
<tab type="user" title="SSD with MobileNet" url="@ref omz_models_public_mobilenet_ssd_mobilenet_ssd"/>
<tab type="user" title="ssd_mobilenet_v1_coco" url="@ref omz_models_public_ssd_mobilenet_v1_coco_ssd_mobilenet_v1_coco"/>
<tab type="user" title="SSD with MobileNet FPN" url="@ref omz_models_public_ssd_mobilenet_v1_fpn_coco_ssd_mobilenet_v1_fpn_coco"/>
<tab type="user" title="SSD with MobileNet V2" url="@ref omz_models_public_ssd_mobilenet_v2_coco_ssd_mobilenet_v2_coco"/>
<tab type="user" title="SSD lite with MobileNet V2" url="@ref omz_models_public_ssdlite_mobilenet_v2_ssdlite_mobilenet_v2"/>
<tab type="user" title="SSD with ResNet-50 V1 FPN" url="@ref omz_models_public_ssd_resnet50_v1_fpn_coco_ssd_resnet50_v1_fpn_coco"/>
<tab type="user" title="SSD with ResNet 34 1200x1200" url="@ref omz_models_public_ssd_resnet34_1200_onnx_ssd_resnet34_1200_onnx"/>
<tab type="user" title="RetinaFace-R50" url="@ref omz_models_public_retinaface_resnet50_retinaface_resnet50"/>
<tab type="user" title="RetinaFace-Anti-Cov" url="@ref omz_models_public_retinaface_anti_cov_retinaface_anti_cov"/>
<tab type="user" title="YOLO v1 Tiny" url="@ref omz_models_public_yolo_v1_tiny_tf_yolo_v1_tiny_tf"/>
<tab type="user" title="YOLO v2 Tiny" url="@ref omz_models_public_yolo_v2_tiny_tf_yolo_v2_tiny_tf"/>
<tab type="user" title="YOLO v2" url="@ref omz_models_public_yolo_v2_tf_yolo_v2_tf"/>
<tab type="user" title="YOLO v3" url="@ref omz_models_public_yolo_v3_tf_yolo_v3_tf"/>
</tab>
<tab type="usergroup" title="Face Recognition" url="">
<tab type="user" title="FaceNet" url="@ref omz_models_public_facenet_20180408_102900_facenet_20180408_102900"/>
<tab type="user" title="LResNet34E-IR,ArcFace@ms1m-refine-v1" url="@ref omz_models_public_face_recognition_resnet34_arcface_face_recognition_resnet34_arcface"/>
<tab type="user" title="LResNet50E-IR,ArcFace@ms1m-refine-v1" url="@ref omz_models_public_face_recognition_resnet50_arcface_face_recognition_resnet50_arcface"/>
<tab type="user" title="LResNet100E-IR,ArcFace@ms1m-refine-v2" url="@ref omz_models_public_face_recognition_resnet100_arcface_face_recognition_resnet100_arcface"/>
<tab type="user" title="MobileFaceNet,ArcFace@ms1m-refine-v1" url="@ref omz_models_public_face_recognition_mobilefacenet_arcface_face_recognition_mobilefacenet_arcface"/>
<tab type="user" title="SphereFace" url="@ref omz_models_public_Sphereface_Sphereface"/>
</tab>
<tab type="usergroup" title="Human Pose Estimation" url="">
<tab type="user" title="human-pose-estimation-3d-0001" url="@ref omz_models_public_human_pose_estimation_3d_0001_description_human_pose_estimation_3d_0001"/>
<tab type="user" title="single-human-pose-estimation-0001" url="@ref omz_models_public_single_human_pose_estimation_0001_description_single_human_pose_estimation_0001"/>
</tab>
<tab type="usergroup" title="Monocular Depth Estimation" url="">
<tab type="user" title="midasnet" url="@ref omz_models_public_midasnet_midasnet"/>
</tab>
<tab type="usergroup" title="Image Inpainting" url="">
<tab type="user" title="GMCNN Inpainting" url="@ref omz_models_public_gmcnn_places2_tf_gmcnn_places2_tf"/>
</tab>
<tab type="usergroup" title="Style Transfer" url="">
<tab type="user" title="fast-neural-style-mosaic-onnx" url="@ref omz_models_public_fast_neural_style_mosaic_onnx_fast_neural_style_mosaic_onnx"/>
</tab>
<tab type="usergroup" title="Action Recognition" url="">
<tab type="user" title="RGB-I3D, pretrained on ImageNet\*" url="@ref omz_models_public_i3d_rgb_tf_i3d_rgb_tf"/>
</tab>
<tab type="Colorization" title="Colorization" url="">
<tab type="user" title="colorization-v2" url="@ref omz_models_public_colorization_v2_colorization_v2"/>
<tab type="user" title="colorization-v2-norebal" url="@ref omz_models_public_colorization_v2_norebal_colorization_v2_norebal"/>
</tab>
</tab>
</tab>
<tab id="application_demos" type="usergroup" title="Application Demos" url="@ref omz_demos_README">
<tab type="user" title="Crossroad Camera C++ Demo" url="@ref omz_demos_crossroad_camera_demo_README"/>
<tab type="user" title="Colorization Python Demo" url="@ref omz_demos_python_demos_colorization_demo_README"/>
<tab type="user" title="Image Inpainting Python Demo" url="@ref omz_demos_python_demos_image_inpainting_demo_README"/>
<tab type="user" title="Monodepth Python* Demo" url="@ref omz_demos_python_demos_monodepth_demo_README"/>
<tab type="user" title="Interactive Face Detection C++ Demo" url="@ref omz_demos_interactive_face_detection_demo_README"/>
<tab type="user" title="TensorFlow* Object Detection Mask R-CNNs Segmentation Demo" url="@ref omz_demos_mask_rcnn_demo_README"/>
<tab type="usergroup" title="Multi-Channel Demos" url="@ref omz_demos_multi_channel_README">
<tab type="user" title="Multi-Channel Face Detection C++ Demo" url="@ref omz_demos_multi_channel_face_detection_demo_README"/>
<tab type="user" title="Multi-Channel Human Pose Estimation C++ Demo" url="@ref omz_demos_multi_channel_human_pose_estimation_demo_README"/>
<tab type="user" title="Multi-Channel Object Detection YOLO* V3 C++ Demo" url="@ref omz_demos_multi_channel_object_detection_demo_yolov3_README"/>
</tab>
<tab type="user" title="Object Detection Faster R-CNN Demo" url="@ref omz_demos_object_detection_demo_faster_rcnn_README"/>
<tab type="user" title="Object Detection for RetinaFace Python* Demo" url="@ref omz_demos_python_demos_object_detection_demo_retinaface_README"/>
<tab type="user" title="Object Detection for CenterNet Python* Demo" url="@ref omz_demos_python_demos_object_detection_demo_centernet_README"/>
<tab type="user" title="Object Detection SSD C++ Demo, Async API Performance Showcase" url="@ref omz_demos_object_detection_demo_ssd_async_README"/>
<tab type="user" title="Object Detection SSD Python* Demo, Async API Performance Showcase" url="@ref omz_demos_python_demos_object_detection_demo_ssd_async_README"/>
<tab type="user" title="Security Barrier Camera Demo" url="@ref omz_demos_security_barrier_camera_demo_README"/>
<tab type="user" title="Image Segmentation C++ Demo" url="@ref omz_demos_segmentation_demo_README"/>
<tab type="user" title="Single Human Pose Estimation Python* Demo" url="@ref omz_demos_python_demos_single_human_pose_estimation_demo_README"/>
<tab type="user" title="Image Segmentation Python* Demo" url="@ref omz_demos_python_demos_segmentation_demo_README"/>
<tab type="user" title="Smart Classroom Demo" url="@ref omz_demos_smart_classroom_demo_README"/>
<tab type="user" title="Text Detection Demo" url="@ref omz_demos_text_detection_demo_README"/>
<tab type="user" title="Text Spotting Python* Demo" url="@ref omz_demos_python_demos_text_spotting_demo_README"/>
<tab type="user" title="Handwritten Japanese Recognition Python* Demo" url="@ref omz_demos_python_demos_handwritten_japanese_recognition_demo_README"/>
<tab type="user" title="Gaze Estimation Demo" url="@ref omz_demos_gaze_estimation_demo_README"/>
<tab type="user" title="Human Pose Estimation Demo" url="@ref omz_demos_human_pose_estimation_demo_README"/>
<tab type="user" title="3D Human Pose Estimation Python* Demo" url="@ref omz_demos_python_demos_human_pose_estimation_3d_demo_README"/>
<tab type="user" title="Pedestrian Tracker Demo" url="@ref omz_demos_pedestrian_tracker_demo_README"/>
<tab type="user" title="Super Resolution Demo" url="@ref omz_demos_super_resolution_demo_README"/>
<tab type="user" title="Object Detection YOLO* V3 C++ Demo, Async API Performance Showcase" url="@ref omz_demos_object_detection_demo_yolov3_async_README"/>
<tab type="user" title="Object Detection YOLO* V3 Python* Demo, Async API Performance Showcase" url="@ref omz_demos_python_demos_object_detection_demo_yolov3_async_README"/>
<tab type="user" title="Action Recognition Demo" url="@ref omz_demos_python_demos_action_recognition_README"/>
<tab type="user" title="Instance Segmentation Demo" url="@ref omz_demos_python_demos_instance_segmentation_demo_README"/>
<tab type="user" title="3D Segmentation Demo" url="@ref omz_demos_python_demos_3d_segmentation_demo_README"/>
<tab type="user" title="Image Retrieval Python* Demo" url="@ref omz_demos_python_demos_image_retrieval_demo_README"/>
<tab type="user" title="Multi-Camera Multi-Target Tracking Python* Demo" url="@ref omz_demos_python_demos_multi_camera_multi_target_tracking_README"/>
<tab type="usergroup" title="Speech Library and Speech Recognition Demos" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Speech_libs_and_demos">
<tab type="user" title="Speech Library" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Speech_library"/>
<tab type="user" title="Offline Speech Recognition Demo" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Offline_speech_recognition_demo"/>
<tab type="user" title="Live Speech Recognition Demo" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Live_speech_recognition_demo"/>
<tab type="user" title="Kaldi* Statistical Language Model Conversion Tool" url="@ref openvino_inference_engine_samples_speech_libs_and_demos_Kaldi_SLM_conversion_tool"/>
</tab>
</tab>
<!-- IE Code Samples -->
<tab type="usergroup" title="Inference Engine Code Samples" url="@ref openvino_docs_IE_DG_Samples_Overview">
<tab type="user" title="Image Classification C++ Sample Async" url="@ref openvino_inference_engine_samples_classification_sample_async_README"/>
<tab type="user" title="Image Classification Python* Sample Async" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README"/>
<tab type="user" title="Hello Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_classification_README"/>
<tab type="user" title="Hello Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_classification_README"/>
<tab type="user" title="Image Classification Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_README"/>
<tab type="user" title="Hello Reshape SSD C++ Sample" url="@ref openvino_inference_engine_samples_hello_reshape_ssd_README"/>
<tab type="user" title="Hello NV12 Input Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello NV12 Input Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello Query Device C++ Sample" url="@ref openvino_inference_engine_samples_hello_query_device_README"/>
<tab type="user" title="Hello Query Device Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README"/>
<tab type="user" title="nGraph Function C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection C++ Sample SSD" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection Python* Sample SSD" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection C Sample SSD" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Automatic Speech Recognition C++ Sample" url="@ref openvino_inference_engine_samples_speech_sample_README"/>
<tab type="user" title="Neural Style Transfer C++ Sample" url="@ref openvino_inference_engine_samples_style_transfer_sample_README"/>
<tab type="user" title="Neural Style Transfer Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README"/>
<tab type="user" title="Benchmark C++ App" url="@ref openvino_inference_engine_samples_benchmark_app_README"/>
<tab type="user" title="Benchmark Python* App" url="@ref openvino_inference_engine_tools_benchmark_tool_README"/>
</tab>
<!-- DL Streamer Examples -->
<tab type="usergroup" title="DL Streamer Examples" url="@ref gst_samples_README">
<tab type="usergroup" title="Command Line Samples" url="">
<tab type="user" title="Face Detection And Classification Sample" url="@ref gst_samples_gst_launch_face_detection_and_classification_README"/>
<tab type="user" title="Vehicle and Pedestrian Tracking Sample" url="@ref gst_samples_gst_launch_vehicle_pedestrian_tracking_README"/>
<tab type="usergroup" title="Metadata Publishing Sample" url="@ref gst_samples_gst_launch_metapublish_README">
<tab type="user" title="MetaPublish Listeners" url="@ref gst_samples_gst_launch_metapublish_listener"/>
</tab>
<tab type="user" title="gvapython Sample" url="@ref gst_samples_gst_launch_gvapython_face_detection_and_classification_README"/>
</tab>
<tab type="user" title="Draw Face Attributes C++ Sample" url="@ref gst_samples_cpp_draw_face_attributes_README"/>
<tab type="user" title="Draw Face Attributes Python Sample" url="@ref gst_samples_python_draw_face_attributes_README"/>
<tab type="user" title="Benchmark Sample" url="@ref gst_samples_benchmark_README"/>
</tab>
</tab>
<!-- Chinese docs -->
<tab type="user" title="&#20013;&#25991;&#25991;&#20214;" url="https://docs.openvinotoolkit.org/cn/index.html"/>
</navindex>
<!-- Layout definition for a class page -->
<class>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<inheritancegraph visible="$CLASS_GRAPH"/>
<collaborationgraph visible="$COLLABORATION_GRAPH"/>
<memberdecl>
<nestedclasses visible="yes" title=""/>
<publictypes title=""/>
<services title=""/>
<interfaces title=""/>
<publicslots title=""/>
<signals title=""/>
<publicmethods title=""/>
<publicstaticmethods title=""/>
<publicattributes title=""/>
<publicstaticattributes title=""/>
<protectedtypes title=""/>
<protectedslots title=""/>
<protectedmethods title=""/>
<protectedstaticmethods title=""/>
<protectedattributes title=""/>
<protectedstaticattributes title=""/>
<packagetypes title=""/>
<packagemethods title=""/>
<packagestaticmethods title=""/>
<packageattributes title=""/>
<packagestaticattributes title=""/>
<properties title=""/>
<events title=""/>
<privatetypes title=""/>
<privateslots title=""/>
<privatemethods title=""/>
<privatestaticmethods title=""/>
<privateattributes title=""/>
<privatestaticattributes title=""/>
<friends title=""/>
<related title="" subtitle=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<services title=""/>
<interfaces title=""/>
<constructors title=""/>
<functions title=""/>
<related title=""/>
<variables title=""/>
<properties title=""/>
<events title=""/>
</memberdef>
<allmemberslink visible="yes"/>
<usedfiles visible="$SHOW_USED_FILES"/>
<authorsection visible="yes"/>
</class>
<!-- Layout definition for a namespace page -->
<namespace>
<briefdescription visible="yes"/>
<memberdecl>
<nestednamespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<classes visible="yes" title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection visible="yes"/>
</namespace>
<!-- Layout definition for a file page -->
<file>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<includegraph visible="$INCLUDE_GRAPH"/>
<includedbygraph visible="$INCLUDED_BY_GRAPH"/>
<sourcelink visible="yes"/>
<memberdecl>
<classes visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection/>
</file>
<!-- Layout definition for a group page -->
<group>
<briefdescription visible="yes"/>
<groupgraph visible="$GROUP_GRAPHS"/>
<memberdecl>
<nestedgroups visible="yes" title=""/>
<dirs visible="yes" title=""/>
<files visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<classes visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<pagedocs/>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
</memberdef>
<authorsection visible="yes"/>
</group>
<!-- Layout definition for a directory page -->
<directory>
<briefdescription visible="yes"/>
<directorygraph visible="yes"/>
<memberdecl>
<dirs visible="yes"/>
<files visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
</directory>
</doxygenlayout>

View File

@@ -3,12 +3,10 @@
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="../index.html"/>
<tab type="user" title="GETTING STARTED" url="../index.html"/>
<tab type="user" title="HOW TOs" url="../openvino_docs_how_tos_how_to_links.html"/>
<tab type="user" title="GUIDES" url="../openvino_docs_IE_DG_Introduction.html"/>
<tab type="user" title="RESOURCES" url="../openvino_docs_resources_introduction.html"/>
<tab type="user" title="PERFORMANCE BENCHMARKS" url="../openvino_docs_performance_benchmarks.html"/>
<tab type="usergroup" title="API REFERENCES" url="../usergroup14.html">
<tab type="user" title="Get Started" url="../index.html"/>
<tab type="user" title="Documentation" url="../documentation.html"/>
<tab type="user" title="Examples" url="../examples.html"/>
<tab type="usergroup" title="API REFERENCES" url="../api_references.html">
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
@@ -29,6 +27,10 @@
</tab>
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
<!-- nGraph C++ API -->
<tab type="user" title="nGraph C++ API Reference" url="../ngraph_cpp_api/annotated.html"/>
<!-- nGraph Python API -->
<tab type="user" title="nGraph Python API Reference" url="../ngraph_python_api/files.html"/>
</tab>
<!-- Chinese docs -->
<tab type="user" title="中文文件" url="https://docs.openvinotoolkit.org/cn/index.html"/>

View File

@@ -0,0 +1,206 @@
<doxygenlayout version="1.0">
<!-- Generated by doxygen 1.8.12 -->
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="../index.html"/>
<tab type="user" title="Get Started" url="../index.html"/>
<tab type="user" title="Documentation" url="../documentation.html"/>
<tab type="user" title="Examples" url="../examples.html"/>
<tab type="usergroup" title="API References" url="../api_references.html">
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
<tab type="user" title="OpenCV Developer Guide" url="https://docs.opencv.org/master/"/>
<!-- IE C -->
<tab type="usergroup" title="Inference Engine C API Reference" url="../ie_c_api/groups.html"/>
<tab type="user" title="Inference Engine С++ API Reference" url="../annotated.html"/>
<tab type="user" title="Inference Engine Python API Reference" url="../ie_python_api/annotated.html"/>
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
<!-- nGraph C++ API Reference -->
<tab type="classes" visible="yes" title="nGraph С++ API Reference">
<tab type="classlist" visible="yes" title=""/>
<tab type="hierarchy" visible="yes" title=""/>
<tab type="namespacemembers" visible="yes" title="" intro=""/>
<tab type="pages" visible="no"/>
<tab type="files" visible="no"/>
<tab type="filelist" visible="no"/>
<tab type="globals" visible="no"/>
</tab>
<!-- nGraph Python API -->
<tab type="user" title="nGraph Python API Reference" url="../ngraph_python_api/files.html"/>
</tab>
<!-- Chinese docs -->
<tab type="user" title="中文文件" url="https://docs.openvinotoolkit.org/cn/index.html"/>
</navindex>
<!-- Layout definition for a class page -->
<class>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<inheritancegraph visible="$CLASS_GRAPH"/>
<collaborationgraph visible="$COLLABORATION_GRAPH"/>
<memberdecl>
<nestedclasses visible="yes" title=""/>
<publictypes title=""/>
<services title=""/>
<interfaces title=""/>
<publicslots title=""/>
<signals title=""/>
<publicmethods title=""/>
<publicstaticmethods title=""/>
<publicattributes title=""/>
<publicstaticattributes title=""/>
<protectedtypes title=""/>
<protectedslots title=""/>
<protectedmethods title=""/>
<protectedstaticmethods title=""/>
<protectedattributes title=""/>
<protectedstaticattributes title=""/>
<packagetypes title=""/>
<packagemethods title=""/>
<packagestaticmethods title=""/>
<packageattributes title=""/>
<packagestaticattributes title=""/>
<properties title=""/>
<events title=""/>
<privatetypes title=""/>
<privateslots title=""/>
<privatemethods title=""/>
<privatestaticmethods title=""/>
<privateattributes title=""/>
<privatestaticattributes title=""/>
<friends title=""/>
<related title="" subtitle=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<services title=""/>
<interfaces title=""/>
<constructors title=""/>
<functions title=""/>
<related title=""/>
<variables title=""/>
<properties title=""/>
<events title=""/>
</memberdef>
<allmemberslink visible="yes"/>
<usedfiles visible="$SHOW_USED_FILES"/>
<authorsection visible="yes"/>
</class>
<!-- Layout definition for a namespace page -->
<namespace>
<briefdescription visible="yes"/>
<memberdecl>
<nestednamespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<classes visible="yes" title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection visible="yes"/>
</namespace>
<!-- Layout definition for a file page -->
<file>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<includegraph visible="$INCLUDE_GRAPH"/>
<includedbygraph visible="$INCLUDED_BY_GRAPH"/>
<sourcelink visible="yes"/>
<memberdecl>
<classes visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection/>
</file>
<!-- Layout definition for a group page -->
<group>
<briefdescription visible="yes"/>
<groupgraph visible="$GROUP_GRAPHS"/>
<memberdecl>
<nestedgroups visible="yes" title=""/>
<dirs visible="yes" title=""/>
<files visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<classes visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<pagedocs/>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
</memberdef>
<authorsection visible="yes"/>
</group>
<!-- Layout definition for a directory page -->
<directory>
<briefdescription visible="yes"/>
<directorygraph visible="yes"/>
<memberdecl>
<dirs visible="yes"/>
<files visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
</directory>
</doxygenlayout>

View File

@@ -0,0 +1,200 @@
<doxygenlayout version="1.0">
<!-- Generated by doxygen 1.8.12 -->
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="../index.html"/>
<tab type="user" title="Get Started" url="../index.html"/>
<tab type="user" title="Documentation" url="../documentation.html"/>
<tab type="user" title="Examples" url="../examples.html"/>
<tab type="usergroup" title="API REFERENCES" url="../api_references.html">
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
<tab type="user" title="OpenCV Developer Guide" url="https://docs.opencv.org/master/"/>
<!-- IE C -->
<tab type="usergroup" title="Inference Engine C API Reference" url="../ie_c_api/groups.html"/>
<tab type="user" title="Inference Engine С++ API Reference" url="../annotated.html"/>
<tab type="user" title="Inference Engine Python API Reference" url="../ie_python_api/annotated.html"/>
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
<tab type="user" title="nGraph С++ API Reference" url="../ngraph_cpp_api/annotated.html"/>
<!-- nGraph Python API Reference -->
<tab type="files" visible="yes" title="nGraph Python API Reference">
<tab type="filelist" visible="yes" title="nGraph Python API Reference" intro=""/>
<tab type="globals" visible="yes" title="" intro=""/>
</tab>
</tab>
<!-- Chinese docs -->
<tab type="user" title="中文文件" url="https://docs.openvinotoolkit.org/cn/index.html"/>
</navindex>
<!-- Layout definition for a class page -->
<class>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<inheritancegraph visible="$CLASS_GRAPH"/>
<collaborationgraph visible="$COLLABORATION_GRAPH"/>
<memberdecl>
<nestedclasses visible="yes" title=""/>
<publictypes title=""/>
<services title=""/>
<interfaces title=""/>
<publicslots title=""/>
<signals title=""/>
<publicmethods title=""/>
<publicstaticmethods title=""/>
<publicattributes title=""/>
<publicstaticattributes title=""/>
<protectedtypes title=""/>
<protectedslots title=""/>
<protectedmethods title=""/>
<protectedstaticmethods title=""/>
<protectedattributes title=""/>
<protectedstaticattributes title=""/>
<packagetypes title=""/>
<packagemethods title=""/>
<packagestaticmethods title=""/>
<packageattributes title=""/>
<packagestaticattributes title=""/>
<properties title=""/>
<events title=""/>
<privatetypes title=""/>
<privateslots title=""/>
<privatemethods title=""/>
<privatestaticmethods title=""/>
<privateattributes title=""/>
<privatestaticattributes title=""/>
<friends title=""/>
<related title="" subtitle=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<services title=""/>
<interfaces title=""/>
<constructors title=""/>
<functions title=""/>
<related title=""/>
<variables title=""/>
<properties title=""/>
<events title=""/>
</memberdef>
<allmemberslink visible="yes"/>
<usedfiles visible="$SHOW_USED_FILES"/>
<authorsection visible="yes"/>
</class>
<!-- Layout definition for a namespace page -->
<namespace>
<briefdescription visible="yes"/>
<memberdecl>
<nestednamespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<classes visible="yes" title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection visible="yes"/>
</namespace>
<!-- Layout definition for a file page -->
<file>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<includegraph visible="$INCLUDE_GRAPH"/>
<includedbygraph visible="$INCLUDED_BY_GRAPH"/>
<sourcelink visible="yes"/>
<memberdecl>
<classes visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection/>
</file>
<!-- Layout definition for a group page -->
<group>
<briefdescription visible="yes"/>
<groupgraph visible="$GROUP_GRAPHS"/>
<memberdecl>
<nestedgroups visible="yes" title=""/>
<dirs visible="yes" title=""/>
<files visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<classes visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<pagedocs/>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
</memberdef>
<authorsection visible="yes"/>
</group>
<!-- Layout definition for a directory page -->
<directory>
<briefdescription visible="yes"/>
<directorygraph visible="yes"/>
<memberdecl>
<dirs visible="yes"/>
<files visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
</directory>
</doxygenlayout>

View File

@@ -0,0 +1,308 @@
<doxygenlayout version="1.0" xmlns:xi="http://www.w3.org/2001/XInclude">
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab type="mainpage" title="OpenVINO Home" url="@ref index"/>
<!-- GET STARTED category -->
<tab type="usergroup" title="GET STARTED" url="index.html">
<!-- Install Directly -->
<tab type="usergroup" title="Installation Guides" url=""><!--automatically generated-->
<tab type="usergroup" title="Linux" url="@ref openvino_docs_install_guides_installing_openvino_linux">
<tab type="user" title="Install Intel® Distribution of OpenVINO™ toolkit for Linux* OS" url="@ref openvino_docs_install_guides_installing_openvino_linux"/>
<tab type="user" title="[DEPRECATED] Install Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support" url="@ref openvino_docs_install_guides_installing_openvino_linux_fpga"/>
</tab>
<tab type="usergroup" title="Windows" url="@ref openvino_docs_install_guides_installing_openvino_windows">
<tab type="user" title="Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10" url="@ref openvino_docs_install_guides_installing_openvino_windows"/>
<tab type="user" title="[DEPRECATED] Install Intel® Distribution of OpenVINO™ toolkit for Windows* with FPGA support" url="@ref openvino_docs_install_guides_installing_openvino_windows_fpga"/>
</tab>
<tab type="user" title="macOS" url="@ref openvino_docs_install_guides_installing_openvino_macos"/>
<tab type="user" title="Raspbian OS" url="@ref openvino_docs_install_guides_installing_openvino_raspbian"/>
<tab type="user" title="DL Workbench Installation Guide" url="./workbench_docs_Workbench_DG_Install_Workbench.html"/><!-- Link to the original Workbench topic -->
</tab>
<!-- Install From Images and Repositories -->
<tab type="usergroup" title="Install From Images and Repositories" url="@ref openvino_docs_install_guides_installing_openvino_images">
<tab type="usergroup" title="Docker" url="@ref openvino_docs_install_guides_installing_openvino_docker_linux">
<tab type="user" title="Install Intel® Distribution of OpenVINO™ toolkit for Linux* from a Docker* Image" url="@ref openvino_docs_install_guides_installing_openvino_docker_linux"/>
<tab type="user" title="Install Intel® Distribution of OpenVINO™ toolkit for Windows* from a Docker* Image" url="@ref openvino_docs_install_guides_installing_openvino_docker_windows"/>
</tab>
<tab type="user" title="Docker with DL Workbench" url="./workbench_docs_Workbench_DG_Install_from_Docker_Hub.html"/><!-- Link to the original Workbench topic -->
<tab type="user" title="APT" url="@ref openvino_docs_install_guides_installing_openvino_apt"/>
<tab type="user" title="YUM" url="@ref openvino_docs_install_guides_installing_openvino_yum"/>
<tab type="user" title="Anaconda Cloud" url="@ref openvino_docs_install_guides_installing_openvino_conda"/>
<tab type="user" title="Yocto" url="@ref openvino_docs_install_guides_installing_openvino_yocto"/>
<tab type="user" title="PyPI" url="@ref openvino_docs_install_guides_installing_openvino_pip"/>
<tab type="user" title="Build from Source" url="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode"/>
</tab>
<!-- Get Started Guides-->
<tab type="usergroup" title="Get Started Guides" url=""><!--automatically generated-->
<tab type="user" title="OpenVINO™ Toolkit Overview" url="@ref index"/>
<tab type="user" title="Linux" url="@ref openvino_docs_get_started_get_started_linux"/>
<tab type="user" title="Windows" url="@ref openvino_docs_get_started_get_started_windows"/>
<tab type="user" title="macOS" url="@ref openvino_docs_get_started_get_started_macos"/>
<tab type="user" title="Get Started with OpenVINO via DL Workbench" url="@ref openvino_docs_get_started_get_started_dl_workbench"/>
<tab type="user" title="Legal Information" url="@ref openvino_docs_Legal_Information"/>
</tab>
<!-- Configuration for Hardware -->
<tab type="usergroup" title="Configuration for Hardware" url=""><!--automatically generated-->
<tab type="usergroup" title="VPUs" url="@ref openvino_docs_install_guides_movidius_setup_guide">
<tab type="user" title="Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs on Linux" url="@ref openvino_docs_install_guides_installing_openvino_linux_ivad_vpu"/>
<tab type="user" title="Intel® Movidius™ VPUs Setup Guide" url="@ref openvino_docs_install_guides_movidius_setup_guide"/>
<tab type="user" title="Intel® Movidius™ VPUs Programming Guide" url="@ref openvino_docs_install_guides_movidius_programming_guide"/>
</tab>
<tab type="usergroup" title="[DEPRECATED] FPGAs" url="@ref openvino_docs_install_guides_VisionAcceleratorFPGA_Configure">
<tab type="user" title="[DEPRECATED] Configuration Guide for Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA SG2 (IEIs Mustang-F100-A10) on Linux" url="@ref openvino_docs_install_guides_VisionAcceleratorFPGA_Configure"/>
<tab type="user" title="[DEPRECATED] Configuration Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 FPGA GX on CentOS or Ubuntu*" url="@ref openvino_docs_install_guides_PAC_Configure"/>
</tab>
</tab>
<!-- Security -->
<tab type="usergroup" title="Security" url="@ref openvino_docs_security_guide_introduction"><!--automatically generated-->
<tab type="user" title="Introduction" url="@ref openvino_docs_security_guide_introduction"/>
<tab type="user" title="Using DL Workbench Securely" url="@ref openvino_docs_security_guide_workbench"/>
<tab type="user" title="Using Encrypted Models" url="@ref openvino_docs_IE_DG_protecting_model_guide"/>
</tab>
</tab>
<!-- DOCUMENTATION category -->
<tab type="usergroup" title="DOCUMENTATION"><!--automatically generated-->
<!-- DLDT Documentation-->
<xi:include href="ie_docs.xml" xpointer="xpointer(//tab[@id='converting_and_preparing_models'])"/>
<xi:include href="ie_docs.xml" xpointer="xpointer(//tab[@id='intermediate_representaton_and_operations_sets'])"/>
<xi:include href="ie_docs.xml" xpointer="xpointer(//tab[@id='deploying_inference'])"/>
<!-- Workbench -->
<xi:include href="workbench_docs.xml" xpointer="xpointer(//tab[@id='deep_learning_workbench'])"/>
<!-- Optimization docs -->
<xi:include href="optimization_docs.xml" xpointer="xpointer(//tab[@id='tuning_for_performance'])"/>
<tab type="usergroup" title="Media Processing">
<!-- DL Streamer -->
<tab type="user" title="DL Streamer API Reference" url="https://openvinotoolkit.github.io/dlstreamer_gst/"/>
<!-- DL Streamer Examples -->
<tab type="usergroup" title="DL Streamer Examples" url="@ref gst_samples_README">
</tab>
<!-- OpenVX -->
<tab type="user" title="OpenVX Developer Guide" url="https://software.intel.com/en-us/openvino-ovx-guide"/>
<tab type="user" title="OpenVX API Reference" url="https://khronos.org/openvx"/>
<!-- OpenCV -->
<tab type="user" title="OpenCV* Developer Guide" url="https://docs.opencv.org/master/"/>
<!-- OpenCL -->
<tab type="user" title="OpenCL™ Developer Guide" url="https://software.intel.com/en-us/openclsdk-devguide"/>
</tab>
</tab>
<!-- RESOURCES category -->
<tab type="usergroup" title="RESOURCES">
<!-- Models and Demos Documentation-->
<xi:include href="omz_docs.xml" xpointer="xpointer(//tab[@id='trained_models'])"/>
<xi:include href="omz_docs.xml" xpointer="xpointer(//tab[@id='application_demos'])"/>
<!-- IE Code Samples -->
<tab type="usergroup" title="Inference Engine Code Samples" url="@ref openvino_docs_IE_DG_Samples_Overview">
<tab type="user" title="Image Classification C++ Sample Async" url="@ref openvino_inference_engine_samples_classification_sample_async_README"/>
<tab type="user" title="Image Classification Python* Sample Async" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README"/>
<tab type="user" title="Hello Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_classification_README"/>
<tab type="user" title="Hello Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_classification_README"/>
<tab type="user" title="Image Classification Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_README"/>
<tab type="user" title="Hello Reshape SSD C++ Sample" url="@ref openvino_inference_engine_samples_hello_reshape_ssd_README"/>
<tab type="user" title="Hello NV12 Input Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello NV12 Input Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello Query Device C++ Sample" url="@ref openvino_inference_engine_samples_hello_query_device_README"/>
<tab type="user" title="Hello Query Device Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README"/>
<tab type="user" title="nGraph Function C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection C++ Sample SSD" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection Python* Sample SSD" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection C Sample SSD" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Automatic Speech Recognition C++ Sample" url="@ref openvino_inference_engine_samples_speech_sample_README"/>
<tab type="user" title="Neural Style Transfer C++ Sample" url="@ref openvino_inference_engine_samples_style_transfer_sample_README"/>
<tab type="user" title="Neural Style Transfer Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README"/>
<tab type="user" title="Benchmark C++ Tool" url="@ref openvino_inference_engine_samples_benchmark_app_README"/>
<tab type="user" title="Benchmark Python* Tool" url="@ref openvino_inference_engine_tools_benchmark_tool_README"/>
</tab>
<!-- DL Streamer Examples -->
<tab type="usergroup" title="DL Streamer Examples" url="@ref gst_samples_README">
<tab type="usergroup" title="Command Line Samples" url="">
<tab type="user" title="Audio Detection Sample" url="@ref gst_samples_gst_launch_audio_detect_README"/>
<tab type="user" title="Face Detection And Classification Sample" url="@ref gst_samples_gst_launch_face_detection_and_classification_README"/>
<tab type="user" title="Vehicle and Pedestrian Tracking Sample" url="@ref gst_samples_gst_launch_vehicle_pedestrian_tracking_README"/>
<tab type="usergroup" title="Metadata Publishing Sample" url="@ref gst_samples_gst_launch_metapublish_README">
<tab type="user" title="MetaPublish Listeners" url="@ref gst_samples_gst_launch_metapublish_listener"/>
</tab>
<tab type="user" title="gvapython Sample" url="@ref gst_samples_gst_launch_gvapython_face_detection_and_classification_README"/>
</tab>
<tab type="user" title="Draw Face Attributes C++ Sample" url="@ref gst_samples_cpp_draw_face_attributes_README"/>
<tab type="user" title="Draw Face Attributes Python Sample" url="@ref gst_samples_python_draw_face_attributes_README"/>
<tab type="user" title="Benchmark Sample" url="@ref gst_samples_benchmark_README"/>
</tab>
<tab type="usergroup" title="Add-Ons" url="">
<tab type="user" title="Model Server" url="@ref openvino_docs_ovms"/>
</tab>
</tab>
<!-- Chinese docs -->
<tab type="user" title="中文文件" url="https://docs.openvinotoolkit.org/cn/index.html"/>
</navindex>
<!-- Layout definition for a class page -->
<class>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<inheritancegraph visible="$CLASS_GRAPH"/>
<collaborationgraph visible="$COLLABORATION_GRAPH"/>
<memberdecl>
<nestedclasses visible="yes" title=""/>
<publictypes title=""/>
<services title=""/>
<interfaces title=""/>
<publicslots title=""/>
<signals title=""/>
<publicmethods title=""/>
<publicstaticmethods title=""/>
<publicattributes title=""/>
<publicstaticattributes title=""/>
<protectedtypes title=""/>
<protectedslots title=""/>
<protectedmethods title=""/>
<protectedstaticmethods title=""/>
<protectedattributes title=""/>
<protectedstaticattributes title=""/>
<packagetypes title=""/>
<packagemethods title=""/>
<packagestaticmethods title=""/>
<packageattributes title=""/>
<packagestaticattributes title=""/>
<properties title=""/>
<events title=""/>
<privatetypes title=""/>
<privateslots title=""/>
<privatemethods title=""/>
<privatestaticmethods title=""/>
<privateattributes title=""/>
<privatestaticattributes title=""/>
<friends title=""/>
<related title="" subtitle=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<services title=""/>
<interfaces title=""/>
<constructors title=""/>
<functions title=""/>
<related title=""/>
<variables title=""/>
<properties title=""/>
<events title=""/>
</memberdef>
<allmemberslink visible="yes"/>
<usedfiles visible="$SHOW_USED_FILES"/>
<authorsection visible="yes"/>
</class>
<!-- Layout definition for a namespace page -->
<namespace>
<briefdescription visible="yes"/>
<memberdecl>
<nestednamespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<classes visible="yes" title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection visible="yes"/>
</namespace>
<!-- Layout definition for a file page -->
<file>
<briefdescription visible="yes"/>
<includes visible="$SHOW_INCLUDE_FILES"/>
<includegraph visible="$INCLUDE_GRAPH"/>
<includedbygraph visible="$INCLUDED_BY_GRAPH"/>
<sourcelink visible="yes"/>
<memberdecl>
<classes visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<constantgroups visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<functions title=""/>
<variables title=""/>
</memberdef>
<authorsection/>
</file>
<!-- Layout definition for a group page -->
<group>
<briefdescription visible="yes"/>
<groupgraph visible="$GROUP_GRAPHS"/>
<memberdecl>
<nestedgroups visible="yes" title=""/>
<dirs visible="yes" title=""/>
<files visible="yes" title=""/>
<namespaces visible="yes" title=""/>
<classes visible="yes" title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
<membergroups visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
<memberdef>
<pagedocs/>
<inlineclasses title=""/>
<defines title=""/>
<typedefs title=""/>
<enums title=""/>
<enumvalues title=""/>
<functions title=""/>
<variables title=""/>
<signals title=""/>
<publicslots title=""/>
<protectedslots title=""/>
<privateslots title=""/>
<events title=""/>
<properties title=""/>
<friends title=""/>
</memberdef>
<authorsection visible="yes"/>
</group>
<!-- Layout definition for a directory page -->
<directory>
<briefdescription visible="yes"/>
<directorygraph visible="yes"/>
<memberdecl>
<dirs visible="yes"/>
<files visible="yes"/>
</memberdecl>
<detaileddescription title=""/>
</directory>
</doxygenlayout>

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6675f4b68df7eaa3d6188ecc8b5d53be572cf9c92f53abac3bc6416e6b428d0c
size 196146

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:539deb67a7d1c0e8b0c037f8e7488445be0895e8e717bed5cfec64131936870c
size 198207

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2925e58a71d684e23776e6ed55cc85d9085b3ba5e484720528aeac5fa59f9e3a
size 55404

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4a52661c05977d878c614c4f8510935982ce8a0e120e05690307d7c95e4ab31
size 73999

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ddb0550f3f04c177ec116d6c41e6d3a2ac1fedea7121e10ad3836f84c86a5c78
size 35278

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1e329304ff3d586bb2b8e2442333ede085593f40b1567bd5250508d33d3b9f9
size 32668

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:605515f25a746579d3622b7a274c7dece95e4fbfc6c1817f99431c1abf116070
size 55409

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ca48900ca8f6733c4a8ebc957517fbed80f3c080f53d251eeebb01f082c8f83
size 55646

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba94c2c0e0cb98b9e43c876d060d8a7965182461b0d505167eb71134d4975b8f
size 58204

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:75628b7d02f1fe5c25a233fa16ae1c6c3d5060bf3d15bc7b1e5b9ea71ce50b73
size 50227

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72ab36115cecfee4b215e1b21911ebac3706e513b72eea7bb829932f7bdb3a19
size 70515

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70aee6f0fd30c8e2139950c6bc831dc11b2616ea8f04b991efc9b3f5b7b11ce6
size 88891

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1e297da7f7dfd2af7a0ba47ba1e5c14376f21b15dfcde1fe6f5ad3412ad8feb
size 21296

View File

@@ -0,0 +1,140 @@
# Get Started with OpenVINO™ Toolkit via Deep Learning Workbench {#openvino_docs_get_started_get_started_dl_workbench}
The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit via the Deep Learning Workbench (DL Workbench) on Linux\*, Windows\*, or macOS\*.
In this guide, you will:
* Learn the OpenVINO™ inference workflow.
* Start DL Workbench on Linux. Links to instructions for other operating systems are provided as well.
* Create a project and run a baseline inference.
[DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a web-based graphical environment that provides a single, convenient interface to the following
OpenVINO™ toolkit components:
* [Model Downloader](@ref omz_tools_downloader_README) to download models from the [Intel® Open Model Zoo](@ref omz_models_intel_index),
which provides pretrained models for a wide range of tasks
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to transform models into
the Intermediate Representation (IR) format
* [Post-Training Optimization toolkit](@ref pot_README) to calibrate a model and then execute it in
INT8 precision
* [Accuracy Checker](@ref omz_tools_accuracy_checker_README) to determine the accuracy of a model
* [Benchmark Tool](@ref openvino_inference_engine_samples_benchmark_app_README) to estimate inference performance on supported devices
![](./dl_workbench_img/DL_Workbench.jpg)
DL Workbench supports the following scenarios:
1. [Calibrate the model in INT8 precision](@ref workbench_docs_Workbench_DG_Int_8_Quantization)
2. [Find the best combination](@ref workbench_docs_Workbench_DG_View_Inference_Results) of inference parameters: [number of streams and batches](../optimization_guide/dldt_optimization_guide.md)
3. [Analyze inference results](@ref workbench_docs_Workbench_DG_Visualize_Model) and [compare them across different configurations](@ref workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models)
4. [Implement an optimal configuration into your application](@ref workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application)
## Prerequisites
Prerequisite | Linux* | Windows* | macOS*
:----- | :----- |:----- |:-----
Operating system|Ubuntu\* 18.04. Other Linux distributions, such as Ubuntu\* 16.04 and CentOS\* 7, are not validated.|Windows\* 10 | macOS\* 10.15 Catalina
CPU | Intel® Core™ i5| Intel® Core™ i5 | Intel® Core™ i5
GPU| Intel® Pentium® processor N4200/5 with Intel® HD Graphics | Not supported| Not supported
HDDL, Myriad| Intel® Neural Compute Stick 2 <br> Intel® Vision Accelerator Design with Intel® Movidius™ VPUs| Not supported | Not supported
Available RAM space| 4 GB| 4 GB| 4 GB
Available storage space | 8 GB + space for imported artifacts| 8 GB + space for imported artifacts| 8 GB + space for imported artifacts
Docker\*| Docker CE 18.06.1 | Docker Desktop 2.1.0.1|Docker CE 18.06.1
Web browser| Google Chrome\* 76 <br> Browsers like Mozilla Firefox\* 71 or Apple Safari\* 12 are not validated. <br> Microsoft Internet Explorer\* is not supported.| Google Chrome\* 76 <br> Browsers like Mozilla Firefox\* 71 or Apple Safari\* 12 are not validated. <br> Microsoft Internet Explorer\* is not supported.| Google Chrome\* 76 <br>Browsers like Mozilla Firefox\* 71 or Apple Safari\* 12 are not validated. <br> Microsoft Internet Explorer\* is not supported.
Resolution| 1440 x 890|1440 x 890|1440 x 890
Internet|Optional|Optional|Optional
Installation method| From Docker Hub <br> From OpenVINO™ toolkit package|From Docker Hub|From Docker Hub
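As a quick sanity check before you start, you can confirm that the local Docker engine matches the minimum version listed above. This check is not part of the official prerequisites, just a convenience:
```sh
# Check the installed Docker version against the table above (CE 18.06.1 / Desktop 2.1.0.1)
docker --version
# Optionally verify that the daemon can actually run containers
docker run --rm hello-world
```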
## Start DL Workbench
This section provides instructions to run the DL Workbench on Linux from Docker Hub.
Use the command below to pull the latest Docker image with the application and run it:
```bash
wget https://raw.githubusercontent.com/openvinotoolkit/workbench_aux/master/start_workbench.sh && bash start_workbench.sh
```
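If you prefer not to use the helper script, the container can also be started directly with `docker run`. The image name and published port below are assumptions about the public DL Workbench image; consult the installation guides listed further down for the exact, up-to-date command:
```sh
# Hypothetical direct invocation; verify the image name and port in the installation guide
docker pull openvino/workbench:latest
docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest
```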
DL Workbench uses [authentication tokens](@ref workbench_docs_Workbench_DG_Authentication) to control access to the application. A token
is generated automatically and printed in the console output the first time you run the container. Once the command completes, open the link that contains the token. The **Get Started** page opens:
![](./dl_workbench_img/Get_Started_Page-b.png)
For details and more installation options, visit the links below:
* [Install DL Workbench from Docker Hub* on Linux* OS](@ref workbench_docs_Workbench_DG_Install_from_DockerHub_Linux)
* [Install DL Workbench from Docker Hub on Windows*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub_Win)
* [Install DL Workbench from Docker Hub on macOS*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub_mac)
* [Install DL Workbench from the OpenVINO toolkit package on Linux](@ref workbench_docs_Workbench_DG_Install_from_Package)
## <a name="workflow-overview"></a>OpenVINO™ DL Workbench Workflow Overview
The simplified OpenVINO™ DL Workbench workflow is:
1. **Get a trained model** for your inference task. Example inference tasks: pedestrian detection, face detection, vehicle detection, license plate recognition, head pose.
2. **Run the trained model through the Model Optimizer** to convert the model to an Intermediate Representation, which consists of a pair of `.xml` and `.bin` files that are used as the input for the Inference Engine.
3. **Run inference against the Intermediate Representation** (optimized model) and output inference results. A command-line sketch of steps 2 and 3 follows this list.
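The sketch below reproduces steps 2 and 3 on the command line. It assumes a default toolkit installation under `/opt/intel/openvino_2021`, a SqueezeNet 1.1 Caffe\* model already downloaded to `~/models/public/squeezenet1.1`, and code samples built in `~/inference_engine_samples_build`; adjust these paths to match your setup.
```sh
# Step 2: convert the trained model to an IR (.xml + .bin) with the Model Optimizer
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 ./mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel \
                --data_type FP16 --output_dir ~/models/public/squeezenet1.1/ir

# Step 3: run inference against the produced IR on the CPU with a compiled code sample
source /opt/intel/openvino_2021/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./classification_sample_async \
    -i /opt/intel/openvino_2021/deployment_tools/demo/car.png \
    -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU
```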
## Run Baseline Inference
This section illustrates a sample use case: running inference on a pretrained model from the [Intel® Open Model Zoo](@ref omz_models_intel_index) with an autogenerated noise dataset on a CPU device.
<iframe width="560" height="315" src="https://www.youtube.com/embed/9TRJwEmY0K4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Once you log in to the DL Workbench, create a project, which is a combination of a model, a dataset, and a target device. Follow the steps below:
### Step 1. Open a New Project
On the **Active Projects** page, click **Create** to open the **Create Project** page:
![](./dl_workbench_img/create_configuration.png)
### Step 2. Choose a Pretrained Model
Click **Import** next to the **Model** table on the **Create Project** page. The **Import Model** page opens. Select the squeezenet1.1 model from the Open Model Zoo and click **Import**.
![](./dl_workbench_img/import_model_02.png)
### Step 3. Convert the Model into Intermediate Representation
The **Convert Model to IR** tab opens. Keep the FP16 precision and click **Convert**.
![](./dl_workbench_img/convert_model.png)
You are directed back to the **Create Project** page where you can see the status of the chosen model.
![](./dl_workbench_img/model_loading.png)
### Step 4. Generate a Noise Dataset
Scroll down to the **Validation Dataset** table. Click **Generate** next to the table heading.
![](./dl_workbench_img/validation_dataset.png)
The **Autogenerate Dataset** page opens. Click **Generate**.
![](./dl_workbench_img/generate_dataset.png)
You are directed back to the **Create Project** page where you can see the status of the dataset.
![](./dl_workbench_img/dataset_loading.png)
### Step 5. Create the Project and Run a Baseline Inference
On the **Create Project** page, select the imported model, CPU target, and the generated dataset. Click **Create**.
![](./dl_workbench_img/selected.png)
The inference starts and you cannot proceed until it is done.
![](./dl_workbench_img/inference_banner.png)
Once the inference is complete, the **Projects** page opens automatically. Find your inference job in the **Projects Settings** table, which lists all jobs.
![](./dl_workbench_img/inference_complete.png)
Congratulations, you have performed your first inference in the OpenVINO DL Workbench. Now you can proceed to:
* [Select the inference](@ref workbench_docs_Workbench_DG_Run_Single_Inference)
* [Visualize statistics](@ref workbench_docs_Workbench_DG_Visualize_Model)
* [Experiment with model optimization](@ref workbench_docs_Workbench_DG_Int_8_Quantization)
and inference options to profile the configuration
For detailed instructions to create a new project, visit the links below:
* [Select a model](@ref workbench_docs_Workbench_DG_Select_Model)
* [Select a dataset](@ref workbench_docs_Workbench_DG_Select_Datasets)
* [Select a target and an environment](@ref workbench_docs_Workbench_DG_Select_Environment). This can be your local workstation or a remote target. If you use a remote target, [register the remote machine](@ref workbench_docs_Workbench_DG_Add_Remote_Target) first.
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [OpenVINO™ Toolkit Overview](../index.md)
* [DL Workbench Installation Guide](@ref workbench_docs_Workbench_DG_Install_Workbench)
* [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
* [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
* [Overview of OpenVINO™ Toolkit Pre-Trained Models](https://software.intel.com/en-us/openvino-toolkit/documentation/pretrained-models)
* [OpenVINO™ Hello World Face Detection Exercise](https://github.com/intel-iot-devkit/inference-tutorials-generic)

View File

@@ -23,9 +23,15 @@ In addition, demo scripts, code samples and demo applications are provided to he
## <a name="openvino-installation"></a>Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for Linux*](../install_guides/installing-openvino-linux.md).
By default, the installation directory is `/opt/intel/openvino`, but the installation gave you the option to use the directory of your choice. If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `/opt/intel` with the directory in which you installed the software.
By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as `<INSTALL_DIR>`:
* For root or administrator: `/opt/intel/openvino_<version>/`
* For regular users: `/home/<USER>/intel/openvino_<version>/`
The primary tools for deploying your models and applications are installed to the `/opt/intel/openvino/deployment_tools` directory.
For simplicity, a symbolic link to the latest installation is also created: `/home/<user>/intel/openvino_2021/`
If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `/opt/intel` or `/home/<USER>/` with the directory in which you installed the software.
The primary tools for deploying your models and applications are installed to the `/opt/intel/openvino_2021/deployment_tools` directory.
<details>
<summary><strong>Click for the Intel® Distribution of OpenVINO™ toolkit directory structure</strong></summary>
@@ -57,7 +63,7 @@ The simplified OpenVINO™ workflow is:
## Use the Demo Scripts to Learn the Workflow
The demo scripts in `/opt/intel/openvino/deployment_tools/demo` give you a starting point to learn the OpenVINO workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios. The demo steps let you see how to:
The demo scripts in `/opt/intel/openvino_2021/deployment_tools/demo` give you a starting point to learn the OpenVINO workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios. The demo steps let you see how to:
* Compile several samples from the source files delivered as part of the OpenVINO toolkit.
* Download trained models.
* Perform pipeline steps and see the output on the console.
@@ -189,7 +195,7 @@ You will perform the following steps:
Each demo and code sample is a separate application, but they use the same behavior and components. The code samples and demo applications are:
* [Code Samples](../IE_DG/Samples_Overview.html) - Small console applications that show how to utilize specific OpenVINO capabilities within an application and execute specific tasks such as loading a model, running inference, querying specific device capabilities, and more.
* [Code Samples](../IE_DG/Samples_Overview.md) - Small console applications that show how to utilize specific OpenVINO capabilities within an application and execute specific tasks such as loading a model, running inference, querying specific device capabilities, and more.
* [Demo Applications](@ref omz_demos_README) - Console applications that provide robust application templates to support developers in implementing specific deep learning scenarios. They may also involve more complex processing pipelines that gather analysis from several models that run inference simultaneously. For example concurrently detecting a person in a video stream and detecting attributes such as age, gender and/or emotions.
@@ -221,7 +227,7 @@ This guide uses the Model Downloader to get pre-trained models. You can use one
* **List the models available in the downloader**:
```sh
cd /opt/intel/openvino/deployment_tools/tools/model_downloader/
cd /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/
```
```sh
python3 info_dumper.py --print_all
@@ -325,7 +331,7 @@ The `vehicle-license-plate-detection-barrier-0106`, `vehicle-attributes-recognit
3. Run the Model Optimizer script:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
```
```sh
python3 ./mo.py --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
@@ -338,7 +344,7 @@ The `vehicle-license-plate-detection-barrier-0106`, `vehicle-attributes-recognit
The following command converts the public SqueezeNet 1.1 Caffe\* model to the FP16 IR and saves to the `~/models/public/squeezenet1.1/ir` output directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
```
```sh
python3 ./mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir ~/models/public/squeezenet1.1/ir
@@ -346,9 +352,9 @@ The following command converts the public SqueezeNet 1.1 Caffe\* model to the FP
After the Model Optimizer script is completed, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `~/models/public/squeezenet1.1/ir` directory.
Copy the `squeezenet1.1.labels` file from the `/opt/intel/openvino/deployment_tools/demo/` to `<ir_dir>`. This file contains the classes that ImageNet uses. Therefore, the inference results show text instead of classification numbers:
Copy the `squeezenet1.1.labels` file from `/opt/intel/openvino_2021/deployment_tools/demo/` to `<ir_dir>`. This file contains the classes that ImageNet uses, so the inference results show text labels instead of classification numbers:
```sh
cp /opt/intel/openvino/deployment_tools/demo/squeezenet1.1.labels <ir_dir>
cp /opt/intel/openvino_2021/deployment_tools/demo/squeezenet1.1.labels <ir_dir>
```
</details>
@@ -359,18 +365,18 @@ Many sources are available from which you can download video media to use the co
- https://images.google.com
As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two sample images that you can use for running code samples and demo applications:
* `/opt/intel/openvino/deployment_tools/demo/car.png`
* `/opt/intel/openvino/deployment_tools/demo/car_1.bmp`
* `/opt/intel/openvino_2021/deployment_tools/demo/car.png`
* `/opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp`
### <a name="run-image-classification"></a>Step 4: Run the Image Classification Code Sample
> **NOTE**: The Image Classification code sample is automatically compiled when you ran the Image Classification demo script. If you want to compile it manually, see the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.html#build_samples_linux) section.
> **NOTE**: The Image Classification code sample is compiled automatically when you run the Image Classification demo script. If you want to compile it manually, see the *Build the Sample Applications on Linux* section in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md).
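If you do want to rebuild the samples yourself, a minimal sketch (assuming the default installation path; the build script places the binaries under `~/inference_engine_cpp_samples_build/intel64/Release`):
```sh
cd /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
./build_samples.sh
```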
To run the **Image Classification** code sample with an input image on the IR:
1. Set up the OpenVINO environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Go to the code samples build directory:
```sh
@@ -383,32 +389,32 @@ To run the **Image Classification** code sample with an input image on the IR:
<details>
<summary><strong>Click for examples of running the Image Classification code sample on different devices</strong></summary>
The following commands run the Image Classification Code Sample using the `car.png` file from the `/opt/intel/openvino/deployment_tools/demo/` directory as an input image, the IR of your model from `~/models/public/squeezenet1.1/ir` and on different hardware devices:
The following commands run the Image Classification Code Sample on different hardware devices, using the `car.png` file from the `/opt/intel/openvino_2021/deployment_tools/demo/` directory as an input image and the IR of your model from `~/models/public/squeezenet1.1/ir`:
**CPU:**
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU
```
**GPU:**
> **NOTE**: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps. For details, see the Steps for Intel® Processor Graphics (GPU) section in the [installation instructions](../install_guides/installing-openvino-linux.md).
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d GPU
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d GPU
```
**MYRIAD:**
> **NOTE**: Running inference on VPU devices (Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps. For details, see the Steps for Intel® Neural Compute Stick 2 section in the [installation instructions](../install_guides/installing-openvino-linux.md).
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD
```
**HDDL:**
> **NOTE**: Running inference on the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs device with the HDDL plugin requires additional hardware configuration steps. For details, see the Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs section in the [installation instructions](../install_guides/installing-openvino-linux.md).
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d HDDL
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d HDDL
```
When the Sample Application completes, you see the label and confidence for the top-10 categories on the display. Below is a sample output with inference results on CPU:
@@ -449,7 +455,7 @@ To run the **Security Barrier Camera Demo Application** using an input image on
1. Set up the OpenVINO environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Go to the demo application build directory:
```sh
@@ -466,14 +472,14 @@ To run the **Security Barrier Camera Demo Application** using an input image on
**CPU:**
```sh
./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va /home/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr /home/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU
./security_barrier_camera_demo -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m /home/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va /home/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr /home/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU
```
**GPU:**
> **NOTE**: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps. For details, see the Steps for Intel® Processor Graphics (GPU) section in the [installation instructions](../install_guides/installing-openvino-linux.md).
```sh
./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml -m_lpr <path_to_model>/license-plate-recognition-barrier-0001.xml -d GPU
./security_barrier_camera_demo -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml -m_lpr <path_to_model>/license-plate-recognition-barrier-0001.xml -d GPU
```
**MYRIAD:**
@@ -498,7 +504,7 @@ Following are some basic guidelines for executing the OpenVINO™ workflow using
1. Before using the OpenVINO™ samples, always set up the environment:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Have the directory path for the following:
- Code Sample binaries located in `~/inference_engine_cpp_samples_build/intel64/Release`
@@ -559,7 +565,7 @@ You can see all the sample applications parameters by adding the `-h` or `--h
Use these resources to learn more about the OpenVINO™ toolkit:
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
* [OpenVINO™ Toolkit Overview](../index.md)
* [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
* [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)

View File

@@ -24,10 +24,12 @@ In addition, demo scripts, code samples and demo applications are provided to he
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for macOS*](../install_guides/installing-openvino-macos.md).
By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as `<INSTALL_DIR>`:
* For root or administrator: `/opt/intel/openvino/`
* For regular users: `/home/<USER>/intel/openvino/`
* For root or administrator: `/opt/intel/openvino_<version>/`
* For regular users: `/home/<USER>/intel/openvino_<version>/`
If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `/opt/intel` or `/home/<USER>/` with the directory in which you installed the software.
For simplicity, a symbolic link to the latest installation is also created: `/home/<user>/intel/openvino_2021/`.
If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `/opt/intel` or `/home/<USER>/` with the directory in which you installed the software.
The primary tools for deploying your models and applications are installed to the `<INSTALL_DIR>/deployment_tools` directory.
<details>
@@ -105,7 +107,7 @@ When the script completes, you see the label and confidence for the top-10 categ
Top 10 results:
Image /opt/intel/openvino/deployment_tools/demo/car.png
Image /opt/intel/openvino_2021/deployment_tools/demo/car.png
classid probability label
------- ----------- -----
@@ -216,7 +218,7 @@ This guide uses the Model Downloader to get pre-trained models. You can use one
* **List the models available in the downloader**:
```sh
cd /opt/intel/openvino/deployment_tools/tools/model_downloader/
cd /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/
```
```sh
python3 info_dumper.py --print_all
@@ -321,7 +323,7 @@ The `vehicle-license-plate-detection-barrier-0106`, `vehicle-attributes-recognit
3. Run the Model Optimizer script:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
```
```sh
python3 ./mo.py --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
@@ -334,7 +336,7 @@ The `vehicle-license-plate-detection-barrier-0106`, `vehicle-attributes-recognit
The following command converts the public SqueezeNet 1.1 Caffe\* model to the FP16 IR and saves it to the `~/models/public/squeezenet1.1/ir` output directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
```
```sh
python3 ./mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir ~/models/public/squeezenet1.1/ir
@@ -342,9 +344,9 @@ The following command converts the public SqueezeNet 1.1 Caffe\* model to the FP
After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `~/models/public/squeezenet1.1/ir` directory.
Copy the `squeezenet1.1.labels` file from the `/opt/intel/openvino/deployment_tools/demo/` to `<ir_dir>`. This file contains the classes that ImageNet uses. Therefore, the inference results show text instead of classification numbers:
Copy the `squeezenet1.1.labels` file from `/opt/intel/openvino_2021/deployment_tools/demo/` to `<ir_dir>`. This file contains the classes that ImageNet uses, so the inference results show text labels instead of classification numbers:
```sh
cp /opt/intel/openvino/deployment_tools/demo/squeezenet1.1.labels <ir_dir>
cp /opt/intel/openvino_2021/deployment_tools/demo/squeezenet1.1.labels <ir_dir>
```
</details>
@@ -355,8 +357,8 @@ Many sources are available from which you can download video media to use the co
- https://images.google.com
As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two sample images that you can use for running code samples and demo applications:
* `/opt/intel/openvino/deployment_tools/demo/car.png`
* `/opt/intel/openvino/deployment_tools/demo/car_1.bmp`
* `/opt/intel/openvino_2021/deployment_tools/demo/car.png`
* `/opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp`
### <a name="run-image-classification"></a>Step 4: Run the Image Classification Code Sample
@@ -366,7 +368,7 @@ To run the **Image Classification** code sample with an input image on the IR:
1. Set up the OpenVINO environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Go to the code samples build directory:
```sh
@@ -379,11 +381,11 @@ To run the **Image Classification** code sample with an input image on the IR:
<details>
<summary><strong>Click for examples of running the Image Classification code sample on different devices</strong></summary>
The following commands run the Image Classification Code Sample using the `car.png` file from the `/opt/intel/openvino/deployment_tools/demo/` directory as an input image, the IR of your model from `~/models/public/squeezenet1.1/ir` and on different hardware devices:
The following commands run the Image Classification Code Sample on different hardware devices, using the `car.png` file from the `/opt/intel/openvino_2021/deployment_tools/demo/` directory as an input image and the IR of your model from `~/models/public/squeezenet1.1/ir`:
**CPU:**
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU
```
@@ -391,14 +393,14 @@ The following commands run the Image Classification Code Sample using the `car.p
> **NOTE**: Running inference on VPU devices (Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps. For details, see the Steps for Intel® Neural Compute Stick 2 section in the [installation instructions](../install_guides/installing-openvino-macos.md).
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD
```
When the Sample Application completes, you see the label and confidence for the top-10 categories on the display. Below is a sample output with inference results on CPU:
```sh
Top 10 results:
Image /opt/intel/openvino/deployment_tools/demo/car.png
Image /opt/intel/openvino_2021/deployment_tools/demo/car.png
classid probability label
------- ----------- -----
@@ -426,7 +428,7 @@ To run the **Security Barrier Camera Demo Application** using an input image on
1. Set up the OpenVINO environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Go to the demo application build directory:
```sh
@@ -443,7 +445,7 @@ To run the **Security Barrier Camera Demo Application** using an input image on
**CPU:**
```sh
./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m ~/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va ~/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr ~/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU
./security_barrier_camera_demo -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m ~/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va ~/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr ~/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU
```
**MYRIAD:**
@@ -461,7 +463,7 @@ Following are some basic guidelines for executing the OpenVINO™ workflow using
1. Before using the OpenVINO™ samples, always set up the environment:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Have the directory path for the following:
- Code Sample binaries located in `~/inference_engine_cpp_samples_build/intel64/Release`
@@ -522,7 +524,7 @@ You can see all the sample applications parameters by adding the `-h` or `--h
Use these resources to learn more about the OpenVINO™ toolkit:
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
* [OpenVINO™ Toolkit Overview](../index.md)
* [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
* [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)

View File

@@ -24,7 +24,7 @@ In addition, demo scripts, code samples and demo applications are provided to he
## <a name="openvino-installation"></a>Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for Windows*](../install_guides/installing-openvino-windows.md).
By default, the installation directory is `C:\Program Files (x86)\IntelSWTools\openvino`, referred to as `<INSTALL_DIR>`. If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `C:\Program Files (x86)\IntelSWTools` with the directory in which you installed the software.
By default, the installation directory is `C:\Program Files (x86)\Intel\openvino_<version>`, referred to as `<INSTALL_DIR>`. If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `C:\Program Files (x86)\Intel` with the directory in which you installed the software. For simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\Intel\openvino_2021`.
The primary tools for deploying your models and applications are installed to the `<INSTALL_DIR>\deployment_tools` directory.
<details>
@@ -106,7 +106,7 @@ When the script completes, you see the label and confidence for the top-10 categ
Top 10 results:
Image C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png
Image C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png
classid probability label
------- ----------- -----
@@ -403,7 +403,7 @@ When the Sample Application completes, you see the label and confidence for the
```bat
Top 10 results:
Image C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png
Image C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png
classid probability label
------- ----------- -----
@@ -533,7 +533,7 @@ You can see all the sample applications parameters by adding the `-h` or `--h
Use these resources to learn more about the OpenVINO™ toolkit:
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
* [OpenVINO™ Toolkit Overview](../index.md)
* [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
* [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de85bd59edc66bfd37aab395bc7e2dde2988f16c7ff263153d382bfcbeb9ff2e
size 35998
oid sha256:e2a218afd50f8112f94c032439f69992abb54a551566ab8b4734405d6332499d
size 32796

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b2586ce56ff1a5c0527b53dc21aa09b489c11e24fec82c6a58e2db860a772c4
size 39720
oid sha256:4ad93452fb1020baa7b5de0eb859bdb89e609a4ba8eb382baafb75b3194080ff
size 47318

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3faa8b02a8477b5d764ea2d47502bc0a878087614e0516704cc1525b5b60dedb
size 26412
oid sha256:bfdd8e4dcc4d7acd4a1003e7a3d933f17ac05761c5b2e4ebedd23e7f8242d5db
size 26598

View File

@@ -19,7 +19,7 @@ The following diagram illustrates the typical OpenVINO™ workflow (click to see
### Model Preparation, Conversion and Optimization
You can use your framework of choice to prepare and train a Deep Learning model or just download a pretrained model from the Open Model Zoo. The Open Model Zoo includes Deep Learning solutions to a variety of vision problems, including object recognition, face recognition, pose estimation, text detection, and action recognition, at a range of measured complexities.
Several of these pretrained models are used also in the [code samples](E_DG/Samples_Overview.md) and [application demos](@ref omz_demos_README). To download models from the Open Model Zoo, the [Model Downloader](@ref omz_tools_downloader_README) tool is used.
Several of these pretrained models are also used in the [code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos_README). To download models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader_README) tool.
One of the core components of the OpenVINO™ toolkit is the [Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md), a cross-platform command-line
tool that converts a trained neural network from its source framework to an open-source, nGraph-compatible [Intermediate Representation (IR)](MO_DG/IR_and_opsets.md) for use in inference operations. The Model Optimizer imports models trained in popular frameworks such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX* and performs a few optimizations to remove excess layers and, where possible, group operations into simpler, faster graphs.
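As a minimal sketch of that conversion step (the `frozen_model.pb` file name is only an illustration; the `mo.py` flags are the same ones used in the getting-started guides):
```sh
cd <INSTALL_DIR>/deployment_tools/model_optimizer
python3 ./mo.py --input_model frozen_model.pb --data_type FP16 --output_dir ~/models/ir
```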
@@ -84,10 +84,10 @@ Intel® Distribution of OpenVINO™ toolkit includes the following components:
- [Deep Learning Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) - A cross-platform command-line tool for importing models and preparing them for optimal execution with the Inference Engine. The Model Optimizer imports, converts, and optimizes models, which were trained in popular frameworks, such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*.
- [Deep Learning Inference Engine](IE_DG/inference_engine_intro.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU)
- [Inference Engine Samples](IE_DG/Samples_Overview.md) - A set of simple console applications demonstrating how to use the Inference Engine in your applications
- [Tools](IE_DG/Tools_Overview.md) - A set of simple console tools to work with your models
- Additional Tools - A set of tools to work with your models, including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README), and others
- [Open Model Zoo](@ref omz_models_intel_index)
- [Demos](@ref omz_demos_README) - Console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases
- [Tools](IE_DG/Tools_Overview.md) - Additional tools to download models and check accuracy
- Additional Tools - A set of tools to work with your models, including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README), and others
- [Documentation for Pretrained Models](@ref omz_models_intel_index) - Documentation for pretrained models that are available in the [Open Model Zoo repository](https://github.com/opencv/open_model_zoo)
- [Post-Training Optimization tool](@ref pot_README) - A tool to calibrate a model and then execute it in the INT8 precision
- [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) - A web-based graphical environment that allows you to easily use various sophisticated OpenVINO™ toolkit components
@@ -99,4 +99,4 @@ Intel® Distribution of OpenVINO™ toolkit includes the following components:
- [OpenCV](https://docs.opencv.org/master/) - OpenCV* community version compiled for Intel® hardware
- [Intel® Media SDK](https://software.intel.com/en-us/media-sdk) (in Intel® Distribution of OpenVINO™ toolkit for Linux only)
OpenVINO™ Toolkit opensource version is available on [GitHub](https://github.com/openvinotoolkit/openvino). For building the Inference Engine from the source code, see the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">build instructions</a>.
The OpenVINO™ toolkit open-source version is available on [GitHub](https://github.com/openvinotoolkit/openvino). To build the Inference Engine from the source code, see the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">build instructions</a>.

View File

@@ -39,10 +39,10 @@ Interactive mode provides a user-friendly command-line interface that will guide
./deployment_manager.py
```
2. The target device selection dialog is displayed:
![Deployment Manager selection dialog](../img/selection_dialog.png "Deployment Manager selection dialog")
![Deployment Manager selection dialog](../img/selection_dialog.png)
Use the options provided on the screen to complete the selection of the target devices and press **Enter** to proceed to the package generation dialog. If you want to interrupt the generation process and exit the program, type **q** and press **Enter**.
3. Once you accept the selection, the package generation dialog is displayed:
![Deployment Manager configuration dialog](../img/configuration_dialog.png "Deployment Manager configuration dialog")
![Deployment Manager configuration dialog](../img/configuration_dialog.png)
1. The target devices you have selected in the previous step appear on the screen. If you want to change the selection, type **b** and press **Enter** to go back to the previous screen.
2. Use the options provided to configure the generation process, or use the default settings. A non-interactive alternative is sketched below.
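As an alternative to the interactive dialog, the tool can also be driven from the command line. A minimal non-interactive sketch (the target list, output directory, and archive name below are illustrative assumptions, not required values):
```sh
./deployment_manager.py --targets cpu gpu --output_dir ~/deployment_packages --archive_name openvino_deploy_package
```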

View File

@@ -9,11 +9,17 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu
**Target Operating Systems**
- Ubuntu\* 18.04 long-term support (LTS), 64-bit
- Ubuntu\* 20.04 long-term support (LTS), 64-bit
- CentOS\* 7.6
**Host Operating Systems**
- Linux with an installed GPU driver and a Linux kernel supported by the GPU driver
## Prebuilt images
Prebuilt images are available on [Docker Hub](https://hub.docker.com/u/openvino).
## Use Docker* Image for CPU
- The kernel reports the same information for all containers as for a native application, for example, CPU and memory information.
@@ -22,127 +28,14 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu
### <a name="building-for-cpu"></a>Build a Docker* Image for CPU
To build a Docker image, create a `Dockerfile` that contains the variables and commands required to create an OpenVINO toolkit installation image.
Create your `Dockerfile` using the following example as a template:
<details>
<summary>Click to expand/collapse</summary>
```sh
FROM ubuntu:18.04
USER root
WORKDIR /
SHELL ["/bin/bash", "-xo", "pipefail", "-c"]
# Creating user openvino
RUN useradd -ms /bin/bash openvino && \
chown openvino -R /home/openvino
ARG DEPENDENCIES="autoconf \
automake \
build-essential \
cmake \
cpio \
curl \
gnupg2 \
libdrm2 \
libglib2.0-0 \
lsb-release \
libgtk-3-0 \
libtool \
udev \
unzip \
dos2unix"
RUN apt-get update && \
apt-get install -y --no-install-recommends ${DEPENDENCIES} && \
rm -rf /var/lib/apt/lists/*
WORKDIR /thirdparty
RUN sed -Ei 's/# deb-src /deb-src /' /etc/apt/sources.list && \
apt-get update && \
apt-get source ${DEPENDENCIES} && \
rm -rf /var/lib/apt/lists/*
# setup Python
ENV PYTHON python3.6
RUN apt-get update && \
apt-get install -y --no-install-recommends python3-pip python3-dev lib${PYTHON}=3.6.9-1~18.04 && \
rm -rf /var/lib/apt/lists/*
ARG package_url=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_0000.0.000.tgz
ARG TEMP_DIR=/tmp/openvino_installer
WORKDIR ${TEMP_DIR}
ADD ${package_url} ${TEMP_DIR}
# install product by installation script
ENV INTEL_OPENVINO_DIR /opt/intel/openvino
RUN tar -xzf ${TEMP_DIR}/*.tgz --strip 1
RUN sed -i 's/decline/accept/g' silent.cfg && \
${TEMP_DIR}/install.sh -s silent.cfg && \
${INTEL_OPENVINO_DIR}/install_dependencies/install_openvino_dependencies.sh
WORKDIR /tmp
RUN rm -rf ${TEMP_DIR}
# installing dependencies for package
WORKDIR /tmp
RUN ${PYTHON} -m pip install --no-cache-dir setuptools && \
find "${INTEL_OPENVINO_DIR}/" -type f -name "*requirements*.*" -path "*/${PYTHON}/*" -exec ${PYTHON} -m pip install --no-cache-dir -r "{}" \; && \
find "${INTEL_OPENVINO_DIR}/" -type f -name "*requirements*.*" -not -path "*/post_training_optimization_toolkit/*" -not -name "*windows.txt" -not -name "*ubuntu16.txt" -not -path "*/python3*/*" -not -path "*/python2*/*" -exec ${PYTHON} -m pip install --no-cache-dir -r "{}" \;
WORKDIR ${INTEL_OPENVINO_DIR}/deployment_tools/open_model_zoo/tools/accuracy_checker
RUN source ${INTEL_OPENVINO_DIR}/bin/setupvars.sh && \
${PYTHON} -m pip install --no-cache-dir -r ${INTEL_OPENVINO_DIR}/deployment_tools/open_model_zoo/tools/accuracy_checker/requirements.in && \
${PYTHON} ${INTEL_OPENVINO_DIR}/deployment_tools/open_model_zoo/tools/accuracy_checker/setup.py install
WORKDIR ${INTEL_OPENVINO_DIR}/deployment_tools/tools/post_training_optimization_toolkit
RUN if [ -f requirements.txt ]; then \
${PYTHON} -m pip install --no-cache-dir -r ${INTEL_OPENVINO_DIR}/deployment_tools/tools/post_training_optimization_toolkit/requirements.txt && \
${PYTHON} ${INTEL_OPENVINO_DIR}/deployment_tools/tools/post_training_optimization_toolkit/setup.py install; \
fi;
# Post-installation cleanup and setting up OpenVINO environment variables
RUN if [ -f "${INTEL_OPENVINO_DIR}"/bin/setupvars.sh ]; then \
printf "\nsource \${INTEL_OPENVINO_DIR}/bin/setupvars.sh\n" >> /home/openvino/.bashrc; \
printf "\nsource \${INTEL_OPENVINO_DIR}/bin/setupvars.sh\n" >> /root/.bashrc; \
fi;
RUN find "${INTEL_OPENVINO_DIR}/" -name "*.*sh" -type f -exec dos2unix {} \;
USER openvino
WORKDIR ${INTEL_OPENVINO_DIR}
CMD ["/bin/bash"]
```
</details>
> **NOTE**: Please replace the direct link to the Intel® Distribution of OpenVINO™ toolkit package in the `package_url` argument with the link to the latest version. You can copy the link from the [Intel® Distribution of OpenVINO™ toolkit download page](https://software.seek.intel.com/openvino-toolkit) after registration. Right-click the **Offline Installer** button on the download page for Linux in your browser and press **Copy link address**.
You can select which OpenVINO components will be installed by modifying the `COMPONENTS` parameter in the `silent.cfg` file. For example, to install only the CPU runtime for the Inference Engine, set
`COMPONENTS=intel-openvino-ie-rt-cpu__x86_64` in `silent.cfg`.
To get a full list of available components for installation, run the `./install.sh --list_components` command from the unpacked OpenVINO™ toolkit package.
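As a sketch of how this could look in the Dockerfile above (the second `sed` expression is an assumption about how you might rewrite the `COMPONENTS` line; adjust it to the components you need):
```sh
RUN sed -i 's/decline/accept/g' silent.cfg && \
    sed -i 's/^COMPONENTS=.*/COMPONENTS=intel-openvino-ie-rt-cpu__x86_64/' silent.cfg && \
    ${TEMP_DIR}/install.sh -s silent.cfg
```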
To build a Docker* image for CPU, run the following command:
```sh
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
```
You can use [available Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via the [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO toolkit.
The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
### Run the Docker* Image for CPU
Run the image with the following command:
```sh
docker run -it <image_name>
docker run -it --rm <image_name>
```
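Once the container is running, one way to sanity-check the installation (a sketch, assuming the image keeps the default `/opt/intel/openvino` install path set in the Dockerfile above and includes the Python Inference Engine bindings):
```sh
docker run -it --rm <image_name> /bin/bash -c \
  "source /opt/intel/openvino/bin/setupvars.sh && \
   python3 -c 'from openvino.inference_engine import IECore; print(IECore().available_devices)'"
```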
## Use a Docker* Image for GPU
### Build a Docker* Image for GPU
@@ -153,8 +46,9 @@ docker run -it <image_name>
- Intel® OpenCL™ runtime package must be included into the container.
- In the container, user must be in the `video` group.
Before building a Docker* image on GPU, add the following commands to the `Dockerfile` example for CPU above:
Before building a Docker* image on GPU, add the following commands to a Dockerfile:
**Ubuntu 18.04/20.04**:
```sh
WORKDIR /tmp/opencl
RUN usermod -aG video openvino
@@ -170,28 +64,36 @@ RUN apt-get update && \
ldconfig && \
rm /tmp/opencl
```
To build a Docker* image for GPU, run the following command:
**CentOS 7.6**:
```sh
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
WORKDIR /tmp/opencl
RUN groupmod -g 44 video
RUN yum update -y && yum install -y epel-release && \
yum update -y && yum install -y ocl-icd ocl-icd-devel && \
yum clean all && rm -rf /var/cache/yum && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-core-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-core-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-opencl-19.41.14441-1.el7.x86_64.rpm/download -o intel-opencl-19.41.14441-1.el7.x86_64.rpm && \
rpm -ivh ./*.rpm && \
ldconfig && \
rm -rf ${TEMP_DIR} && \
yum remove -y epel-release
```
### Run the Docker* Image for GPU
To make the GPU available in the container, attach the GPU to the container using the `--device /dev/dri` option and run the container:
```sh
docker run -it --device /dev/dri <image_name>
docker run -it --rm --device /dev/dri <image_name>
```
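Before running workloads, a quick sanity check (assuming a standard DRM setup on the host) is to confirm that the render device nodes passed through with `--device /dev/dri` are visible inside the container:
```sh
# On the host
ls -l /dev/dri
# Inside the container
docker run -it --rm --device /dev/dri <image_name> ls -l /dev/dri
```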
## Use a Docker* Image for Intel® Neural Compute Stick 2
### Build a Docker* Image for Intel® Neural Compute Stick 2
Build a Docker image using the same steps as for CPU.
### Run the Docker* Image for Intel® Neural Compute Stick 2
### Build and Run the Docker* Image for Intel® Neural Compute Stick 2
**Known limitations:**
@@ -199,12 +101,24 @@ Build a Docker image using the same steps as for CPU.
- UDEV events are not forwarded to the container by default, so it does not know about device reconnection.
- Only one device per host is supported.
Use one of the following options to run **Possible solutions for Intel® Neural Compute Stick 2:**
Use one of the following possible solutions for Intel® Neural Compute Stick 2:
- **Solution #1**:
1. Get rid of UDEV by rebuilding `libusb` without UDEV support in the Docker* image (add the following commands to the `Dockerfile` example for CPU above):<br>
#### Option #1
1. Get rid of UDEV by rebuilding `libusb` without UDEV support in the Docker* image (add the following commands to a `Dockerfile`):
- **Ubuntu 18.04/20.04**:
```sh
ARG BUILD_DEPENDENCIES="autoconf \
automake \
build-essential \
libtool \
unzip \
udev"
RUN apt-get update && \
apt-get install -y --no-install-recommends ${BUILD_DEPENDENCIES} && \
rm -rf /var/lib/apt/lists/*
RUN usermod -aG users openvino
WORKDIR /opt
RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
unzip v1.0.22.zip
@@ -213,9 +127,6 @@ WORKDIR /opt/libusb-1.0.22
RUN ./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4
RUN apt-get update && \
apt-get install -y --no-install-recommends libusb-1.0-0-dev=2:1.0.21-2 && \
rm -rf /var/lib/apt/lists/*
WORKDIR /opt/libusb-1.0.22/libusb
RUN /bin/mkdir -p '/usr/local/lib' && \
@@ -226,38 +137,103 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
<br>
2. Run the Docker* image:<br>
- **CentOS 7.6**:
```sh
docker run --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
ARG BUILD_DEPENDENCIES="autoconf \
automake \
libtool \
unzip \
udev"
# hadolint ignore=DL3031, DL3033
RUN yum update -y && yum install -y ${BUILD_DEPENDENCIES} && \
yum group install -y "Development Tools" && \
yum clean all && rm -rf /var/cache/yum
WORKDIR /opt
RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
unzip v1.0.22.zip && rm -rf v1.0.22.zip
WORKDIR /opt/libusb-1.0.22
RUN ./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4
WORKDIR /opt/libusb-1.0.22/libusb
RUN /bin/mkdir -p '/usr/local/lib' && \
/bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib' && \
/bin/mkdir -p '/usr/local/include/libusb-1.0' && \
/usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
/bin/mkdir -p '/usr/local/lib/pkgconfig' && \
printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino/bin/setupvars.sh
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
2. Run the Docker* image:
```sh
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
```
- **Solution #2**:
Run container in privileged mode, enable Docker network configuration as host, and mount all devices to container:<br>
#### Option #2
Run the container in privileged mode, enable the Docker network configuration as host, and mount all devices to the container:
```sh
docker run --privileged -v /dev:/dev --network=host <image_name>
docker run -it --rm --privileged -v /dev:/dev --network=host <image_name>
```
> **Notes**:
> - It is not secure
> - Conflicts with Kubernetes* and other tools that use orchestration and private networks
> **NOTES**:
> - It is not secure.
> - Conflicts with Kubernetes* and other tools that use orchestration and private networks may occur.
## Use a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
### Build Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
1. Set up the environment on the host machine, that is going to be used for running Docker*. It is required to execute `hddldaemon`, which is responsible for communication between the HDDL plugin and the board. To learn how to set up the environment (the OpenVINO package must be pre-installed), see [Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](installing-openvino-linux-ivad-vpu.md).
2. Prepare the Docker* image. As a base image, you can use the image from the section [Building Docker Image for CPU](#building-for-cpu). To use it for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs you need to rebuild the image with adding the following dependencies:
1. Set up the environment on the host machine that is going to be used for running Docker*.
It is required to execute `hddldaemon`, which is responsible for communication between the HDDL plugin and the board.
To learn how to set up the environment (the OpenVINO package or HDDL package must be pre-installed), see [Configuration guide for HDDL device](https://github.com/openvinotoolkit/docker_ci/blob/master/install_guide_vpu_hddl.md) or [Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](installing-openvino-linux-ivad-vpu.md).
2. Prepare the Docker* image (add the following commands to a Dockerfile).
- **Ubuntu 18.04**:
```sh
WORKDIR /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libboost-filesystem1.65-dev=1.65.1+dfsg-0ubuntu5 \
libboost-thread1.65-dev=1.65.1+dfsg-0ubuntu5 \
libjson-c3=0.12.1-1.3 libxxf86vm-dev=1:1.1.4-1 && \
rm -rf /var/lib/apt/lists/*
libboost-filesystem1.65-dev \
libboost-thread1.65-dev \
libjson-c3 libxxf86vm-dev && \
rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*
```
- **Ubuntu 20.04**:
```sh
WORKDIR /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libboost-filesystem-dev \
libboost-thread-dev \
libjson-c4 \
libxxf86vm-dev && \
rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*
```
- **CentOS 7.6**:
```sh
WORKDIR /tmp
RUN yum update -y && yum install -y \
boost-filesystem \
boost-thread \
boost-program-options \
boost-system \
boost-chrono \
boost-date-time \
boost-regex \
boost-atomic \
json-c \
libXxf86vm-devel && \
yum clean all && rm -rf /var/cache/yum
```
3. Run `hddldaemon` on the host in a separate terminal session using the following command:
```sh
@@ -267,22 +243,50 @@ $HDDL_INSTALL_DIR/hddldaemon
### Run the Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To run the built Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, use the following command:
```sh
docker run --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp -ti <image_name>
docker run -it --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp <image_name>
```
> **NOTE**:
> **NOTES**:
> - The device `/dev/ion` needs to be shared to be able to use ion buffers among the plugin, `hddldaemon` and the kernel.
> - Since separate inference tasks share the same HDDL service communication interface (the service creates mutexes and a socket file in `/var/tmp`), `/var/tmp` needs to be mounted and shared among them.
In some cases, the ion driver is not enabled (for example, due to a newer kernel version or an iommu incompatibility), and `lsmod | grep myd_ion` returns an empty output. To resolve this, use the following command:
```sh
docker run --rm --net=host -v /var/tmp:/var/tmp ipc=host -ti <image_name>
docker run -it --rm --net=host -v /var/tmp:/var/tmp --ipc=host <image_name>
```
> **NOTE**:
> **NOTES**:
> - When building Docker images, create a user in the Dockerfile that has the same UID and GID as the user that runs `hddldaemon` on the host (see the sketch below).
> - Run the application in the docker with this user.
> - Alternatively, you can start hddldaemon with the root user on host, but this approach is not recommended.
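A minimal sketch of such a Dockerfile fragment (the UID/GID value `1000` and the user name are assumptions; use the IDs of the host user that actually runs `hddldaemon`):
```sh
# Assumption: the host user running hddldaemon has UID 1000 and GID 1000
RUN groupadd -g 1000 hddlgroup && \
    useradd -u 1000 -g 1000 -ms /bin/bash hddluser
USER hddluser
```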
### Run Demos in the Docker* Image
To run the Security Barrier Camera Demo on a specific inference device, run the following commands with root privileges (additional third-party dependencies will be installed):
**CPU**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
  /bin/bash -c "apt update && apt install sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d CPU -sample-options -no_show"
```
**GPU**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
  /bin/bash -c "apt update && apt install sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d GPU -sample-options -no_show"
```
**MYRIAD**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
  /bin/bash -c "apt update && apt install sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d MYRIAD -sample-options -no_show"
```
**HDDL**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
  /bin/bash -c "apt update && apt install sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d HDDL -sample-options -no_show"
```
## Use a Docker* Image for FPGA
Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
@@ -291,12 +295,14 @@ Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue t
For instructions for previous releases with FPGA Support, see documentation for the [2020.4 version](https://docs.openvinotoolkit.org/2020.4/openvino_docs_install_guides_installing_openvino_docker_linux.html#use_a_docker_image_for_fpga) or lower.
## Examples
* [ubuntu18_runtime dockerfile](https://docs.openvinotoolkit.org/downloads/ubuntu18_runtime.dockerfile) - Can be used to build an OpenVINO™ runtime image containing the minimal dependencies needed to use OpenVINO™ in a production environment.
* [ubuntu18_dev dockerfile](https://docs.openvinotoolkit.org/downloads/ubuntu18_dev.dockerfile) - Can be used to build an OpenVINO™ developer image containing the full OpenVINO™ package for use in a development environment.
## Troubleshooting
If you encounter proxy issues, set up the proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
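As a minimal sketch (assuming a Linux Docker client; the proxy URLs are placeholders), the client-side proxy can also be persisted in `~/.docker/config.json` so that it is picked up by `docker build` and `docker run`:
```sh
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://your_proxy_server.com:port",
      "httpsProxy": "https://your_proxy_server.com:port"
    }
  }
}
EOF
```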
## Additional Resources
* [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
* Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
* OpenVINO™ toolkit documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)

View File

@@ -15,140 +15,75 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu
- Windows 10*, 64-bit Pro, Enterprise or Education (1607 Anniversary Update, Build 14393 or later) editions
- Windows Server* 2016 or higher
## Prebuilt Images
Prebuilt images are available on [Docker Hub](https://hub.docker.com/u/openvino).
## Build a Docker* Image for CPU
To build a Docker image, create a `Dockerfile` that contains the variables and commands required to create an OpenVINO toolkit installation image.
You can use [available Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via the [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO toolkit.
The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
Create your `Dockerfile` using the following example as a template:
<details>
<summary>Click to expand/collapse</summary>
~~~
# escape= `
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
USER ContainerAdministrator
# Setup Redistributable Libraries for Intel(R) C++ Compiler for Windows*
RUN powershell.exe -Command `
Invoke-WebRequest -URI https://software.intel.com/sites/default/files/managed/59/aa/ww_icl_redist_msi_2018.3.210.zip -Proxy %HTTPS_PROXY% -OutFile "%TMP%\ww_icl_redist_msi_2018.3.210.zip" ; `
Expand-Archive -Path "%TMP%\ww_icl_redist_msi_2018.3.210.zip" -DestinationPath "%TMP%\ww_icl_redist_msi_2018.3.210" -Force ; `
Remove-Item "%TMP%\ww_icl_redist_msi_2018.3.210.zip" -Force
RUN %TMP%\ww_icl_redist_msi_2018.3.210\ww_icl_redist_intel64_2018.3.210.msi /quiet /passive /log "%TMP%\redist.log"
# setup Python
ARG PYTHON_VER=python3.7
RUN powershell.exe -Command `
Invoke-WebRequest -URI https://www.python.org/ftp/python/3.7.6/python-3.7.6-amd64.exe -Proxy %HTTPS_PROXY% -OutFile %TMP%\\python-3.7.exe ; `
Start-Process %TMP%\\python-3.7.exe -ArgumentList '/passive InstallAllUsers=1 PrependPath=1 TargetDir=c:\\Python37' -Wait ; `
Remove-Item %TMP%\\python-3.7.exe -Force
RUN python -m pip install --upgrade pip
RUN python -m pip install cmake
# download package from external URL
ARG package_url=http://registrationcenter-download.intel.com/akdlm/irc_nas/16613/w_openvino_toolkit_p_0000.0.000.exe
ARG TEMP_DIR=/temp
WORKDIR ${TEMP_DIR}
ADD ${package_url} ${TEMP_DIR}
# install product by installation script
ARG build_id=0000.0.000
ENV INTEL_OPENVINO_DIR C:\intel
RUN powershell.exe -Command `
Start-Process "./*.exe" -ArgumentList '--s --a install --eula=accept --installdir=%INTEL_OPENVINO_DIR% --output=%TMP%\openvino_install_out.log --components=OPENVINO_COMMON,INFERENCE_ENGINE,INFERENCE_ENGINE_SDK,INFERENCE_ENGINE_SAMPLES,OMZ_TOOLS,POT,INFERENCE_ENGINE_CPU,INFERENCE_ENGINE_GPU,MODEL_OPTIMIZER,OMZ_DEV,OPENCV_PYTHON,OPENCV_RUNTIME,OPENCV,DOCS,SETUPVARS,VC_REDIST_2017_X64,icl_redist' -Wait
ENV INTEL_OPENVINO_DIR C:\intel\openvino_${build_id}
# Post-installation cleanup
RUN rmdir /S /Q "%USERPROFILE%\Downloads\Intel"
# dev package
WORKDIR ${INTEL_OPENVINO_DIR}
RUN python -m pip install --no-cache-dir setuptools && `
python -m pip install --no-cache-dir -r "%INTEL_OPENVINO_DIR%\python\%PYTHON_VER%\requirements.txt" && `
python -m pip install --no-cache-dir -r "%INTEL_OPENVINO_DIR%\python\%PYTHON_VER%\openvino\tools\benchmark\requirements.txt" && `
python -m pip install --no-cache-dir torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
WORKDIR ${TEMP_DIR}
COPY scripts\install_requirements.bat install_requirements.bat
RUN install_requirements.bat %INTEL_OPENVINO_DIR%
WORKDIR ${INTEL_OPENVINO_DIR}\deployment_tools\open_model_zoo\tools\accuracy_checker
RUN %INTEL_OPENVINO_DIR%\bin\setupvars.bat && `
python -m pip install --no-cache-dir -r "%INTEL_OPENVINO_DIR%\deployment_tools\open_model_zoo\tools\accuracy_checker\requirements.in" && `
python "%INTEL_OPENVINO_DIR%\deployment_tools\open_model_zoo\tools\accuracy_checker\setup.py" install
WORKDIR ${INTEL_OPENVINO_DIR}\deployment_tools\tools\post_training_optimization_toolkit
RUN python -m pip install --no-cache-dir -r "%INTEL_OPENVINO_DIR%\deployment_tools\tools\post_training_optimization_toolkit\requirements.txt" && `
python "%INTEL_OPENVINO_DIR%\deployment_tools\tools\post_training_optimization_toolkit\setup.py" install
WORKDIR ${INTEL_OPENVINO_DIR}
# Post-installation cleanup
RUN powershell Remove-Item -Force -Recurse "%TEMP%\*" && `
powershell Remove-Item -Force -Recurse "%TEMP_DIR%" && `
rmdir /S /Q "%ProgramData%\Package Cache"
USER ContainerUser
CMD ["cmd.exe"]
~~~
</details>
> **NOTE**: Replace the direct link to the Intel® Distribution of OpenVINO™ toolkit package in the `package_url` variable with the link to the latest version, and modify the install package name in the subsequent commands. You can copy the link from the [Intel® Distribution of OpenVINO™ toolkit download page](https://software.seek.intel.com/openvino-toolkit) after registration. Right-click the **Offline Installer** button on the download page for Windows in your browser and press **Copy link address**.
> **NOTE**: Replace build number of the package in the `build_id` variable according to the name of the downloaded Intel® Distribution of OpenVINO™ toolkit package. For example, for the installation file `w_openvino_toolkit_p_2020.3.333.exe`, the `build_id` variable should have the value `2020.3.333`.
To build a Docker* image for CPU, run the following command:
~~~
docker build . -t <image_name> `
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> `
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
~~~
## Install additional dependencies
## Install Additional Dependencies
### Install CMake
To add CMake to the image, add the following commands to the `Dockerfile` example above:
To add CMake to the image, add the following commands to the Dockerfile:
~~~
RUN powershell.exe -Command `
Invoke-WebRequest -URI https://cmake.org/files/v3.14/cmake-3.14.7-win64-x64.msi -Proxy %HTTPS_PROXY% -OutFile %TMP%\\cmake-3.14.7-win64-x64.msi ; `
Invoke-WebRequest -URI https://cmake.org/files/v3.14/cmake-3.14.7-win64-x64.msi -OutFile %TMP%\\cmake-3.14.7-win64-x64.msi ; `
Start-Process %TMP%\\cmake-3.14.7-win64-x64.msi -ArgumentList '/quiet /norestart' -Wait ; `
Remove-Item %TMP%\\cmake-3.14.7-win64-x64.msi -Force
RUN SETX /M PATH "C:\Program Files\CMake\Bin;%PATH%"
~~~
In case of proxy issues, please add the `ARG HTTPS_PROXY` and `-Proxy %HTTPS_PROXY%` settings to the `powershell.exe` command in the Dockerfile. Then build a Docker image:
~~~
docker build . -t <image_name> `
--build-arg HTTPS_PROXY=<https://your_proxy_server:port>
~~~
### Install Microsoft Visual Studio* Build Tools
You can add Microsoft Visual Studio Build Tools* to Windows* OS Docker image. Available options are to use offline installer for Build Tools
(follow [Instruction for the offline installer](https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio?view=vs-2019) or
to use online installer for Build Tools (follow [Instruction for the online installer](https://docs.microsoft.com/en-us/visualstudio/install/build-tools-container?view=vs-2019).
You can add Microsoft Visual Studio Build Tools* to a Windows* OS Docker image. Available options are to use offline installer for Build Tools
(follow the [Instruction for the offline installer](https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio?view=vs-2019)) or
to use the online installer for Build Tools (follow [Instruction for the online installer](https://docs.microsoft.com/en-us/visualstudio/install/build-tools-container?view=vs-2019)).
Microsoft Visual Studio Build Tools* are licensed as a supplement to your existing Microsoft Visual Studio* license.
Any images built with these tools should be for your personal use or for use in your organization in accordance with your existing Visual Studio* and Windows* licenses.
To add MSBuild 2019 to the image, add the following commands to the Dockerfile:
~~~
RUN powershell.exe -Command Invoke-WebRequest -URI https://aka.ms/vs/16/release/vs_buildtools.exe -OutFile %TMP%\\vs_buildtools.exe
RUN %TMP%\\vs_buildtools.exe --quiet --norestart --wait --nocache `
--installPath "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools" `
--add Microsoft.VisualStudio.Workload.MSBuildTools `
--add Microsoft.VisualStudio.Workload.UniversalBuildTools `
--add Microsoft.VisualStudio.Workload.VCTools --includeRecommended `
--remove Microsoft.VisualStudio.Component.Windows10SDK.10240 `
--remove Microsoft.VisualStudio.Component.Windows10SDK.10586 `
--remove Microsoft.VisualStudio.Component.Windows10SDK.14393 `
--remove Microsoft.VisualStudio.Component.Windows81SDK || IF "%ERRORLEVEL%"=="3010" EXIT 0 && powershell set-executionpolicy remotesigned
~~~
In case of proxy issues, please use the offline installer for Build Tools (follow the [Instruction for the offline installer](https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio?view=vs-2019)).
## Run the Docker* Image for CPU
To install the OpenVINO toolkit from the prepared Docker image, run the image with the following command (currently only the CPU target is supported):
~~~
docker run -it --rm <image_name>
~~~
## Examples
* [winserver2019_runtime dockerfile](https://docs.openvinotoolkit.org/downloads/winserver2019_runtime.dockerfile) - Can be used to build an OpenVINO™ runtime image containing the minimal dependencies needed to use OpenVINO™ in a production environment.
* [winserver2019_dev dockerfile](https://docs.openvinotoolkit.org/downloads/winserver2019_dev.dockerfile) - Can be used to build an OpenVINO™ developer image containing the full OpenVINO™ package for use in a development environment.
If you want to try some demos, run the image with root privileges (some additional third-party dependencies will be installed):
~~~
docker run -itu ContainerAdministrator --rm <image_name> cmd /S /C "cd deployment_tools\demo && demo_security_barrier_camera.bat -d CPU -sample-options -no_show"
~~~
## Troubleshooting
If you have proxy issues, set up the proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
## Additional Resources
* [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
* Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
* OpenVINO™ toolkit documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)


@@ -0,0 +1,14 @@
# Install From Images and Repositories {#openvino_docs_install_guides_installing_openvino_images}
You may install Intel® Distribution of OpenVINO™ toolkit from images and repositories using the **Install OpenVINO™** button above or directly from the [Get the Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html) page. Use the documentation below if you need additional support:
* [Docker](installing-openvino-docker-linux.md)
* [Docker with DL Workbench](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub)
* [APT](installing-openvino-apt.md)
* [YUM](installing-openvino-yum.md)
* [Anaconda Cloud](installing-openvino-conda.md)
* [Yocto](installing-openvino-yocto.md)
* [PyPI](installing-openvino-pip.md)
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and you can build it for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
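As a quick illustration, prebuilt images can also be pulled directly from Docker Hub; for example (the tag below is one of the published development images and is given only as an example):
```sh
docker pull openvino/ubuntu18_dev:latest
```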


@@ -9,7 +9,7 @@
## Introduction
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI and deep learning inference deployed from edge to cloud.
The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
- Enables CNN-based deep learning inference on the edge
@@ -28,7 +28,21 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
| [Inference Engine Code Samples](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
| [Demo Applications](@ref omz_demos_README) | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
* [Model Downloader](@ref omz_tools_downloader_README)
* [Intel® Open Model Zoo](@ref omz_models_intel_index)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
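For instance, a typical way to start the DL Workbench is from its prebuilt image on Docker Hub (shown here only as a sketch; the linked guide is the authoritative source for the exact options):
```sh
docker pull openvino/workbench:latest
docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest
```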
## System Requirements
@@ -84,28 +98,25 @@ If you downloaded the package file to the current user's `Downloads` directory:
```sh
cd ~/Downloads/
```
By default, the file is saved as `l_openvino_toolkit_p_<version>.tgz`.
3. Unpack the .tgz file:
```sh
tar -xvzf l_openvino_toolkit_p_<version>.tgz
```
The files are unpacked to the `l_openvino_toolkit_p_<version>` directory.
4. Go to the `l_openvino_toolkit_p_<version>` directory:
```sh
cd l_openvino_toolkit_p_<version>
```
If you have a previous version of the Intel Distribution of OpenVINO
toolkit installed, rename or delete these two directories:
- `~/inference_engine_samples_build`
- `~/openvino_models`
**Installation Notes:**
- Choose an installation option and run the related script as root.
- You can use either a GUI installation wizard or command-line instructions (CLI).
- Screenshots are provided for the GUI, but not for CLI. The following information also applies to CLI and will be helpful to your installation where you will be presented with the same choices and tasks.
5. Choose your installation option:
- **Option 1:** GUI Installation Wizard:
@@ -116,6 +127,15 @@ sudo ./install_GUI.sh
```sh
sudo ./install.sh
```
- **Option 3:** Command-Line Silent Instructions:
```sh
sudo sed -i 's/decline/accept/g' silent.cfg
sudo ./install.sh -s silent.cfg
```
You can select which OpenVINO components will be installed by modifying the `COMPONENTS` parameter in the `silent.cfg` file. For example, to install only CPU runtime for the Inference Engine, set
`COMPONENTS=intel-openvino-ie-rt-cpu__x86_64` in `silent.cfg`.
To get a full list of available components for installation, run the `./install.sh --list_components` command from the unpacked OpenVINO™ toolkit package.
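For example, a fully non-interactive CPU-only installation could be scripted roughly as follows (a sketch; it assumes `silent.cfg` contains a `COMPONENTS=` line as described above):
```sh
sudo sed -i 's/decline/accept/g' silent.cfg
sudo sed -i 's/^COMPONENTS=.*/COMPONENTS=intel-openvino-ie-rt-cpu__x86_64/' silent.cfg
sudo ./install.sh -s silent.cfg
```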
6. Follow the instructions on your screen. Watch for informational
messages such as the following in case you must complete additional
steps:
@@ -128,7 +148,7 @@ looks like this:
![](../img/openvino-install-linux-03.png)
When installed as **root** the default installation directory for the Intel Distribution of OpenVINO is
`/opt/intel/openvino_<version>/`.<br>
For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino/`.
For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino_2021/`.
> **NOTE**: The Intel® Media SDK component is always installed in the `/opt/intel/mediasdk` directory regardless of the OpenVINO installation path chosen.
8. A Complete screen indicates that the core components have been installed:
@@ -149,20 +169,20 @@ These dependencies are required for:
1. Change to the `install_dependencies` directory:
```sh
cd /opt/intel/openvino/install_dependencies
cd /opt/intel/openvino_2021/install_dependencies
```
2. Run a script to download and install the external software dependencies:
```sh
sudo -E ./install_openvino_dependencies.sh
```
The dependencies are installed. Continue to the next section to set your environment variables.
## <a name="set-the-environment-variables"></a>Set the Environment Variables
You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
**Optional:** The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:
@@ -174,7 +194,7 @@ vi <user_directory>/.bashrc
2. Add this line to the end of the file:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
3. Save and close the file: press the **Esc** key and type `:wq`.
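Equivalently, assuming the default installation path, you can append the line without opening an editor:
```sh
echo "source /opt/intel/openvino_2021/bin/setupvars.sh" >> ~/.bashrc
```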
@@ -210,7 +230,7 @@ You can choose to either configure all supported frameworks at once **OR** confi
1. Go to the Model Optimizer prerequisites directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe,
TensorFlow 1.x, MXNet, Kaldi\*, and ONNX:
@@ -224,7 +244,7 @@ Configure individual frameworks separately **ONLY** if you did not select **Opti
1. Go to the Model Optimizer prerequisites directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
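For example (the script names below are an assumption based on the framework list; run only the ones you need):
```sh
./install_prerequisites_tf.sh
./install_prerequisites_onnx.sh
```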
@@ -271,32 +291,30 @@ To verify the installation and compile two samples, use the steps below to run t
1. Go to the **Inference Engine demo** directory:
```sh
cd /opt/intel/openvino/deployment_tools/demo
cd /opt/intel/openvino_2021/deployment_tools/demo
```
2. Run the **Image Classification verification script**:
```sh
./demo_squeezenet_download_convert_run.sh
```
This verification script downloads a SqueezeNet model and uses the Model Optimizer to convert the model to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.<br>
The verification script then builds the [Image Classification Sample Async](../../inference-engine/samples/classification_sample_async/README.md) application and runs it with the `car.png` image located in the demo directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
![](../img/image_classification_script_output_lnx.png)
3. Run the **Inference Pipeline verification script**:
```sh
./demo_security_barrier_camera.sh
```
This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.<br>
First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.<br>
When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
![](../img/inference_pipeline_script_lnx.png)
4. Close the image viewer window to complete the verification script.
To learn about the verification scripts, see the `README.txt` file in `/opt/intel/openvino/deployment_tools/demo`.
To learn about the verification scripts, see the `README.txt` file in `/opt/intel/openvino_2021/deployment_tools/demo`.
For a description of the Intel Distribution of OpenVINO™ pre-trained object detection and object recognition models, see [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index).
@@ -312,7 +330,7 @@ The steps in this section are required only if you want to enable the toolkit co
1. Go to the install_dependencies directory:
```sh
cd /opt/intel/openvino/install_dependencies/
cd /opt/intel/openvino_2021/install_dependencies/
```
2. Enter the super user mode:
```sh
@@ -322,20 +340,15 @@ sudo -E su
```sh
./install_NEO_OCL_driver.sh
```
The drivers are not included in the package and the script downloads them. Make sure you have the internet connection for this step.<br>
The script compares the driver version on the system to the current version. If the driver version on the system is higher or equal to the current version, the script does
not install a new driver. If the version of the driver is lower than the current version, the script uninstalls the lower and installs the current version with your permission:
![](../img/NEO_check_agreement.png)
Higher hardware versions require a higher driver version, namely 20.35 instead of 19.41. If the script fails to uninstall the driver, uninstall it manually. During the script execution, you may see the following command line output:
```sh
Add OpenCL user to video group
```
Ignore this suggestion and continue.
4. **Optional** Install header files to allow compiling a new code. You can find the header files at [Khronos OpenCL™ API Headers](https://github.com/KhronosGroup/OpenCL-Headers.git).
## <a name="additional-NCS-steps"></a>Steps for Intel® Neural Compute Stick 2
@@ -346,11 +359,10 @@ These steps are only required if you want to perform inference on Intel® Movidi
```sh
sudo usermod -a -G users "$(whoami)"
```
Log out and log in for it to take effect.
2. To perform inference on Intel® Neural Compute Stick 2, install the USB rules as follows:
```sh
sudo cp /opt/intel/openvino/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
sudo cp /opt/intel/openvino_2021/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
```
```sh
sudo udevadm control --reload-rules
@@ -373,7 +385,7 @@ After configuration is done, you are ready to run the verification scripts with
1. Go to the **Inference Engine demo** directory:
```sh
cd /opt/intel/openvino/deployment_tools/demo
cd /opt/intel/openvino_2021/deployment_tools/demo
```
2. Run the **Image Classification verification script**. If you have access to the Internet through the proxy server only, please make sure that it is configured in your OS environment.
@@ -403,7 +415,7 @@ To run the sample application:
1. Set up environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
2. Go to the samples build directory:
```sh
@@ -414,24 +426,24 @@ cd ~/inference_engine_samples_build/intel64/Release
- **For CPU**:
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d CPU
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d CPU
```
- **For GPU**:
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d GPU
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d GPU
```
- **For MYRIAD**:
> **NOTE**: Running inference on Intel® Neural Compute Stick 2 with the MYRIAD plugin requires performing [additional hardware configuration steps](#additional-NCS-steps).
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d MYRIAD
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d MYRIAD
```
- **For HDDL**:
> **NOTE**: Running inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with the HDDL plugin requires performing [additional hardware configuration steps](installing-openvino-linux-ivad-vpu.md)
```sh
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d HDDL
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d HDDL
```
For information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).


@@ -1,22 +1,21 @@
# Install Intel® Distribution of OpenVINO™ toolkit for macOS* {#openvino_docs_install_guides_installing_openvino_macos}
> **NOTES**:
> - The Intel® Distribution of OpenVINO™ is supported on macOS\* 10.14.x versions.
> - This installation has been validated on macOS 10.14.4.
> - The Intel® Distribution of OpenVINO™ is supported on macOS\* 10.15.x versions.
> - An internet connection is required to follow the steps in this guide. If you have access to the Internet through the proxy server only, please make sure that it is configured in your OS environment.
## Introduction
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance.
The Intel® Distribution of OpenVINO™ toolkit for macOS* includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT) and OpenCV* to deploy applications for accelerated inference on Intel® CPUs.
The Intel® Distribution of OpenVINO™ toolkit for macOS* includes the Inference Engine, OpenCV* libraries and Model Optimizer tool to deploy applications for accelerated inference on Intel® CPUs and Intel® Neural Compute Stick 2.
The Intel® Distribution of OpenVINO™ toolkit for macOS*:
- Enables CNN-based deep learning inference on the edge
- Supports heterogeneous execution across Intel® CPU and Intel® Neural Compute Stick 2 with Intel® Movidius™ VPUs
- Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
- Includes optimized calls for computer vision standards including OpenCV\*
**Included with the Installation**
@@ -32,6 +31,19 @@ The following components are installed by default:
| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
* [Model Downloader](@ref omz_tools_downloader_README)
* [Intel® Open Model Zoo](@ref omz_models_intel_index)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## Development and Target Platform
The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.
@@ -107,11 +119,11 @@ The disk image is mounted to `/Volumes/m_openvino_toolkit_p_<version>` and autom
- If you used **root** or **administrator** privileges to run the installer, it installs the OpenVINO toolkit to `/opt/intel/openvino_<version>/`
For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino/`
For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino_2021/`
- If you used **regular user** privileges to run the installer, it installs the OpenVINO toolkit to `/home/<user>/intel/openvino_<version>/`
For simplicity, a symbolic link to the latest installation is also created: `/home/<user>/intel/openvino/`
For simplicity, a symbolic link to the latest installation is also created: `/home/<user>/intel/openvino_2021/`
9. If needed, click **Customize** to change the installation directory or the components you want to install:
![](../img/openvino-install-macos-04.png)
@@ -131,7 +143,7 @@ The disk image is mounted to `/Volumes/m_openvino_toolkit_p_<version>` and autom
You need to update several environment variables before you can compile and run OpenVINO™ applications. Open the macOS Terminal\* or a command-line interface shell you prefer and run the following script to temporarily set your environment variables:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
<strong>Optional</strong>: The OpenVINO environment variables are removed when you close the shell. You can permanently set the environment variables as follows:
@@ -144,7 +156,7 @@ You need to update several environment variables before you can compile and run
3. Add this line to the end of the file:
```sh
source /opt/intel/openvino/bin/setupvars.sh
source /opt/intel/openvino_2021/bin/setupvars.sh
```
3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key.
@@ -178,7 +190,7 @@ You can choose to either configure the Model Optimizer for all supported framewo
1. Go to the Model Optimizer prerequisites directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe, TensorFlow 1.x, MXNet, Kaldi\*, and ONNX:
@@ -192,7 +204,7 @@ Configure individual frameworks separately **ONLY** if you did not select **Opti
1. Go to the Model Optimizer prerequisites directory:
```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
@@ -243,7 +255,7 @@ To verify the installation and compile two Inference Engine samples, run the ver
1. Go to the **Inference Engine demo** directory:
```sh
cd /opt/intel/openvino/deployment_tools/demo
cd /opt/intel/openvino_2021/deployment_tools/demo
```
2. Run the **Image Classification verification script**:
@@ -263,7 +275,7 @@ This script is complete. Continue to the next section to run the Inference Pipel
### Run the Inference Pipeline Verification Script
While still in `/opt/intel/openvino/deployment_tools/demo/`, run the Inference Pipeline verification script:
While still in `/opt/intel/openvino_2021/deployment_tools/demo/`, run the Inference Pipeline verification script:
```sh
./demo_security_barrier_camera.sh
```
@@ -299,7 +311,7 @@ Visit the Intel Distribution of OpenVINO Toolkit [Inference Tutorials for Face D
## Additional Resources
- To learn more about the verification applications, see `README.txt` in `/opt/intel/openvino/deployment_tools/demo/`.
- To learn more about the verification applications, see `README.txt` in `/opt/intel/openvino_2021/deployment_tools/demo/`.
- For detailed description of the pre-trained models, go to the [Overview of OpenVINO toolkit Pre-Trained Models](@ref omz_models_intel_index) page.


@@ -0,0 +1,82 @@
# Install Intel® Distribution of OpenVINO™ Toolkit from PyPI Repository {#openvino_docs_install_guides_installing_openvino_pip}
This guide provides installation steps for the Intel® distribution of OpenVINO™ toolkit distributed through the PyPI repository.
## System Requirements
* [Python* distribution](https://www.python.org/) 3.6 or 3.7
* Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit
- macOS* 10.15.x versions
- Windows 10*, 64-bit Pro, Enterprise or Education (1607 Anniversary Update, Build 14393 or higher) editions
- Windows Server* 2016 or higher
## Install the Runtime Package Using the PyPI Repository
### Step 1. Set up and update pip to the highest version
Run the command below:
```sh
python3 -m pip install --upgrade pip
```
### Step 2. Install the Intel® distribution of OpenVINO™ toolkit
Run the command below:
```sh
pip install openvino-python
```
### Step 3. Add PATH to environment variables
Run a command for your operating system:
- Ubuntu 18.04 and macOS:
```sh
export LD_LIBRARY_PATH=<library_dir>:${LD_LIBRARY_PATH}
```
- Windows* 10:
```sh
set PATH=<library_dir>;%PATH%
```
To find `library_dir`:
**Ubuntu, macOS**:
- Standard user:
```sh
echo $(python3 -m site --user-base)/lib
```
- Root or sudo user:
```sh
/usr/local/lib
```
- Virtual environments or custom Python installations (from sources or tarball):
```sh
echo $(which python3)/../../lib
```
**Windows**:
- Standard Python:
```sh
python -c "import os, sys; print((os.path.dirname(sys.executable))+'\Library\\bin')"
```
- Virtual environments or custom Python installations (from sources or tarball):
```sh
python -c "import os, sys; print((os.path.dirname(sys.executable))+'\..\Library\\bin')"
```
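Putting it together for Ubuntu with a standard (user) Python installation, the export might look like this (a sketch combining the two commands above):
```sh
export LD_LIBRARY_PATH=$(python3 -m site --user-base)/lib:${LD_LIBRARY_PATH}
```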
### Step 4. Verify that the package is installed
Run the command below:
```sh
python3 -c "import openvino"
```
Now you are ready to develop and run your application.
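As an additional sanity check, you can, for example, list the devices visible to the Inference Engine (this assumes the `openvino.inference_engine` Python API shipped with the package):
```sh
python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
```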
## Additional Resources
- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit).
- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org).
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
- [Intel® Distribution of OpenVINO™ toolkit PIP home page](https://pypi.org/project/openvino-python/)


@@ -144,7 +144,7 @@ mkdir build && cd build
2. Build the Object Detection Sample:
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples*/cpp*
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
```
```sh
make -j2 object_detection_sample_ssd


@@ -38,9 +38,9 @@ Your installation is complete when these are all completed:
### About the Intel® Distribution of OpenVINO™ toolkit
The Intel® Distribution of OpenVINO™ toolkit speeds the deployment of applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware to maximize performance.
OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI and deep learning inference deployed from edge to cloud.
The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT). For more information, see the online [Intel® Distribution of OpenVINO™ toolkit Overview](https://software.intel.com/en-us/OpenVINO-toolkit) page.
For more information, see the online [Intel® Distribution of OpenVINO™ toolkit Overview](https://software.intel.com/en-us/OpenVINO-toolkit) page.
The Intel® Distribution of OpenVINO™ toolkit for Windows\* 10 OS:
@@ -63,6 +63,19 @@ The following components are installed by default:
| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
* [Model Downloader](@ref omz_tools_downloader_README)
* [Intel® Open Model Zoo](@ref omz_models_intel_index)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
### System Requirements
**Hardware**
@@ -99,7 +112,7 @@ The following components are installed by default:
1. If you have not downloaded the Intel® Distribution of OpenVINO™ toolkit, [download the latest version](http://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-windows). By default, the file is saved to the `Downloads` directory as `w_openvino_toolkit_p_<version>.exe`.
2. Go to the `Downloads` folder and double-click `w_openvino_toolkit_p_<version>.exe`. A window opens to let you choose your installation directory and components. The default installation directory is `C:\Program Files (x86)\IntelSWTools\openvino_<version>`, for simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\IntelSWTools\openvino`. If you choose a different installation directory, the installer will create the directory for you:
2. Go to the `Downloads` folder and double-click `w_openvino_toolkit_p_<version>.exe`. A window opens to let you choose your installation directory and components. The default installation directory is `C:\Program Files (x86)\Intel\openvino_<version>`. For simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\Intel\openvino_2021`. If you choose a different installation directory, the installer will create the directory for you:
![](../img/openvino-install-windows-01.png)
@@ -124,11 +137,11 @@ The screen example below indicates you are missing two dependencies:
### Set the Environment Variables <a name="set-the-environment-variables"></a>
> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ to the non-default install directory, replace `C:\Program Files (x86)\IntelSWTools` with the directory in which you installed the software.
> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ to the non-default install directory, replace `C:\Program Files (x86)\Intel` with the directory in which you installed the software.
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables:
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\bin\
cd C:\Program Files (x86)\Intel\openvino_2021\bin\
```
```sh
@@ -152,7 +165,7 @@ The Model Optimizer is a key component of the Intel® Distribution of OpenVINO
The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.
The Model Optimizer is a Python*-based command line tool (`mo.py`), which is located in `C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer`. Use this tool on models trained with popular deep learning frameworks such as Caffe\*, TensorFlow\*, MXNet\*, and ONNX\* to convert them to an optimized IR format that the Inference Engine can use.
The Model Optimizer is a Python*-based command line tool (`mo.py`), which is located in `C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer`. Use this tool on models trained with popular deep learning frameworks such as Caffe\*, TensorFlow\*, MXNet\*, and ONNX\* to convert them to an optimized IR format that the Inference Engine can use.
This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the **Using Manual Configuration Process** section on the [Configuring the Model Optimizer](../MO_DG/prepare_model/Config_Model_Optimizer.md) page.
@@ -167,8 +180,8 @@ You can configure the Model Optimizer either for all supported frameworks at onc
> **NOTE**:
> In the steps below:
> - If you want to use the Model Optimizer from another installed version of the Intel® Distribution of OpenVINO™ toolkit, replace `openvino` with `openvino_<version>`.
> - If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace `C:\Program Files (x86)\IntelSWTools` with the directory where you installed the software.
> - If you want to use the Model Optimizer from another installed version of the Intel® Distribution of OpenVINO™ toolkit, replace `openvino_2021` with `openvino_<version>`, where `<version>` is the required version.
> - If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace `C:\Program Files (x86)\Intel` with the directory where you installed the software.
These steps use a command prompt to make sure you see error messages.
@@ -181,7 +194,7 @@ Type commands in the opened window:
2. Go to the Model Optimizer prerequisites directory.<br>
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites
cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites
```
3. Run the following batch file to configure the Model Optimizer for Caffe\*, TensorFlow\* 1.x, MXNet\*, Kaldi\*, and ONNX\*:<br>
@@ -193,7 +206,7 @@ install_prerequisites.bat
1. Go to the Model Optimizer prerequisites directory:<br>
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites
cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites
```
2. Run the batch file for the framework you will use with the Model Optimizer. You can use more than one:
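For example (the batch file names are an assumption mirroring the framework list; run only those you need):
```sh
install_prerequisites_tf.bat
install_prerequisites_onnx.bat
```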
@@ -242,14 +255,14 @@ If you want to use a GPU or VPU, or update your Windows* environment variables,
> **IMPORTANT**: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
> **NOTE**:
> The paths in this section assume you used the default installation directory. If you used a directory other than `C:\Program Files (x86)\IntelSWTools`, update the directory with the location where you installed the software.
> The paths in this section assume you used the default installation directory. If you used a directory other than `C:\Program Files (x86)\Intel`, update the directory with the location where you installed the software.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
1. Open a command prompt window.
2. Go to the Inference Engine demo directory:<br>
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\
cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
```
3. Run the verification scripts by following the instructions in the next section.
@@ -291,7 +304,7 @@ When the demo completes, you have two windows open:
Close the image viewer window to end the demo.
To learn more about the verification scripts, see `README.txt` in `C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo`.
To learn more about the verification scripts, see `README.txt` in `C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo`.
For detailed description of the OpenVINO™ pre-trained object detection and object recognition models, see the [Overview of OpenVINO™ toolkit Pre-Trained Models](@ref omz_models_intel_index) page.
@@ -358,7 +371,7 @@ After configuration is done, you are ready to run the verification scripts with
2. Go to the Inference Engine demo directory:
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\
cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
```
3. Run the Image Classification verification script. If you have access to the Internet through the proxy server only, please make sure that it is configured in your environment.
```sh
@@ -405,13 +418,13 @@ Image Classification sample application binary file was automatically built and
The Image Classification sample application binary file is located in the `C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\` directory.
The Caffe* Squeezenet model IR files (`.bin` and `.xml`) are located in the `C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\` directory.
> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace `C:\Program Files (x86)\IntelSWTools` with the directory where you installed the software.
> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace `C:\Program Files (x86)\Intel` with the directory where you installed the software.
To run the sample application:
1. Set up environment variables:
```sh
cd C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat
"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
```
2. Go to the samples build directory:
```sh
@@ -422,22 +435,22 @@ cd C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_samples_build\i
- For CPU:
```sh
classification_sample_async.exe -i "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d CPU
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d CPU
```
- For GPU:
```sh
classification_sample_async.exe -i "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d GPU
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d GPU
```
- For VPU (Intel® Neural Compute Stick 2):
```sh
classification_sample_async.exe -i "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d MYRIAD
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d MYRIAD
```
- For VPU (Intel® Vision Accelerator Design with Intel® Movidius™ VPUs):
```sh
classification_sample_async.exe -i "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d HDDL
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d HDDL
```
For information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
@@ -463,7 +476,7 @@ To learn more about converting deep learning models, go to:
- [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
- [Intel Distribution of OpenVINO Toolkit documentation](https://software.intel.com/en-us/openvino-toolkit/documentation/featured)
- [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
- [Introduction to Intel® Deep Learning Deployment Toolkit](../IE_DG/Introduction.md)
- [Introduction to Inference Engine](inference_engine_intro.md)
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)

docs/model_server/README.md

@@ -0,0 +1,143 @@
# OpenVINO&trade; Model Server {#openvino_docs_ovms}
OpenVINO&trade; Model Server (OVMS) is a scalable, high-performance solution for serving machine learning models optimized for Intel&reg; architectures.
The server provides an inference service via gRPC or REST API - making it easy to deploy new algorithms and AI experiments using the same
architecture as [TensorFlow* Serving](https://github.com/tensorflow/serving) for any models trained in a framework that is supported
by [OpenVINO](https://software.intel.com/en-us/openvino-toolkit).
The server implements gRPC and REST API framework with data serialization and deserialization using TensorFlow Serving API,
and OpenVINO&trade; as the inference execution provider. Model repositories may reside on a locally accessible file system (for example, NFS),
Google Cloud Storage\* (GCS), Amazon S3\*, MinIO\*, or Azure Blob Storage\*.
OVMS is now implemented in C++ and provides much higher scalability compared to its predecessor in the Python version.
You can take advantage of all the power of Xeon® CPU capabilities or AI accelerators and expose it over the network interface.
Read the [release notes](https://github.com/openvinotoolkit/model_server/releases) to find out what's new in the C++ version.
Review the [Architecture Concept](https://github.com/openvinotoolkit/model_server/blob/main/docs/architecture.md) document for more details.
A few key features:
- Support for multiple frameworks. Serve models trained in popular formats such as Caffe\*, TensorFlow\*, MXNet\*, and ONNX*.
- Deploy new [model versions](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md#model-version-policy) without changing client code.
- Support for AI accelerators including [Intel Movidius Myriad VPUs](../IE_DG/supported_plugins/VPU),
[GPU](../IE_DG/supported_plugins/CL_DNN), and [HDDL](../IE_DG/supported_plugins/HDDL).
- The server can be enabled both on [Bare Metal Hosts](https://github.com/openvinotoolkit/model_server/blob/main/docs/host.md) or in
[Docker* containers](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md).
- [Kubernetes deployments](https://github.com/openvinotoolkit/model_server/blob/main/deploy). The server can be deployed in a Kubernetes cluster allowing the inference service to scale horizontally and ensure high availability.
- [Model reshaping](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md#model-reshaping). The server supports reshaping models in runtime.
- [Model ensemble](https://github.com/openvinotoolkit/model_server/blob/main/docs/ensemble_scheduler.md) (preview). Connect multiple models to deploy complex processing solutions and reduce overhead of sending data back and forth.
> **NOTE**: OVMS has been tested on CentOS\* and Ubuntu\*. Publicly released [Docker images](https://hub.docker.com/r/openvino/model_server) are based on CentOS.
## Build OpenVINO Model Server
1. Go to the root directory of the repository.
2. Build the Docker image with the command below:
```bash
make docker_build
```
The command generates:
* Image tagged as `openvino/model_server:latest` with CPU, NCS, and HDDL support
* Image tagged as `openvino/model_server:latest-gpu` with CPU, NCS, HDDL, and iGPU support
* `.tar.gz` release package with OVMS binary and necessary libraries in the `./dist` directory.
The release package is compatible with Linux machines on which `glibc` version is greater than or equal to the build image version.
For debugging, the command also generates an image with a suffix `-build`, namely `openvino/model_server-build:latest`.
> **NOTE**: Images include OpenVINO 2021.1 release.
## Run OpenVINO Model Server
Find a detailed description of how to use the OpenVINO Model Server in the [OVMS Quickstart](https://github.com/openvinotoolkit/model_server/blob/main/docs/ovms_quickstart.md).
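As a minimal illustration (the model name and host path below are placeholders; refer to the quickstart for the complete walkthrough), a single model can be served from the prebuilt image like this:
```bash
docker run -d --rm -p 9000:9000 \
  -v /opt/models/resnet50:/models/resnet50 \
  openvino/model_server:latest \
  --model_name resnet50 --model_path /models/resnet50 --port 9000
```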
For more detailed guides on using the Model Server in various scenarios, visit the links below:
* [Models repository configuration](https://github.com/openvinotoolkit/model_server/blob/main/docs/models_repository.md)
* [Using a Docker container](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md)
* [Landing on bare metal or virtual machine](https://github.com/openvinotoolkit/model_server/blob/main/docs/host.md)
* [Performance tuning](https://github.com/openvinotoolkit/model_server/blob/main/docs/performance_tuning.md)
* [Model Ensemble Scheduler](https://github.com/openvinotoolkit/model_server/blob/main/docs/ensemble_scheduler.md)
## API Documentation
### gRPC
OpenVINO&trade; Model Server gRPC API is documented in the proto buffer files in [tensorflow_serving_api](https://github.com/tensorflow/serving/tree/r2.2/tensorflow_serving/apis).
> **NOTE:** The implementations for `Predict`, `GetModelMetadata`, and `GetModelStatus` function calls are currently available.
> These are the most generic function calls and should address most of the usage scenarios.
[Predict proto](https://github.com/tensorflow/serving/blob/r2.2/tensorflow_serving/apis/predict.proto) defines two message specifications: `PredictRequest` and `PredictResponse` used while calling Prediction endpoint.
* `PredictRequest` specifies information about the model spec, that is name and version, and a map of input data serialized via
[TensorProto](https://github.com/tensorflow/tensorflow/blob/r2.2/tensorflow/core/framework/tensor.proto) to a string format.
* `PredictResponse` includes a map of outputs serialized by
[TensorProto](https://github.com/tensorflow/tensorflow/blob/r2.2/tensorflow/core/framework/tensor.proto) and information about the used model spec.
[Get Model Metadata proto](https://github.com/tensorflow/serving/blob/r2.2/tensorflow_serving/apis/get_model_metadata.proto) defines three message definitions used while calling Metadata endpoint:
`SignatureDefMap`, `GetModelMetadataRequest`, `GetModelMetadataResponse`.
A function call `GetModelMetadata` accepts model spec information as input and returns Signature Definition content in a format similar to TensorFlow Serving.
[Get Model Status proto](https://github.com/tensorflow/serving/blob/r2.2/tensorflow_serving/apis/get_model_status.proto) defines three message definitions used while calling Status endpoint:
`GetModelStatusRequest`, `ModelVersionStatus`, `GetModelStatusResponse` that report all exposed versions including their state in their lifecycle.
Refer to the [example client code](https://github.com/openvinotoolkit/model_server/blob/main/example_client) to learn how to use this API and submit the requests using the gRPC interface.
Using the gRPC interface is recommended for optimal performance due to its faster implementation of input data deserialization. It enables you to achieve lower latency, especially with larger input messages like images.
### REST
OpenVINO&trade; Model Server RESTful API follows the documentation from the [TensorFlow Serving REST API](https://www.tensorflow.org/tfx/serving/api_rest).
Both row and column format of the requests are implemented.
> **NOTE**: Just like with gRPC, only the implementations for `Predict`, `GetModelMetadata`, and `GetModelStatus` function calls are currently available.
Only the numerical data types are supported.
Review the example clients below to find out more about how to connect and run inference requests.
The REST API is recommended when the primary goal is to reduce the number of client-side Python dependencies and simplify the application code.
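For example, assuming the server was started with a REST port of 8001 and serves a model named `resnet50`, its status can be queried with a plain HTTP request:
```bash
curl http://localhost:8001/v1/models/resnet50
```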
## Known Limitations
* Currently, `Predict`, `GetModelMetadata`, and `GetModelStatus` calls are implemented using the TensorFlow Serving API.
* `Classify`, `Regress`, and `MultiInference` are not included.
* `Output_filter` is not effective in the `Predict` call. All outputs defined in the model are returned to the clients.
## OpenVINO Model Server Contribution Policy
* All contributed code must be compatible with the [Apache 2](https://www.apache.org/licenses/LICENSE-2.0) license.
* All changes have to pass linter, unit, and functional tests.
* All new features need to be covered by tests.
## References
* [Speed and Scale AI Inference Operations Across Multiple Architectures - webinar recording](https://techdecoded.intel.io/essentials/speed-and-scale-ai-inference-operations-across-multiple-architectures/)
* [OpenVINO&trade;](https://software.intel.com/en-us/openvino-toolkit)
* [TensorFlow Serving](https://github.com/tensorflow/serving)
* [gRPC](https://grpc.io/)
* [RESTful API](https://restfulapi.net/)
* [Inference at Scale in Kubernetes](https://www.intel.ai/inference-at-scale-in-kubernetes)
---
\* Other names and brands may be claimed as the property of others.


@@ -253,13 +253,13 @@ To eliminate operation, nGraph has special method that considers all limitations
When developing a transformation, you need to follow these transformation rules:
### Operation Set (OpSet)
### 1. Operation Set (OpSet)
Use the latest version of OpSet in your transformation. An exception is ConvertOpSetXToOpSetY transformations, where you must use operations from OpSetX and OpSetY.
@snippet example_ngraph_utils.cpp ngraph:include
### Dynamic Shape and Rank
### 2. Dynamic Shape and Rank
nGraph has two types for shape representation:
`ngraph::Shape` - represents static shape.
@@ -368,6 +368,9 @@ Another example shows how multiple matcher passes can be united into single Grap
@snippet src/template_pattern_transformation.cpp matcher_pass:manager2
> **Note:** nGraph used to have the `pass::PassConfig` class for transformation pipeline manipulation.
This mechanism is now obsolete and the `pass::PassConfig` class will be removed in a future release.
## How to debug transformations <a name="how_to_debug_transformations"></a>
The most popular tool for debugging transformations is the `ngraph::pass::VisualizeTree` transformation, which visualizes the `ngraph::Function`.
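For example, a minimal sketch that registers the pass in a pass manager and dumps the function to a file (the output file name is a placeholder; rendering to an image assumes Graphviz is available):

```cpp
#include <memory>

#include <ngraph/function.hpp>
#include <ngraph/pass/manager.hpp>
#include <ngraph/pass/visualize_tree.hpp>

void dump_function(const std::shared_ptr<ngraph::Function>& f) {
    ngraph::pass::Manager manager;
    // Register VisualizeTree with an output file name; the extension selects the format.
    manager.register_pass<ngraph::pass::VisualizeTree>("after_transformations.svg");
    manager.run_passes(f);
}
```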
@@ -435,9 +438,9 @@ The basic transformation test looks like this:
@snippet tests/functional/transformations/template_transformations_test.cpp transformation:test
[ngraph_replace_node]: ../images/ngraph_replace_node.png
[ngraph_insert_node]: ../images/ngraph_insert_node.png
[transformations_structure]: ../images/transformations_structure.png
[register_new_node]: ../images/register_new_node.png
[graph_rewrite_execution]: ../images/graph_rewrite_execution.png
[graph_rewrite_efficient_search]: ../images/graph_rewrite_efficient_search.png
[ngraph_replace_node]: ./img/ngraph_replace_node.png
[ngraph_insert_node]: ./img/ngraph_insert_node.png
[transformations_structure]: ./img/transformations_structure.png
[register_new_node]: ./img/register_new_node.png
[graph_rewrite_execution]: ./img/graph_rewrite_execution.png
[graph_rewrite_efficient_search]: ./img/graph_rewrite_efficient_search.png

View File

@@ -9,9 +9,9 @@
**Detailed description**: For each element from the input tensor calculates corresponding
element in the output tensor with the following formula:
\f[
HSwish(x) = x \frac{min(max(x + 3, 0), 6)}{6}
\f]
\f[
HSwish(x) = x \frac{min(max(x + 3, 0), 6)}{6}
\f]
The HSwish operation is introduced in the following [article](https://arxiv.org/pdf/1905.02244.pdf).

View File

@@ -9,9 +9,9 @@
**Detailed description**: For each element from the input tensor calculates corresponding
element in the output tensor with the following formula:
\f[
SoftPlus(x) = ln(e^{x} + 1.0)
\f]
\f[
SoftPlus(x) = ln(e^{x} + 1.0)
\f]
**Attributes**: *SoftPlus* operation has no attributes.

View File

@@ -78,9 +78,9 @@
**Mathematical Formulation**
\f[
output_{j} = \frac{\sum_{i = 0}^{n}x_{i}}{n}
\f]
\f[
output_{j} = \frac{\sum_{i = 0}^{n}x_{i}}{n}
\f]
**Example**

View File

@@ -70,9 +70,9 @@
**Mathematical Formulation**
\f[
output_{j} = MAX\{ x_{0}, ... x_{i}\}
\f]
\f[
output_{j} = MAX\{ x_{0}, ... x_{i}\}
\f]
**Example**

View File

@@ -4,7 +4,7 @@
## Samples
- [Inference Engine Samples](../IE_DG/Samples_Overview.md)
- [DL Streamer Samples](../IE_DG/Tools_Overview.md)
- [DL Streamer Samples](@ref gst_samples_README)
## Demos
@@ -13,7 +13,7 @@
## Additional Tools
- [Tools for models calibration and accuracy measurement](../IE_DG/Tools_Overview.md)
- A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README), and others
## Pre-Trained Models

View File

@@ -4,6 +4,12 @@ This topic demonstrates how to use the Benchmark C++ Tool to estimate deep learn
> **NOTE:** This topic describes usage of the C++ implementation of the Benchmark Tool. For the Python* implementation, refer to [Benchmark Python* Tool](../../tools/benchmark_tool/README.md).
> **TIP**: You can also work with the Benchmark Tool inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
> performance of deep learning models on various Intel® architecture
> configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components.
> <br>
> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## How It Works
@@ -43,6 +49,7 @@ The application also saves executable graph information serialized to an XML fil
## Run the Tool
Note that the benchmark_app usually produces optimal performance for any device out of the box.
**So in most cases you don't need to tweak the app options explicitly, and the plain device name is enough**, for example, for CPU:
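A typical invocation under that assumption (the model path is a placeholder) is:

```sh
./benchmark_app -m <path_to_model>/model.xml -d CPU
```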

View File

@@ -1,20 +1,6 @@
/*
* Copyright 2017-2019 Intel Corporation.
* The source code, information and material ("Material") contained herein is
* owned by Intel Corporation or its suppliers or licensors, and title to such
* Material remains with Intel Corporation or its suppliers or licensors.
* The Material contains proprietary information of Intel or its suppliers and
* licensors. The Material is protected by worldwide copyright laws and treaty
* provisions.
* No part of the Material may be used, copied, reproduced, modified, published,
* uploaded, posted, transmitted, distributed or disclosed in any way without
* Intel's prior express written permission. No license under any patent,
* copyright or other intellectual property rights in the Material is granted to
* or conferred upon you, either expressly, by implication, inducement, estoppel
* or otherwise.
* Any license under such intellectual property rights must be express and
* approved by Intel in writing.
*/
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "XLinkStringUtils.h"

View File

@@ -1,20 +1,6 @@
/*
* Copyright 2017-2019 Intel Corporation.
* The source code, information and material ("Material") contained herein is
* owned by Intel Corporation or its suppliers or licensors, and title to such
* Material remains with Intel Corporation or its suppliers or licensors.
* The Material contains proprietary information of Intel or its suppliers and
* licensors. The Material is protected by worldwide copyright laws and treaty
* provisions.
* No part of the Material may be used, copied, reproduced, modified, published,
* uploaded, posted, transmitted, distributed or disclosed in any way without
* Intel's prior express written permission. No license under any patent,
* copyright or other intellectual property rights in the Material is granted to
* or conferred upon you, either expressly, by implication, inducement, estoppel
* or otherwise.
* Any license under such intellectual property rights must be express and
* approved by Intel in writing.
*/
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "mvnc_data.h"
#include "mvnc_tool.h"

View File

@@ -4,6 +4,13 @@ This topic demonstrates how to run the Benchmark Python* Tool, which performs in
> **NOTE:** This topic describes usage of the Python implementation of the Benchmark Tool. For the C++ implementation, refer to [Benchmark C++ Tool](../../samples/benchmark_app/README.md).
> **TIP**: You can also work with the Benchmark Tool inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
> performance of deep learning models on various Intel® architecture
> configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components.
> <br>
> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## How It Works
Upon start-up, the application reads command-line parameters and loads a network and images/binary files to the Inference Engine plugin, which is chosen depending on a specified device. The number of infer requests and execution approach depend on the mode defined with the `-api` command-line parameter.
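For instance (the model path is a placeholder), the synchronous and asynchronous modes differ only in the `-api` value:

```sh
python3 benchmark_app.py -m <path_to_model>/model.xml -d CPU -api sync
python3 benchmark_app.py -m <path_to_model>/model.xml -d CPU -api async
```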

View File

@@ -32,7 +32,6 @@ if [ -f /etc/lsb-release ]; then
sudo -E apt update
sudo -E apt-get install -y \
build-essential \
cmake \
curl \
wget \
libssl-dev \
@@ -46,6 +45,8 @@ if [ -f /etc/lsb-release ]; then
automake \
libtool \
autoconf \
shellcheck \
python \
libcairo2-dev \
libpango1.0-dev \
libglib2.0-dev \
@@ -101,13 +102,6 @@ elif [ -f /etc/redhat-release ]; then
sudo -E yum install -y rh-python36
source scl_source enable rh-python36
wget https://cmake.org/files/v3.12/cmake-3.12.3.tar.gz
tar xf cmake-3.12.3.tar.gz
cd cmake-3.12.3
./configure
make -j16
sudo -E make install
echo
echo "FFmpeg is required for processing audio and video streams with OpenCV. Please select your preferred method for installing FFmpeg:"
echo
@@ -135,7 +129,6 @@ elif [ -f /etc/os-release ] && grep -q "raspbian" /etc/os-release; then
sudo -E apt update
sudo -E apt-get install -y \
build-essential \
cmake \
curl \
wget \
libssl-dev \
@@ -166,4 +159,14 @@ elif [ -f /etc/os-release ] && grep -q "raspbian" /etc/os-release; then
fi
else
echo "Unknown OS, please install build dependencies manually"
fi
fi
# cmake 3.13 or higher is required to build OpenVINO
current_cmake_version=$(cmake --version | sed -ne 's/[^0-9]*\(\([0-9]\.\)\{0,4\}[0-9][^.]\).*/\1/p')
required_cmake_ver=3.13
if [ ! "$(printf '%s\n' "$required_cmake_ver" "$current_cmake_version" | sort -V | head -n1)" = "$required_cmake_ver" ]; then
wget "https://github.com/Kitware/CMake/releases/download/v3.18.4/cmake-3.18.4.tar.gz"
tar xf cmake-3.18.4.tar.gz
(cd cmake-3.18.4 && ./bootstrap --parallel="$(nproc --all)" && make --jobs="$(nproc --all)" && sudo make install)
rm -rf cmake-3.18.4 cmake-3.18.4.tar.gz
fi
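# Illustration (not part of the original script): `sort -V` puts the smaller version
# first, so if the head of the version-sorted pair is not the required version,
# the installed cmake is older than 3.13 and gets rebuilt from source above, e.g.:
#   printf '%s\n' 3.13 3.10.2 | sort -V | head -n1   # -> 3.10.2, rebuild needed
#   printf '%s\n' 3.13 3.18.4 | sort -V | head -n1   # -> 3.13, nothing to do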

View File

@@ -1,2 +1,2 @@
numpy
typing
numpy==1.19.4
typing; python_version < '3.6'

View File

@@ -1,10 +1,10 @@
flake8
flake8-comprehensions
flake8-docstrings
flake8-quotes
onnx
pydocstyle
pytest
retrying
tox
wheel
flake8==3.8.4
flake8-comprehensions==3.3.0
flake8-docstrings==1.5.0
flake8-quotes==3.2.0
onnx==1.7.0
pydocstyle==5.1.1
pytest==6.1.2
retrying==1.3.3
tox==3.20.1
wheel==0.34.2

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
"""ngraph module namespace, exposing factory functions for all ops and other classes."""
"""! ngraph module namespace, exposing factory functions for all ops and other classes."""
# noqa: F401
from pkg_resources import get_distribution, DistributionNotFound

View File

@@ -13,16 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
"""ngraph exceptions hierarchy. All exceptions are descendants of NgraphError."""
"""! ngraph exceptions hierarchy. All exceptions are descendants of NgraphError."""
class NgraphError(Exception):
"""Base class for Ngraph exceptions."""
"""! Base class for Ngraph exceptions."""
class UserInputError(NgraphError):
"""User provided unexpected input."""
"""! User provided unexpected input."""
class NgraphTypeError(NgraphError, TypeError):
"""Type mismatch error."""
"""! Type mismatch error."""

View File

@@ -13,14 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
"""nGraph helper functions."""
"""! nGraph helper functions."""
from ngraph.impl import Function
from openvino.inference_engine import IENetwork
def function_from_cnn(cnn_network: IENetwork) -> Function:
"""Get nGraph function from Inference Engine CNN network."""
"""! Get nGraph function from Inference Engine CNN network."""
capsule = cnn_network._get_function_capsule()
ng_function = Function.from_capsule(capsule)
return ng_function
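# Usage sketch (illustrative, not part of this module): obtain an nGraph Function
# from a network read by the Inference Engine. The IR file names are placeholders.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
ng_func = function_from_cnn(net)  # the helper defined above
print([op.get_friendly_name() for op in ng_func.get_ordered_ops()])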

File diff suppressed because it is too large

View File

@@ -14,7 +14,7 @@
# limitations under the License.
# ******************************************************************************
"""Factory functions for all ngraph ops."""
"""! Factory functions for all ngraph ops."""
from typing import Callable, Iterable, List, Optional, Set, Union
import numpy as np
@@ -66,16 +66,16 @@ def batch_to_space(
crops_end: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Perform BatchToSpace operation on the input tensor.
"""! Perform BatchToSpace operation on the input tensor.
BatchToSpace permutes data from the batch dimension of the data tensor into spatial dimensions.
:param data: Node producing the data tensor.
:param block_shape: The sizes of the block of values to be moved.
:param crops_begin: Specifies the amount to crop from the beginning along each axis of `data`.
:param crops_end: Specifies the amount to crop from the end along each axis of `data`.
:param name: Optional output node name.
:return: The new node performing a BatchToSpace operation.
@param data: Node producing the data tensor.
@param block_shape: The sizes of the block of values to be moved.
@param crops_begin: Specifies the amount to crop from the beginning along each axis of `data`.
@param crops_end: Specifies the amount to crop from the end along each axis of `data`.
@param name: Optional output node name.
@return The new node performing a BatchToSpace operation.
"""
return _get_node_factory_opset2().create(
"BatchToSpace", as_nodes(data, block_shape, crops_begin, crops_end)
@@ -84,18 +84,18 @@ def batch_to_space(
@unary_op
def gelu(node: NodeInput, name: Optional[str] = None) -> Node:
r"""Perform Gaussian Error Linear Unit operation element-wise on data from input node.
r"""! Perform Gaussian Error Linear Unit operation element-wise on data from input node.
Computes GELU function:
.. math:: f(x) = 0.5\cdot x\cdot(1 + erf( \dfrac{x}{\sqrt{2}})
\f[ f(x) = 0.5\cdot x\cdot(1 + erf( \dfrac{x}{\sqrt{2}}) \f]
For more information refer to:
`Gaussian Error Linear Unit (GELU) <https://arxiv.org/pdf/1606.08415.pdf>`_
:param node: Input tensor. One of: input node, array or scalar.
:param name: Optional output node name.
:return: The new node performing a GELU operation on its input data element-wise.
@param node: Input tensor. One of: input node, array or scalar.
@param name: Optional output node name.
@return The new node performing a GELU operation on its input data element-wise.
"""
return _get_node_factory_opset2().create("Gelu", [node])
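# Usage sketch (illustrative, not part of this module): build a GELU node with the
# factory functions; the shape and names are arbitrary placeholders.
import numpy as np
import ngraph as ng

x = ng.parameter([2, 3], dtype=np.float32, name="x")
gelu_node = ng.gelu(x)  # element-wise GELU applied to the parameter node
print(gelu_node)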
@@ -108,19 +108,19 @@ def mvn(
eps: float = 1e-9,
name: str = None,
) -> Node:
r"""Perform Mean Variance Normalization operation on data from input node.
r"""! Perform Mean Variance Normalization operation on data from input node.
Computes MVN on the input tensor :code:`data` (called `X`) using formula:
Computes MVN on the input tensor `data` (called `X`) using formula:
.. math:: Y = \dfrac{X-EX}{\sqrt{E(X-EX)^2}}
\f[ Y = \dfrac{X-EX}{\sqrt{E(X-EX)^2}} \f]
:param data: The node with data tensor.
:param across_channels: Denotes if mean values are shared across channels.
:param normalize_variance: Denotes whether to perform variance normalization.
:param eps: The number added to the variance to avoid division by zero
@param data: The node with data tensor.
@param across_channels: Denotes if mean values are shared across channels.
@param normalize_variance: Denotes whether to perform variance normalization.
@param eps: The number added to the variance to avoid division by zero
when normalizing the value. Scalar value.
:param name: Optional output node name.
:return: The new node performing a MVN operation on input tensor.
@param name: Optional output node name.
@return The new node performing a MVN operation on input tensor.
"""
return _get_node_factory_opset2().create(
"MVN",
@@ -131,12 +131,12 @@ def mvn(
@nameable_op
def reorg_yolo(input: Node, stride: List[int], name: Optional[str] = None) -> Node:
"""Return a node which produces the ReorgYolo operation.
"""! Return a node which produces the ReorgYolo operation.
:param input: Input data
:param stride: Stride to reorganize input by
:param name: Optional name for output node.
:return: ReorgYolo node
@param input: Input data
@param stride: Stride to reorganize input by
@param name: Optional name for output node.
@return ReorgYolo node
"""
return _get_node_factory_opset2().create("ReorgYolo", [input], {"stride": stride})
@@ -150,14 +150,14 @@ def roi_pooling(
method: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces an ROIPooling operation.
"""! Return a node which produces an ROIPooling operation.
:param input: Input feature map {N, C, ...}
:param coords: Coordinates of bounding boxes
:param output_size: Height/Width of ROI output features (shape)
:param spatial_scale: Ratio of input feature map over input image size (float)
:param method: Method of pooling - string: "max" or "bilinear"
:return: ROIPooling node
@param input: Input feature map {N, C, ...}
@param coords: Coordinates of bounding boxes
@param output_size: Height/Width of ROI output features (shape)
@param spatial_scale: Ratio of input feature map over input image size (float)
@param method: Method of pooling - string: "max" or "bilinear"
@return ROIPooling node
"""
method = method.lower()
return _get_node_factory_opset2().create(
@@ -175,18 +175,18 @@ def space_to_batch(
pads_end: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Perform SpaceToBatch operation on the input tensor.
"""! Perform SpaceToBatch operation on the input tensor.
SpaceToBatch permutes data tensor blocks of spatial data into batch dimension.
The operator returns a copy of the input tensor where values from spatial blocks dimensions
are moved in the batch dimension
:param data: Node producing the data tensor.
:param block_shape: The sizes of the block of values to be moved.
:param pads_begin: Specifies the padding for the beginning along each axis of `data`.
:param pads_end: Specifies the padding for the ending along each axis of `data`.
:param name: Optional output node name.
:return: The new node performing a SpaceToBatch operation.
@param data: Node producing the data tensor.
@param block_shape: The sizes of the block of values to be moved.
@param pads_begin: Specifies the padding for the beginning along each axis of `data`.
@param pads_end: Specifies the padding for the ending along each axis of `data`.
@param name: Optional output node name.
@return The new node performing a SpaceToBatch operation.
"""
return _get_node_factory_opset2().create(
"SpaceToBatch", as_nodes(data, block_shape, pads_begin, pads_end)

View File

@@ -14,7 +14,7 @@
# limitations under the License.
# ******************************************************************************
"""Factory functions for all ngraph ops."""
"""! Factory functions for all ngraph ops."""
from typing import Callable, Iterable, List, Optional, Set, Union
import numpy as np
@@ -60,12 +60,12 @@ _get_node_factory_opset3 = partial(_get_node_factory, "opset3")
@nameable_op
def assign(new_value: NodeInput, variable_id: str, name: Optional[str] = None) -> Node:
"""Return a node which produces the Assign operation.
"""! Return a node which produces the Assign operation.
:param new_value: Node producing a value to be assigned to a variable.
:param variable_id: Id of a variable to be updated.
:param name: Optional name for output node.
:return: Assign node
@param new_value: Node producing a value to be assigned to a variable.
@param variable_id: Id of a variable to be updated.
@param name: Optional name for output node.
@return Assign node
"""
return _get_node_factory_opset3().create(
"Assign",
@@ -82,16 +82,16 @@ def broadcast(
broadcast_spec: str = "NUMPY",
name: Optional[str] = None,
) -> Node:
"""Create a node which broadcasts the input node's values along specified axes to a desired shape.
"""! Create a node which broadcasts the input node's values along specified axes to a desired shape.
:param data: The node with input tensor data.
:param target_shape: The node with a new shape we want to broadcast tensor to.
:param axes_mapping: The node with axis positions (0-based) in the result
@param data: The node with input tensor data.
@param target_shape: The node with a new shape we want to broadcast tensor to.
@param axes_mapping: The node with axis positions (0-based) in the result
that are being broadcast.
:param broadcast_spec: The type of broadcasting that specifies mapping of input tensor axes
@param broadcast_spec: The type of broadcasting that specifies mapping of input tensor axes
to output shape axes. Range of values: NUMPY, EXPLICIT, BIDIRECTIONAL.
:param name: Optional new name for output node.
:return: New node with broadcast shape.
@param name: Optional new name for output node.
@return New node with broadcast shape.
"""
inputs = as_nodes(data, target_shape)
if broadcast_spec.upper() == "EXPLICIT":
@@ -109,15 +109,15 @@ def bucketize(
with_right_bound: bool = True,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces the Bucketize operation.
"""! Return a node which produces the Bucketize operation.
:param data: Input data to bucketize
:param buckets: 1-D of sorted unique boundaries for buckets
:param output_type: Output tensor type, "i64" or "i32", defaults to i64
:param with_right_bound: indicates whether bucket includes the right or left
@param data: Input data to bucketize
@param buckets: 1-D of sorted unique boundaries for buckets
@param output_type: Output tensor type, "i64" or "i32", defaults to i64
@param with_right_bound: indicates whether bucket includes the right or left
edge of interval. default true = includes right edge
:param name: Optional name for output node.
:return: Bucketize node
@param name: Optional name for output node.
@return Bucketize node
"""
return _get_node_factory_opset3().create(
"Bucketize",
@@ -134,13 +134,13 @@ def cum_sum(
reverse: bool = False,
name: Optional[str] = None,
) -> Node:
"""Construct a cumulative summation operation.
"""! Construct a cumulative summation operation.
:param arg: The tensor to be summed.
:param axis: zero dimension tensor specifying axis position along which sum will be performed.
:param exclusive: if set to true, the top element is not included
:param reverse: if set to true, will perform the sums in reverse direction
:return: New node performing the operation
@param arg: The tensor to be summed.
@param axis: zero dimension tensor specifying axis position along which sum will be performed.
@param exclusive: if set to true, the top element is not included
@param reverse: if set to true, will perform the sums in reverse direction
@return New node performing the operation
"""
return _get_node_factory_opset3().create(
"CumSum", as_nodes(arg, axis), {"exclusive": exclusive, "reverse": reverse}
@@ -156,15 +156,15 @@ def embedding_bag_offsets_sum(
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs sums of bags of embeddings without the intermediate embeddings.
"""! Return a node which performs sums of bags of embeddings without the intermediate embeddings.
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param offsets: Tensor containing the starting index positions of each bag in indices.
:param per_sample_weights: Tensor with weights for each sample.
:param default_index: Scalar containing default index in embedding table to fill empty bags.
:param name: Optional name for output node.
:return: The new node which performs EmbeddingBagOffsetsSum
@param emb_table: Tensor containing the embedding lookup table.
@param indices: Tensor with indices.
@param offsets: Tensor containing the starting index positions of each bag in indices.
@param per_sample_weights: Tensor with weights for each sample.
@param default_index: Scalar containing default index in embedding table to fill empty bags.
@param name: Optional name for output node.
@return The new node which performs EmbeddingBagOffsetsSum
"""
inputs = [emb_table, as_node(indices), as_node(offsets)]
if per_sample_weights is not None:
@@ -183,16 +183,16 @@ def embedding_bag_packed_sum(
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return an EmbeddingBagPackedSum node.
"""! Return an EmbeddingBagPackedSum node.
EmbeddingSegmentsSum constructs an output tensor by replacing every index in a given
input tensor with a row (from the weights matrix) at that index
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param per_sample_weights: Weights to be multiplied with embedding table.
:param name: Optional name for output node.
:return: EmbeddingBagPackedSum node
@param emb_table: Tensor containing the embedding lookup table.
@param indices: Tensor with indices.
@param per_sample_weights: Weights to be multiplied with embedding table.
@param name: Optional name for output node.
@return EmbeddingBagPackedSum node
"""
inputs = [as_node(emb_table), as_node(indices)]
if per_sample_weights is not None:
@@ -211,19 +211,19 @@ def embedding_segments_sum(
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return an EmbeddingSegmentsSum node.
"""! Return an EmbeddingSegmentsSum node.
EmbeddingSegmentsSum constructs an output tensor by replacing every index in a given
input tensor with a row (from the weights matrix) at that index
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param segment_ids: Tensor with indices into the output Tensor
:param num_segments: Tensor with number of segments.
:param default_index: Scalar containing default index in embedding table to fill empty bags.
:param per_sample_weights: Weights to be multiplied with embedding table.
:param name: Optional name for output node.
:return: EmbeddingSegmentsSum node
@param emb_table: Tensor containing the embedding lookup table.
@param indices: Tensor with indices.
@param segment_ids: Tensor with indices into the output Tensor
@param num_segments: Tensor with number of segments.
@param default_index: Scalar containing default index in embedding table to fill empty bags.
@param per_sample_weights: Weights to be multiplied with embedding table.
@param name: Optional name for output node.
@return EmbeddingSegmentsSum node
"""
inputs = [as_node(emb_table), as_node(indices), as_node(segment_ids)]
if per_sample_weights is not None:
@@ -248,15 +248,15 @@ def extract_image_patches(
auto_pad: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces the ExtractImagePatches operation.
"""! Return a node which produces the ExtractImagePatches operation.
:param image: 4-D Input data to extract image patches.
:param sizes: Patch size in the format of [size_rows, size_cols].
:param strides: Patch movement stride in the format of [stride_rows, stride_cols]
:param rates: Element selection rate for creating a patch.
:param auto_pad: Padding type.
:param name: Optional name for output node.
:return: ExtractImagePatches node
@param image: 4-D Input data to extract image patches.
@param sizes: Patch size in the format of [size_rows, size_cols].
@param strides: Patch movement stride in the format of [stride_rows, stride_cols]
@param rates: Element selection rate for creating a patch.
@param auto_pad: Padding type.
@param name: Optional name for output node.
@return ExtractImagePatches node
"""
return _get_node_factory_opset3().create(
"ExtractImagePatches",
@@ -280,36 +280,36 @@ def gru_cell(
linear_before_reset: bool = False,
name: Optional[str] = None,
) -> Node:
"""Perform GRUCell operation on the tensor from input node.
"""! Perform GRUCell operation on the tensor from input node.
GRUCell represents a single GRU Cell that computes the output
using the formula described in the paper: https://arxiv.org/abs/1406.1078
Note this class represents only single *cell* and not whole *layer*.
:param X: The input tensor with shape: [batch_size, input_size].
:param initial_hidden_state: The hidden state tensor at current time step with shape:
@param X: The input tensor with shape: [batch_size, input_size].
@param initial_hidden_state: The hidden state tensor at current time step with shape:
[batch_size, hidden_size].
:param W: The weights for matrix multiplication, gate order: zrh.
@param W: The weights for matrix multiplication, gate order: zrh.
Shape: [3*hidden_size, input_size].
:param R: The recurrence weights for matrix multiplication.
@param R: The recurrence weights for matrix multiplication.
Shape: [3*hidden_size, hidden_size].
:param B: The sum of biases (weight and recurrence).
@param B: The sum of biases (weight and recurrence).
For linear_before_reset set True the shape is [4*hidden_size].
Otherwise the shape is [3*hidden_size].
:param hidden_size: The number of hidden units for recurrent cell.
@param hidden_size: The number of hidden units for recurrent cell.
Specifies hidden state size.
:param activations: The vector of activation functions used inside recurrent cell.
:param activation_alpha: The vector of alpha parameters for activation functions in
@param activations: The vector of activation functions used inside recurrent cell.
@param activation_alpha: The vector of alpha parameters for activation functions in
order respective to activation list.
:param activation_beta: The vector of beta parameters for activation functions in order
@param activation_beta: The vector of beta parameters for activation functions in order
respective to activation list.
:param clip: The value defining clipping range [-clip, clip] on input of
@param clip: The value defining clipping range [-clip, clip] on input of
activation functions.
:param linear_before_reset: Flag denotes if the layer behaves according to the modification
@param linear_before_reset: Flag denotes if the layer behaves according to the modification
of GRUCell described in the formula in the ONNX documentation.
:param name: Optional output node name.
:returns: The new node performing a GRUCell operation on tensor from input node.
@param name: Optional output node name.
@return The new node performing a GRUCell operation on tensor from input node.
"""
if activations is None:
activations = ["relu", "sigmoid", "tanh"]
@@ -342,19 +342,19 @@ def non_max_suppression(
output_type: str = "i64",
name: Optional[str] = None,
) -> Node:
"""Return a node which performs NonMaxSuppression.
"""! Return a node which performs NonMaxSuppression.
:param boxes: Tensor with box coordinates.
:param scores: Tensor with box scores.
:param max_output_boxes_per_class: Tensor Specifying maximum number of boxes
@param boxes: Tensor with box coordinates.
@param scores: Tensor with box scores.
@param max_output_boxes_per_class: Tensor Specifying maximum number of boxes
to be selected per class.
:param iou_threshold: Tensor specifying intersection over union threshold
:param score_threshold: Tensor specifying minimum score to consider box for the processing.
:param box_encoding: Format of boxes data encoding.
:param sort_result_descending: Flag that specifies whenever it is necessary to sort selected
@param iou_threshold: Tensor specifying intersection over union threshold
@param score_threshold: Tensor specifying minimum score to consider box for the processing.
@param box_encoding: Format of boxes data encoding.
@param sort_result_descending: Flag that specifies whenever it is necessary to sort selected
boxes across batches or not.
:param output_type: Output element type.
:return: The new node which performs NonMaxSuppression
@param output_type: Output element type.
@return The new node which performs NonMaxSuppression
"""
if max_output_boxes_per_class is None:
max_output_boxes_per_class = make_constant_node(0, np.int64)
@@ -375,12 +375,12 @@ def non_max_suppression(
@nameable_op
def non_zero(data: NodeInput, output_type: str = "i64", name: Optional[str] = None,) -> Node:
"""Return the indices of the elements that are non-zero.
"""! Return the indices of the elements that are non-zero.
:param data: Input data.
:param output_type: Output tensor type.
@param data: Input data.
@param output_type: Output tensor type.
:return: The new node which performs NonZero
@return The new node which performs NonZero
"""
return _get_node_factory_opset3().create(
"NonZero",
@@ -391,12 +391,12 @@ def non_zero(data: NodeInput, output_type: str = "i64", name: Optional[str] = No
@nameable_op
def read_value(init_value: NodeInput, variable_id: str, name: Optional[str] = None) -> Node:
"""Return a node which produces the Assign operation.
"""! Return a node which produces the Assign operation.
:param init_value: Node producing a value to be returned instead of an unassigned variable.
:param variable_id: Id of a variable to be read.
:param name: Optional name for output node.
:return: ReadValue node
@param init_value: Node producing a value to be returned instead of an unassigned variable.
@param variable_id: Id of a variable to be read.
@param name: Optional name for output node.
@return ReadValue node
"""
return _get_node_factory_opset3().create(
"ReadValue",
@@ -419,31 +419,31 @@ def rnn_cell(
clip: float = 0.0,
name: Optional[str] = None,
) -> Node:
"""Perform RNNCell operation on tensor from input node.
"""! Perform RNNCell operation on tensor from input node.
It follows notation and equations defined as in ONNX standard:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#RNN
Note this class represents only single *cell* and not whole RNN *layer*.
:param X: The input tensor with shape: [batch_size, input_size].
:param initial_hidden_state: The hidden state tensor at current time step with shape:
@param X: The input tensor with shape: [batch_size, input_size].
@param initial_hidden_state: The hidden state tensor at current time step with shape:
[batch_size, hidden_size].
:param W: The weight tensor with shape: [hidden_size, input_size].
:param R: The recurrence weight tensor with shape: [hidden_size,
@param W: The weight tensor with shape: [hidden_size, input_size].
@param R: The recurrence weight tensor with shape: [hidden_size,
hidden_size].
:param B: The bias tensor for input gate with shape: [2*hidden_size].
:param hidden_size: The number of hidden units for recurrent cell.
@param B: The bias tensor for input gate with shape: [2*hidden_size].
@param hidden_size: The number of hidden units for recurrent cell.
Specifies hidden state size.
:param activations: The vector of activation functions used inside recurrent cell.
:param activation_alpha: The vector of alpha parameters for activation functions in
@param activations: The vector of activation functions used inside recurrent cell.
@param activation_alpha: The vector of alpha parameters for activation functions in
order respective to activation list.
:param activation_beta: The vector of beta parameters for activation functions in order
@param activation_beta: The vector of beta parameters for activation functions in order
respective to activation list.
:param clip: The value defining clipping range [-clip, clip] on input of
@param clip: The value defining clipping range [-clip, clip] on input of
activation functions.
:param name: Optional output node name.
:returns: The new node performing a RNNCell operation on tensor from input node.
@param name: Optional output node name.
@return The new node performing a RNNCell operation on tensor from input node.
"""
if activations is None:
activations = ["sigmoid", "tanh"]
@@ -475,20 +475,20 @@ def roi_align(
mode: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs ROIAlign.
"""! Return a node which performs ROIAlign.
:param data: Input data.
:param rois: RoIs (Regions of Interest) to pool over.
:param batch_indices: Tensor with each element denoting the index of
@param data: Input data.
@param rois: RoIs (Regions of Interest) to pool over.
@param batch_indices: Tensor with each element denoting the index of
the corresponding image in the batch.
:param pooled_h: Height of the ROI output feature map.
:param pooled_w: Width of the ROI output feature map.
:param sampling_ratio: Number of bins over height and width to use to calculate
@param pooled_h: Height of the ROI output feature map.
@param pooled_w: Width of the ROI output feature map.
@param sampling_ratio: Number of bins over height and width to use to calculate
each output feature map element.
:param spatial_scale: Multiplicative spatial scale factor to translate ROI coordinates.
:param mode: Method to perform pooling to produce output feature map elements.
@param spatial_scale: Multiplicative spatial scale factor to translate ROI coordinates.
@param mode: Method to perform pooling to produce output feature map elements.
:return: The new node which performs ROIAlign
@return The new node which performs ROIAlign
"""
inputs = as_nodes(data, rois, batch_indices)
attributes = {
@@ -509,7 +509,7 @@ def scatter_elements_update(
axis: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces a ScatterElementsUpdate operation.
"""! Return a node which produces a ScatterElementsUpdate operation.
ScatterElementsUpdate creates a copy of the first input tensor with updated elements
specified with second and third input tensors.
@@ -521,11 +521,11 @@ def scatter_elements_update(
corresponding entry in `indices` and the index-value for dimension not equal
to `axis` is obtained from the index of the entry itself.
:param data: The input tensor to be updated.
:param indices: The tensor with indexes which will be updated.
:param updates: The tensor with update values.
:param axis: The axis for scatter.
:return: ScatterElementsUpdate node
@param data: The input tensor to be updated.
@param indices: The tensor with indexes which will be updated.
@param updates: The tensor with update values.
@param axis: The axis for scatter.
@return ScatterElementsUpdate node
"""
return _get_node_factory_opset3().create(
"ScatterElementsUpdate", as_nodes(data, indices, updates, axis)
@@ -536,15 +536,15 @@ def scatter_elements_update(
def scatter_update(
data: Node, indices: NodeInput, updates: NodeInput, axis: NodeInput, name: Optional[str] = None
) -> Node:
"""Return a node which produces a ScatterUpdate operation.
"""! Return a node which produces a ScatterUpdate operation.
ScatterUpdate sets new values to slices from data addressed by indices.
:param data: The input tensor to be updated.
:param indices: The tensor with indexes which will be updated.
:param updates: The tensor with update values.
:param axis: The axis at which elements will be updated.
:return: ScatterUpdate node
@param data: The input tensor to be updated.
@param indices: The tensor with indexes which will be updated.
@param updates: The tensor with update values.
@param axis: The axis at which elements will be updated.
@return ScatterUpdate node
"""
return _get_node_factory_opset3().create(
"ScatterUpdate",
@@ -554,11 +554,11 @@ def scatter_update(
@nameable_op
def shape_of(data: NodeInput, output_type: str = "i64", name: Optional[str] = None) -> Node:
"""Return a node which produces a tensor containing the shape of its input data.
"""! Return a node which produces a tensor containing the shape of its input data.
:param data: The tensor containing the input data.
@param data: The tensor containing the input data.
:para output_type: Output element type.
:return: ShapeOf node
@return ShapeOf node
"""
return _get_node_factory_opset3().create(
"ShapeOf",
@@ -569,21 +569,20 @@ def shape_of(data: NodeInput, output_type: str = "i64", name: Optional[str] = No
@nameable_op
def shuffle_channels(data: Node, axis: int, groups: int, name: Optional[str] = None) -> Node:
"""Perform permutation on data in the channel dimension of the input tensor.
"""! Perform permutation on data in the channel dimension of the input tensor.
The operation is the equivalent with the following transformation of the input tensor
:code:`data` of shape [N, C, H, W]:
`data` of shape [N, C, H, W]:
:code:`data_reshaped` = reshape(:code:`data`, [N, group, C / group, H * W])
`data_reshaped` = reshape(`data`, [N, group, C / group, H * W])
:code:`data_trnasposed` = transpose(:code:`data_reshaped`, [0, 2, 1, 3])
`data_trnasposed` = transpose(`data_reshaped`, [0, 2, 1, 3])
:code:`output` = reshape(:code:`data_trnasposed`, [N, C, H, W])
`output` = reshape(`data_trnasposed`, [N, C, H, W])
For example:
.. code-block:: python
~~~~~~~~~~~~~{.py}
Inputs: tensor of shape [1, 6, 2, 2]
data = [[[[ 0., 1.], [ 2., 3.]],
@@ -604,15 +603,16 @@ def shuffle_channels(data: Node, axis: int, groups: int, name: Optional[str] = N
[[ 4., 5.], [ 6., 7.]],
[[12., 13.], [14., 15.]],
[[20., 21.], [22., 23.]]]]
~~~~~~~~~~~~~
:param data: The node with input tensor.
:param axis: Channel dimension index in the data tensor.
@param data: The node with input tensor.
@param axis: Channel dimension index in the data tensor.
A negative value means that the index should be calculated
from the back of the input data shape.
:param group:The channel dimension specified by the axis parameter
@param group: The channel dimension specified by the axis parameter
should be split into this number of groups.
:param name: Optional output node name.
:return: The new node performing a permutation on data in the channel dimension
@param name: Optional output node name.
@return The new node performing a permutation on data in the channel dimension
of the input tensor.
"""
return _get_node_factory_opset3().create(
@@ -630,15 +630,15 @@ def topk(
index_element_type: str = "i32",
name: Optional[str] = None,
) -> Node:
"""Return a node which performs TopK.
"""! Return a node which performs TopK.
:param data: Input data.
:param k: K.
:param axis: TopK Axis.
:param mode: Compute TopK largest ('max') or smallest ('min')
:param sort: Order of output elements (sort by: 'none', 'index' or 'value')
:param index_element_type: Type of output tensor with indices.
:return: The new node which performs TopK (both indices and values)
@param data: Input data.
@param k: K.
@param axis: TopK Axis.
@param mode: Compute TopK largest ('max') or smallest ('min')
@param sort: Order of output elements (sort by: 'none', 'index' or 'value')
@param index_element_type: Type of output tensor with indices.
@return The new node which performs TopK (both indices and values)
"""
return _get_node_factory_opset3().create(
"TopK",

View File

@@ -14,7 +14,7 @@
# limitations under the License.
# ******************************************************************************
"""Factory functions for all ngraph ops."""
"""! Factory functions for all ngraph ops."""
from typing import Callable, Iterable, List, Optional, Set, Union
import numpy as np
@@ -70,17 +70,17 @@ def ctc_loss(
unique: bool = False,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs CTCLoss.
"""! Return a node which performs CTCLoss.
:param logits: 3-D tensor of logits.
:param logit_length: 1-D tensor of lengths for each object from a batch.
:param labels: 2-D tensor of labels for which likelihood is estimated using logits.
:param label_length: 1-D tensor of length for each label sequence.
:param blank_index: Scalar used to mark a blank index.
:param preprocess_collapse_repeated: Flag for preprocessing labels before loss calculation.
:param ctc_merge_repeated: Flag for merging repeated characters in a potential alignment.
:param unique: Flag to find unique elements in a target.
:return: The new node which performs CTCLoss
@param logits: 3-D tensor of logits.
@param logit_length: 1-D tensor of lengths for each object from a batch.
@param labels: 2-D tensor of labels for which likelihood is estimated using logits.
@param label_length: 1-D tensor of length for each label sequence.
@param blank_index: Scalar used to mark a blank index.
@param preprocess_collapse_repeated: Flag for preprocessing labels before loss calculation.
@param ctc_merge_repeated: Flag for merging repeated characters in a potential alignment.
@param unique: Flag to find unique elements in a target.
@return The new node which performs CTCLoss
"""
if blank_index is not None:
inputs = as_nodes(logits, logit_length, labels, label_length, blank_index)
@@ -108,19 +108,19 @@ def non_max_suppression(
output_type: str = "i64",
name: Optional[str] = None,
) -> Node:
"""Return a node which performs NonMaxSuppression.
"""! Return a node which performs NonMaxSuppression.
:param boxes: Tensor with box coordinates.
:param scores: Tensor with box scores.
:param max_output_boxes_per_class: Tensor Specifying maximum number of boxes
@param boxes: Tensor with box coordinates.
@param scores: Tensor with box scores.
@param max_output_boxes_per_class: Tensor Specifying maximum number of boxes
to be selected per class.
:param iou_threshold: Tensor specifying intersection over union threshold
:param score_threshold: Tensor specifying minimum score to consider box for the processing.
:param box_encoding: Format of boxes data encoding.
:param sort_result_descending: Flag that specifies whenever it is necessary to sort selected
@param iou_threshold: Tensor specifying intersection over union threshold
@param score_threshold: Tensor specifying minimum score to consider box for the processing.
@param box_encoding: Format of boxes data encoding.
@param sort_result_descending: Flag that specifies whenever it is necessary to sort selected
boxes across batches or not.
:param output_type: Output element type.
:return: The new node which performs NonMaxSuppression
@param output_type: Output element type.
@return The new node which performs NonMaxSuppression
"""
if max_output_boxes_per_class is None:
max_output_boxes_per_class = make_constant_node(0, np.int64)
@@ -141,30 +141,30 @@ def non_max_suppression(
@nameable_op
def softplus(data: NodeInput, name: Optional[str] = None) -> Node:
"""Apply SoftPlus operation on each element of input tensor.
"""! Apply SoftPlus operation on each element of input tensor.
:param data: The tensor providing input data.
:return: The new node with SoftPlus operation applied on each element.
@param data: The tensor providing input data.
@return The new node with SoftPlus operation applied on each element.
"""
return _get_node_factory_opset4().create("SoftPlus", as_nodes(data), {})
@nameable_op
def mish(data: NodeInput, name: Optional[str] = None,) -> Node:
"""Return a node which performs Mish.
"""! Return a node which performs Mish.
:param data: Tensor with input data floating point type.
:return: The new node which performs Mish
@param data: Tensor with input data floating point type.
@return The new node which performs Mish
"""
return _get_node_factory_opset4().create("Mish", as_nodes(data), {})
@nameable_op
def hswish(data: NodeInput, name: Optional[str] = None,) -> Node:
"""Return a node which performs HSwish (hard version of Swish).
"""! Return a node which performs HSwish (hard version of Swish).
:param data: Tensor with input data floating point type.
:return: The new node which performs HSwish
@param data: Tensor with input data floating point type.
@return The new node which performs HSwish
"""
return _get_node_factory_opset4().create("HSwish", as_nodes(data), {})
@@ -175,10 +175,10 @@ def swish(
beta: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return a node which performing Swish activation function Swish(x, beta=1.0) = x * sigmoid(x * beta)).
"""! Return a node which performing Swish activation function Swish(x, beta=1.0) = x * sigmoid(x * beta)).
:param data: Tensor with input data floating point type.
:return: The new node which performs Swish
@param data: Tensor with input data floating point type.
@return The new node which performs Swish
"""
if beta is None:
beta = make_constant_node(1.0, np.float32)
@@ -187,33 +187,33 @@ def swish(
@nameable_op
def acosh(node: NodeInput, name: Optional[str] = None) -> Node:
"""Apply hyperbolic inverse cosine function on the input node element-wise.
"""! Apply hyperbolic inverse cosine function on the input node element-wise.
:param node: One of: input node, array or scalar.
:param name: Optional new name for output node.
:return: New node with arccosh operation applied on it.
@param node: One of: input node, array or scalar.
@param name: Optional new name for output node.
@return New node with arccosh operation applied on it.
"""
return _get_node_factory_opset4().create("Acosh", [node])
@nameable_op
def asinh(node: NodeInput, name: Optional[str] = None) -> Node:
"""Apply hyperbolic inverse sinus function on the input node element-wise.
"""! Apply hyperbolic inverse sinus function on the input node element-wise.
:param node: One of: input node, array or scalar.
:param name: Optional new name for output node.
:return: New node with arcsinh operation applied on it.
@param node: One of: input node, array or scalar.
@param name: Optional new name for output node.
@return New node with arcsinh operation applied on it.
"""
return _get_node_factory_opset4().create("Asinh", [node])
@nameable_op
def atanh(node: NodeInput, name: Optional[str] = None) -> Node:
"""Apply hyperbolic inverse tangent function on the input node element-wise.
"""! Apply hyperbolic inverse tangent function on the input node element-wise.
:param node: One of: input node, array or scalar.
:param name: Optional new name for output node.
:return: New node with arctanh operation applied on it.
@param node: One of: input node, array or scalar.
@param name: Optional new name for output node.
@return New node with arctanh operation applied on it.
"""
return _get_node_factory_opset4().create("Atanh", [node])
@@ -226,13 +226,13 @@ def proposal(
attrs: dict,
name: Optional[str] = None,
) -> Node:
"""Filter bounding boxes and outputs only those with the highest prediction confidence.
"""! Filter bounding boxes and outputs only those with the highest prediction confidence.
:param class_probs: 4D input floating point tensor with class prediction scores.
:param bbox_deltas: 4D input floating point tensor with corrected predictions of bounding boxes
:param image_shape: The 1D input tensor with 3 or 4 elements describing image shape.
:param attrs: The dictionary containing key, value pairs for attributes.
:param name: Optional name for the output node.
@param class_probs: 4D input floating point tensor with class prediction scores.
@param bbox_deltas: 4D input floating point tensor with corrected predictions of bounding boxes
@param image_shape: The 1D input tensor with 3 or 4 elements describing image shape.
@param attrs: The dictionary containing key, value pairs for attributes.
@param name: Optional name for the output node.
* base_size The size of the anchor to which scale and ratio attributes are applied.
Range of values: a positive unsigned integer number
Default value: None
@@ -296,7 +296,7 @@ def proposal(
Default value: "" (empty string)
Required: no
Example of attribute dictionary:
.. code-block:: python
~~~~~~~~~~~~~~~~~~~~~~~~{.py}
# just required ones
attrs = {
'base_size': 85,
@@ -308,8 +308,9 @@ def proposal(
'ratio': [0.1, 1.5, 2.0, 2.5],
'scale': [2, 3, 3, 4],
}
~~~~~~~~~~~~~~~~~~~~~~~~
Optional attributes which are absent from dictionary will be set with corresponding default.
:return: Node representing Proposal operation.
@return Node representing Proposal operation.
"""
requirements = [
("base_size", True, np.unsignedinteger, is_positive_value),
@@ -339,13 +340,13 @@ def proposal(
def reduce_l1(
node: NodeInput, reduction_axes: NodeInput, keep_dims: bool = False, name: Optional[str] = None
) -> Node:
"""L1-reduction operation on input tensor, eliminating the specified reduction axes.
"""! L1-reduction operation on input tensor, eliminating the specified reduction axes.
:param node: The tensor we want to mean-reduce.
:param reduction_axes: The axes to eliminate through mean operation.
:param keep_dims: If set to True it holds axes that are used for reduction
:param name: Optional name for output node.
:return: The new node performing mean-reduction operation.
@param node: The tensor we want to mean-reduce.
@param reduction_axes: The axes to eliminate through mean operation.
@param keep_dims: If set to True it holds axes that are used for reduction
@param name: Optional name for output node.
@return The new node performing mean-reduction operation.
"""
return _get_node_factory_opset4().create(
"ReduceL1", as_nodes(node, reduction_axes), {"keep_dims": keep_dims}
@@ -356,13 +357,13 @@ def reduce_l1(
def reduce_l2(
node: NodeInput, reduction_axes: NodeInput, keep_dims: bool = False, name: Optional[str] = None
) -> Node:
"""L2-reduction operation on input tensor, eliminating the specified reduction axes.
"""! L2-reduction operation on input tensor, eliminating the specified reduction axes.
:param node: The tensor we want to mean-reduce.
:param reduction_axes: The axes to eliminate through mean operation.
:param keep_dims: If set to True it holds axes that are used for reduction
:param name: Optional name for output node.
:return: The new node performing mean-reduction operation.
@param node: The tensor we want to mean-reduce.
@param reduction_axes: The axes to eliminate through mean operation.
@param keep_dims: If set to True it holds axes that are used for reduction
@param name: Optional name for output node.
@return The new node performing mean-reduction operation.
"""
return _get_node_factory_opset4().create(
"ReduceL2", as_nodes(node, reduction_axes), {"keep_dims": keep_dims}
@@ -384,22 +385,22 @@ def lstm_cell(
clip: float = 0.0,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs LSTMCell operation.
"""! Return a node which performs LSTMCell operation.
:param X: The input tensor with shape: [batch_size, input_size].
:param initial_hidden_state: The hidden state tensor with shape: [batch_size, hidden_size].
:param initial_cell_state: The cell state tensor with shape: [batch_size, hidden_size].
:param W: The weight tensor with shape: [4*hidden_size, input_size].
:param R: The recurrence weight tensor with shape: [4*hidden_size, hidden_size].
:param B: The bias tensor for gates with shape: [4*hidden_size].
:param hidden_size: Specifies hidden state size.
:param activations: The list of three activation functions for gates.
:param activations_alpha: The list of alpha parameters for activation functions.
:param activations_beta: The list of beta parameters for activation functions.
:param clip: Specifies bound values [-C, C] for tensor clipping performed before activations.
:param name: An optional name of the output node.
@param X: The input tensor with shape: [batch_size, input_size].
@param initial_hidden_state: The hidden state tensor with shape: [batch_size, hidden_size].
@param initial_cell_state: The cell state tensor with shape: [batch_size, hidden_size].
@param W: The weight tensor with shape: [4*hidden_size, input_size].
@param R: The recurrence weight tensor with shape: [4*hidden_size, hidden_size].
@param B: The bias tensor for gates with shape: [4*hidden_size].
@param hidden_size: Specifies hidden state size.
@param activations: The list of three activation functions for gates.
@param activations_alpha: The list of alpha parameters for activation functions.
@param activations_beta: The list of beta parameters for activation functions.
@param clip: Specifies bound values [-C, C] for tensor clipping performed before activations.
@param name: An optional name of the output node.
:return: The new node represents LSTMCell. Node outputs count: 2.
@return The new node represents LSTMCell. Node outputs count: 2.
"""
if activations is None:
activations = ["sigmoid", "tanh", "tanh"]

View File

@@ -27,7 +27,7 @@ from ngraph.utils.types import (
def _get_node_factory(opset_version: Optional[str] = None) -> NodeFactory:
"""Return NodeFactory configured to create operators from specified opset version."""
"""! Return NodeFactory configured to create operators from specified opset version."""
if opset_version:
return NodeFactory(opset_version)
else:

View File

@@ -13,4 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
"""Generic utilities. Factor related functions out to separate files."""
"""! Generic utilities. Factor related functions out to separate files."""

View File

@@ -26,16 +26,16 @@ log = logging.getLogger(__name__)
def get_broadcast_axes(
output_shape: TensorShape, input_shape: TensorShape, axis: int = None
) -> AxisSet:
"""Generate a list of broadcast axes for ngraph++ broadcast.
"""! Generate a list of broadcast axes for ngraph++ broadcast.
Informally, a broadcast "adds" axes to the input tensor,
replicating elements from the input tensor as needed to fill the new dimensions.
The function calculates which of the output axes are added in this way.
:param output_shape: The new shape for the output tensor.
:param input_shape: The shape of input tensor.
:param axis: The axis along which we want to replicate elements.
:return: The indices of added axes.
@param output_shape: The new shape for the output tensor.
@param input_shape: The shape of input tensor.
@param axis: The axis along which we want to replicate elements.
@return The indices of added axes.
"""
axes_indexes = list(range(0, len(output_shape)))
if axis is None:

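To make the semantics concrete, here is a small, self-contained illustration in plain Python (a sketch of the behaviour described above, not the library code): with the default right-aligned axis mapping, broadcasting from shape (3, 4) to (2, 3, 4) adds axis 0.

output_shape, input_shape = [2, 3, 4], [3, 4]
start = len(output_shape) - len(input_shape)  # default alignment when axis is None
added_axes = [i for i in range(len(output_shape))
              if not (start <= i < start + len(input_shape))]
print(added_axes)  # [0], only axis 0 was introduced by the broadcast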

@@ -27,7 +27,7 @@ def _set_node_friendly_name(node: Node, **kwargs: Any) -> Node:
def nameable_op(node_factory_function: Callable) -> Callable:
"""Set the name to the ngraph operator returned by the wrapped function."""
"""! Set the name to the ngraph operator returned by the wrapped function."""
@wraps(node_factory_function)
def wrapper(*args: Any, **kwargs: Any) -> Node:
@@ -39,7 +39,7 @@ def nameable_op(node_factory_function: Callable) -> Callable:
def unary_op(node_factory_function: Callable) -> Callable:
"""Convert the first input value to a Constant Node if a numeric value is detected."""
"""! Convert the first input value to a Constant Node if a numeric value is detected."""
@wraps(node_factory_function)
def wrapper(input_value: NodeInput, *args: Any, **kwargs: Any) -> Node:
@@ -52,7 +52,7 @@ def unary_op(node_factory_function: Callable) -> Callable:
def binary_op(node_factory_function: Callable) -> Callable:
"""Convert the first two input values to Constant Nodes if numeric values are detected."""
"""! Convert the first two input values to Constant Nodes if numeric values are detected."""
@wraps(node_factory_function)
def wrapper(left: NodeInput, right: NodeInput, *args: Any, **kwargs: Any) -> Node:

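The sketch below is a simplified, hypothetical stand-in (not the library code) that shows the general shape of such a decorator: plain numeric arguments are promoted to constant-like values before the wrapped factory function runs.

from functools import wraps
import numbers

def _as_node_sketch(value):
    # Hypothetical stand-in for wrapping a plain number into a Constant node.
    if isinstance(value, numbers.Number):
        return ("Constant", value)
    return value

def binary_op_sketch(factory):
    """Promote the first two arguments to constants if they are plain numbers."""
    @wraps(factory)
    def wrapper(left, right, *args, **kwargs):
        return factory(_as_node_sketch(left), _as_node_sketch(right), *args, **kwargs)
    return wrapper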

@@ -14,7 +14,7 @@
# limitations under the License.
# ******************************************************************************
"""Helper functions for validating user input."""
"""! Helper functions for validating user input."""
import logging
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type
@@ -27,7 +27,7 @@ log = logging.getLogger(__name__)
def assert_list_of_ints(value_list: Iterable[int], message: str) -> None:
"""Verify that the provided value is an iterable of integers."""
"""! Verify that the provided value is an iterable of integers."""
try:
for value in value_list:
if not isinstance(value, int):
@@ -39,16 +39,16 @@ def assert_list_of_ints(value_list: Iterable[int], message: str) -> None:
def _check_value(op_name, attr_key, value, val_type, cond=None):
# type: (str, str, Any, Type, Optional[Callable[[Any], bool]]) -> bool
"""Check whether provided value satisfies specified criteria.
"""! Check whether provided value satisfies specified criteria.
:param op_name: The name of the operator whose attributes are checked.
:param attr_key: The attribute name.
:param value: The value to check.
:param val_type: Required value type.
:param cond: The optional function running additional checks.
@param op_name: The name of the operator whose attributes are checked.
@param attr_key: The attribute name.
@param value: The value to check.
@param val_type: Required value type.
@param cond: The optional function running additional checks.
:raises UserInputError:
:return: True if the attribute satisfies all criteria, otherwise False.
@return True if the attribute satisfies all criteria, otherwise False.
"""
if not np.issubdtype(type(value), val_type):
raise UserInputError(
@@ -67,19 +67,19 @@ def _check_value(op_name, attr_key, value, val_type, cond=None):
def check_valid_attribute(op_name, attr_dict, attr_key, val_type, cond=None, required=False):
# type: (str, dict, str, Type, Optional[Callable[[Any], bool]], Optional[bool]) -> bool
"""Check whether specified attribute satisfies given criteria.
"""! Check whether specified attribute satisfies given criteria.
:param op_name: The name of the operator whose attributes are checked.
:param attr_dict: Dictionary containing key-value attributes to check.
:param attr_key: Key value for validated attribute.
:param val_type: Value type for validated attribute.
:param cond: Any callable which accepts the attribute value and returns True or False.
:param required: Whether the given attribute must be present. If False, it may be missing
@param op_name: The name of the operator whose attributes are checked.
@param attr_dict: Dictionary containing key-value attributes to check.
@param attr_key: Key value for validated attribute.
@param val_type: Value type for validated attribute.
@param cond: Any callable which accepts the attribute value and returns True or False.
@param required: Whether the given attribute must be present. If False, it may be missing
from the provided dictionary.
:raises UserInputError:
:return: True if the attribute satisfies all criteria, otherwise False.
@return True if the attribute satisfies all criteria, otherwise False.
"""
result = True
@@ -110,11 +110,11 @@ def check_valid_attributes(
requirements, # type: List[Tuple[str, bool, Type, Optional[Callable]]]
):
# type: (...) -> bool
"""Perform attributes validation according to specified type, value criteria.
"""! Perform attributes validation according to specified type, value criteria.
:param op_name: The name of the operator whose attributes are checked.
:param attributes: The dictionary with user provided attributes to check.
:param requirements: The list of tuples describing attributes' requirements. The tuple should
@param op_name: The name of the operator whose attributes are checked.
@param attributes: The dictionary with user provided attributes to check.
@param requirements: The list of tuples describing attributes' requirements. The tuple should
contain the following values:
(attr_name: str,
is_required: bool,
@@ -122,7 +122,7 @@ def check_valid_attributes(
value_condition: Callable)
:raises UserInputError:
:return: True if all attributes satisfy the criteria, otherwise False.
@return True if all attributes satisfy the criteria, otherwise False.
"""
for attr, required, val_type, cond in requirements:
check_valid_attribute(op_name, attributes, attr, val_type, cond, required)
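For illustration, a hedged example of the requirement tuples this function expects. The attribute names, values, and conditions below are made up; the numpy types are chosen to match the np.issubdtype check used by _check_value above.

import numpy as np

# (attr_name, is_required, value_type, value_condition)
requirements = [
    ("hidden_size", True, np.integer, lambda v: v > 0),   # required, positive integer
    ("clip", False, np.floating, lambda v: v >= 0.0),     # optional, non-negative float
]
attributes = {"hidden_size": 128, "clip": 0.5}

# check_valid_attributes("LSTMCell", attributes, requirements) would return True here,
# and raise UserInputError if a type or condition check failed.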
@@ -130,20 +130,20 @@ def check_valid_attributes(
def is_positive_value(x): # type: (Any) -> bool
"""Determine whether the specified x is positive value.
"""! Determine whether the specified x is positive value.
:param x: The value to check.
@param x: The value to check.
:returns: True if the specified x is a positive value, False otherwise.
@return True if the specified x is a positive value, False otherwise.
"""
return x > 0
def is_non_negative_value(x): # type: (Any) -> bool
"""Determine whether the specified x is non-negative value.
"""! Determine whether the specified x is non-negative value.
:param x: The value to check.
@param x: The value to check.
:returns: True if the specified x is a non-negative value, False otherwise.
@return True if the specified x is a non-negative value, False otherwise.
"""
return x >= 0

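These two predicates are presumably intended to serve as the cond callable shown earlier; a short hedged example, assuming the module path ngraph.utils.input_validation used by this file:

import numpy as np
from ngraph.utils.input_validation import is_positive_value

# Hypothetical requirement: "levels" must be a positive integer.
requirements = [("levels", True, np.integer, is_positive_value)]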

@@ -9,27 +9,27 @@ DEFAULT_OPSET = "opset4"
class NodeFactory(object):
"""Factory front-end to create node objects."""
"""! Factory front-end to create node objects."""
def __init__(self, opset_version: str = DEFAULT_OPSET) -> None:
"""Create the NodeFactory object.
"""! Create the NodeFactory object.
:param opset_version: The opset version from which the factory will produce ops.
@param opset_version: The opset version from which the factory will produce ops.
"""
self.factory = _NodeFactory(opset_version)
def create(
self, op_type_name: str, arguments: List[Node], attributes: Optional[Dict[str, Any]] = None
) -> Node:
"""Create node object from provided description.
"""! Create node object from provided description.
The user does not have to provide all of the node's attributes, only the required ones.
:param op_type_name: The operator type name.
:param arguments: The operator arguments.
:param attributes: The operator attributes.
@param op_type_name: The operator type name.
@param arguments: The operator arguments.
@param attributes: The operator attributes.
:returns: Node object representing requested operator with attributes set.
@return Node object representing requested operator with attributes set.
"""
if attributes is None:
attributes = {}
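A minimal usage sketch, assuming the module path ngraph.utils.node_factory used in this repository and that "Relu" needs no attributes in this opset:

import numpy as np
import ngraph as ng
from ngraph.utils.node_factory import NodeFactory

data = ng.parameter([1, 3, 224, 224], name="data", dtype=np.float32)
factory = NodeFactory("opset4")
relu = factory.create("Relu", [data])  # no attributes required for Relu
print(relu.get_type_name())  # "Relu"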
@@ -65,12 +65,12 @@ class NodeFactory(object):
@staticmethod
def _normalize_attr_name(attr_name: str, prefix: str) -> str:
"""Normalize attribute name.
"""! Normalize attribute name.
:param attr_name: The attribute name.
:param prefix: The prefix to attach to attribute name.
@param attr_name: The attribute name.
@param prefix: The prefix to attach to attribute name.
:returns: The modified attribute name.
@return The modified attribute name.
"""
# Trim first part of the name if there is only one level of attribute hierarchy.
if attr_name.count(".") == 1:
@@ -79,32 +79,32 @@ class NodeFactory(object):
@classmethod
def _normalize_attr_name_getter(cls, attr_name: str) -> str:
"""Normalize atr name to be suitable for getter function name.
"""! Normalize atr name to be suitable for getter function name.
:param attr_name: The attribute name to normalize
@param attr_name: The attribute name to normalize
:returns: The appropriate getter function name.
@return The appropriate getter function name.
"""
return cls._normalize_attr_name(attr_name, "get_")
@classmethod
def _normalize_attr_name_setter(cls, attr_name: str) -> str:
"""Normalize attribute name to be suitable for setter function name.
"""! Normalize attribute name to be suitable for setter function name.
:param attr_name: The attribute name to normalize
@param attr_name: The attribute name to normalize
:returns: The appropriate setter function name.
@return The appropriate setter function name.
"""
return cls._normalize_attr_name(attr_name, "set_")
@staticmethod
def _get_node_attr_value(node: Node, attr_name: str) -> Any:
"""Get provided node attribute value.
"""! Get provided node attribute value.
:param node: The node we retrieve attribute value from.
:param attr_name: The attribute name.
@param node: The node we retrieve attribute value from.
@param attr_name: The attribute name.
:returns: The node attribute value.
@return The node attribute value.
"""
if not node._attr_cache_valid:
node._attr_cache = node._get_attributes()
@@ -113,11 +113,11 @@ class NodeFactory(object):
@staticmethod
def _set_node_attr_value(node: Node, attr_name: str, value: Any) -> None:
"""Set the node attribute value.
"""! Set the node attribute value.
:param node: The node we change attribute value for.
:param attr_name: The attribute name.
:param value: The new attribute value.
@param node: The node we change attribute value for.
@param attr_name: The attribute name.
@param value: The new attribute value.
"""
node._set_attribute(attr_name, value)
node._attr_cache[attr_name] = value

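Putting the pieces together, a hedged end-to-end sketch: the factory attaches get_*/set_* accessors, named via the normalization helpers above, to every node it creates. The op and attribute below are just an example, not part of this diff.

import numpy as np
import ngraph as ng
from ngraph.utils.node_factory import NodeFactory

data = ng.parameter([2, 10], name="data", dtype=np.float32)
softmax = NodeFactory("opset4").create("Softmax", [data], {"axis": 1})

print(softmax.get_axis())  # 1, read through the node's cached attribute dictionary
softmax.set_axis(0)        # updates the node and refreshes the cache entry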
Some files were not shown because too many files have changed in this diff.