# Inference Engine Samples
The Inference Engine sample applications are simple console applications that show how to use specific Inference Engine capabilities within an application and assist developers in common tasks such as loading a model, running inference, and querying the capabilities of a particular device.

After installation of the Intel® Distribution of OpenVINO™ toolkit, the C, C++, and Python* sample applications are available in the following directories, respectively:

```
<INSTALL_DIR>/inference_engine/samples/c
<INSTALL_DIR>/inference_engine/samples/cpp
<INSTALL_DIR>/inference_engine/samples/python
```
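At their core, all samples follow the same basic flow. The following is a minimal sketch of that flow using the Inference Engine C++ API; the model path `model.xml` and the `CPU` device name are placeholders, not values taken from this document:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core ie;

    // Read a model in the Intermediate Representation (IR) format.
    // "model.xml" is a placeholder path.
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml");

    // Load the network to a device and create an inference request.
    // "CPU" is an example device name; other devices work the same way.
    InferenceEngine::ExecutableNetwork executableNetwork = ie.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest inferRequest = executableNetwork.CreateInferRequest();

    // Fill the input blob(s) with data here, then run synchronous inference.
    inferRequest.Infer();

    // Retrieve the output blob by the first output name.
    const std::string outputName = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = inferRequest.GetBlob(outputName);
    std::cout << "Output element count: " << output->size() << std::endl;
    return 0;
}
```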
Inference Engine sample applications include the following:
- Automatic Speech Recognition C++ Sample – Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- Benchmark Application – Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
- Hello Classification Sample – Inference of image classification networks like AlexNet and GoogLeNet using the Synchronous Inference Request API. Input of any size and layout can be set to an infer request and will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
- Hello NV12 Input Classification Sample – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
- Hello Query Device Sample – Query of available Inference Engine devices and their metrics and configuration values.
- Hello Reshape SSD C++ Sample – Inference of SSD networks resized by the ShapeInfer API according to an input size.
- Image Classification Sample Async – Inference of image classification networks like AlexNet and GoogLeNet using the Asynchronous Inference Request API (the sample supports only images as inputs); see the asynchronous sketch after this list.
- Image Classification Python* Sample – Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API (the sample supports only images as inputs).
- Neural Style Transfer Sample – Style Transfer sample (the sample supports only images as inputs).
- nGraph Function Creation C++ Sample – Construction of the LeNet network using the nGraph function creation API.
- Object Detection for SSD Sample – Inference of object detection networks based on SSD. This sample is a simplified version that supports only images as inputs.
> **NOTE**: All samples support input paths containing only ASCII characters, except the Hello Classification Sample, which supports Unicode.
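The asynchronous samples differ from the synchronous ones only in how the request is executed. Below is a sketch of the asynchronous pattern, assuming the same placeholder model and device as in the sketch above:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml");  // placeholder path
    InferenceEngine::ExecutableNetwork executableNetwork = ie.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest inferRequest = executableNetwork.CreateInferRequest();

    // Start inference without blocking the calling thread.
    inferRequest.StartAsync();

    // ... the application can prepare the next input or do other work here ...

    // Block until the result is ready.
    if (inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY) ==
        InferenceEngine::StatusCode::OK) {
        // Read the outputs with inferRequest.GetBlob(<output name>).
    }
    return 0;
}
```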
## Media Files Available for Samples
To run the sample applications, you can use images and videos from the media files collection available at https://github.com/intel-iot-devkit/sample-videos.
## Samples that Support Pre-Trained Models
To run the sample applications, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
## Build the Sample Applications
### Build the Sample Applications on Linux*
The officially supported Linux* build environment is the following:
- Ubuntu* 18.04 LTS 64-bit or CentOS* 7.6 64-bit
- GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
- CMake* version 3.10 or higher
> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the build instructions on GitHub.
To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:

```sh
build_samples.sh
```
Once the build is completed, you can find sample binaries in the following folders:
- C samples: `~/inference_engine_c_samples_build/intel64/Release`
- C++ samples: `~/inference_engine_cpp_samples_build/intel64/Release`
You can also build the sample applications manually:
> **NOTE**: If you have installed the product as a root user, switch to root mode before you continue: `sudo -i`
- Navigate to a directory that you have write access to and create a samples build directory. This example uses a directory named `build`:
  ```sh
  mkdir build
  ```
  > **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
- Go to the created directory:
  ```sh
  cd build
  ```
- Run CMake to generate the Make files for release or debug configuration. For example, for C++ samples:
  - For release configuration:
    ```sh
    cmake -DCMAKE_BUILD_TYPE=Release <INSTALL_DIR>/inference_engine/samples/cpp
    ```
  - For debug configuration:
    ```sh
    cmake -DCMAKE_BUILD_TYPE=Debug <INSTALL_DIR>/inference_engine/samples/cpp
    ```
- Run `make` to build the samples:
  ```sh
  make
  ```
For the release configuration, the sample application binaries are in `<path_to_build_directory>/intel64/Release/`; for the debug configuration, in `<path_to_build_directory>/intel64/Debug/`.
### Build the Sample Applications on Microsoft Windows* OS
The recommended Windows* build environment is the following:
- Microsoft Windows* 10
- Microsoft Visual Studio* 2017 or 2019
- CMake* version 3.10 or higher
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you must install CMake 3.14.
To build the C or C++ sample applications on Windows, go to the `<INSTALL_DIR>\inference_engine\samples\c` or `<INSTALL_DIR>\inference_engine\samples\cpp` directory, respectively, and run the `build_samples_msvc.bat` batch file:

```bat
build_samples_msvc.bat
```
By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build a solution for the sample code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported versions are VS2017 and VS2019. For example, to build the C++ samples using Microsoft Visual Studio 2017, use the following command:

```bat
<INSTALL_DIR>\inference_engine\samples\cpp\build_samples_msvc.bat VS2017
```
Once the build is completed, you can find sample binaries in the following folders:
- C samples: `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_c_samples_build\intel64\Release`
- C++ samples: `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release`
You can also build a generated solution manually. For example, if you want to build C++ sample binaries in Debug configuration, run the appropriate version of Microsoft Visual Studio and open the generated `Samples.sln` solution file from the `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build` directory.
### Build the Sample Applications on macOS*
The officially supported macOS* build environment is the following:
- macOS* 10.15 64-bit
- Clang* compiler from Xcode* 10.1 or higher
- CMake* version 3.13 or higher
> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the build instructions on GitHub.
To build the C or C++ sample applications for macOS, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:

```sh
build_samples.sh
```
Once the build is completed, you can find sample binaries in the following folders:
- C samples: `~/inference_engine_c_samples_build/intel64/Release`
- C++ samples: `~/inference_engine_cpp_samples_build/intel64/Release`
You can also build the sample applications manually:
> **NOTE**: If you have installed the product as a root user, switch to root mode before you continue: `sudo -i`
> **NOTE**: Before proceeding, make sure you have the OpenVINO™ environment set correctly. This can be done manually:

```sh
cd <INSTALL_DIR>/bin
source setupvars.sh
```
- Navigate to a directory that you have write access to and create a samples build directory. This example uses a directory named `build`:
  ```sh
  mkdir build
  ```
  > **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
- Go to the created directory:
  ```sh
  cd build
  ```
- Run CMake to generate the Make files for release or debug configuration. For example, for C++ samples:
  - For release configuration:
    ```sh
    cmake -DCMAKE_BUILD_TYPE=Release <INSTALL_DIR>/inference_engine/samples/cpp
    ```
  - For debug configuration:
    ```sh
    cmake -DCMAKE_BUILD_TYPE=Debug <INSTALL_DIR>/inference_engine/samples/cpp
    ```
- Run `make` to build the samples:
  ```sh
  make
  ```
For the release configuration, the sample application binaries are in `<path_to_build_directory>/intel64/Release/`; for the debug configuration, in `<path_to_build_directory>/intel64/Debug/`.
## Get Ready for Running the Sample Applications
### Get Ready for Running the Sample Applications on Linux*
Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries. Run the `setupvars` script to set all necessary environment variables:

```sh
source <INSTALL_DIR>/bin/setupvars.sh
```
(Optional) The OpenVINO environment variables are removed when you close the shell. If you prefer, you can set them permanently as follows:
- Open the `.bashrc` file in `<user_home_directory>`:
  ```sh
  vi <user_home_directory>/.bashrc
  ```
- Add this line to the end of the file:
  ```sh
  source /opt/intel/openvino/bin/setupvars.sh
  ```
- Save and close the file: press the Esc key, type `:wq`, and press the Enter key.
- To test your change, open a new terminal. You will see:
  ```
  [setupvars.sh] OpenVINO environment initialized.
  ```
You are ready to run sample applications. To learn about how to run a particular sample, read the sample documentation by clicking the sample name in the samples list above.
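Before moving on, you can also verify the environment with a minimal device query, in the spirit of the Hello Query Device Sample. This is a sketch, not a replacement for the sample; `FULL_DEVICE_NAME` is a standard Inference Engine metric:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core ie;

    // List the devices visible to the Inference Engine, e.g. CPU, GPU, MYRIAD.
    const std::vector<std::string> devices = ie.GetAvailableDevices();
    for (const std::string& device : devices) {
        // Query the human-readable name of each device.
        const std::string fullName =
            ie.GetMetric(device, METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
        std::cout << device << ": " << fullName << std::endl;
    }
    return 0;
}
```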
### Get Ready for Running the Sample Applications on Windows*
Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries. Use the `setupvars` script, which sets all necessary environment variables:

```bat
<INSTALL_DIR>\bin\setupvars.bat
```
To debug or run the samples on Windows in Microsoft Visual Studio, make sure you have properly configured Debugging environment settings for the Debug and Release configurations. Set correct paths to the OpenCV libraries, and debug and release versions of the Inference Engine libraries.
For example, for the Debug configuration, go to the project's Configuration Properties, select the Debugging category, and set the PATH variable in the Environment field to the following:

```bat
PATH=<INSTALL_DIR>\deployment_tools\inference_engine\bin\intel64\Debug;<INSTALL_DIR>\opencv\bin;%PATH%
```

where `<INSTALL_DIR>` is the directory in which the OpenVINO toolkit is installed.
You are ready to run sample applications. To learn about how to run a particular sample, read the sample documentation by clicking the sample name in the samples list above.