## Repository components
The Inference Engine can infer models in different formats with various input and output formats.
The open source version of Inference Engine includes the following plugins:
| PLUGIN | DEVICE TYPES |
|---|---|
| CPU plugin | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| Heterogeneous plugin | Enables computing the inference of one network on several Intel® devices. |
Inference Engine plugins for Intel® FPGA and Intel® Movidius™ Neural Compute Stick are distributed only in binary form as part of the Intel® Distribution of OpenVINO™.
## Build on Linux* Systems
The software was validated on:
- Ubuntu* 16.04 with default GCC* 5.4.0
- CentOS* 7.4 with default GCC* 4.8.5
- Intel® Graphics Compute Runtime for OpenCL™ Driver package 18.28.11080.
### Software Requirements
- CMake* 3.9 or higher
- GCC* 4.8 or higher to build the Inference Engine
- Python 2.7 or higher for Inference Engine Python API wrapper
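Before starting a build, it can help to confirm that these tools are actually available on the machine. The loop below is a small sketch (not part of the official build flow); it only checks that each tool from the list above is on `PATH`, and leaves version checks to the reader:

```shell
#!/bin/sh
# Sketch: check that the build prerequisites listed above are installed.
# Only presence on PATH is reported; verifying the minimum versions
# (CMake 3.9, GCC 4.8, Python 2.7) is left to the reader.
for tool in cmake gcc python; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```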
### Build Steps
- Clone submodules:
  ```sh
  git submodule init
  git submodule update --recursive
  ```
- Install build dependencies using the `install_dependencies.sh` script in the project root folder.
- Create a build folder:
  ```sh
  mkdir build
  ```
- The Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
  ```sh
  cmake -DCMAKE_BUILD_TYPE=Release ..
  make -j16
  ```
You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS* implementation, use the `GEMM=OPENBLAS` option together with the `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` cmake options to specify the path to the OpenBLAS headers and library. For example, use the following options on CentOS*: `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML* GEMM implementation, use the `GEMM=MKL` and `MKLROOT` cmake options to specify the path to the unpacked MKL-ML with `include` and `lib` folders, for example: `-DGEMM=MKL -DMKLROOT=<path_to_MKL>`. The MKL-ML* package can be downloaded here.
- OpenMP threading is used by default. To build the Inference Engine with TBB threading, set the `-DTHREADING=TBB` option.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE=`which python3.6` \
  -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.6
  ```
- To switch the CPU and GPU plugins on or off, use the `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` cmake options.
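The options above can be combined in a single `cmake` invocation. The sketch below assembles one hypothetical configuration (OpenBLAS GEMM with the CentOS paths from above, TBB threading, GPU plugin disabled); the trailing `echo` makes it a dry run, so remove it and run from the `build` directory to actually configure:

```shell
#!/bin/sh
# Sketch: combining the optional build flags described above.
# The chosen values are illustrative, not a recommended configuration.
OPTS="-DCMAKE_BUILD_TYPE=Release"
OPTS="$OPTS -DGEMM=OPENBLAS"
OPTS="$OPTS -DBLAS_INCLUDE_DIRS=/usr/include/openblas"
OPTS="$OPTS -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0"
OPTS="$OPTS -DTHREADING=TBB"       # TBB instead of the default OpenMP
OPTS="$OPTS -DENABLE_CLDNN=OFF"    # CPU-only build: GPU plugin disabled
echo cmake $OPTS ..                # dry run; drop echo to configure
```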
## Build on Windows* Systems
The software was validated on:
- Microsoft* Windows* 10 with Visual Studio 2017 and Intel® C++ Compiler 2018 Update 3
- Intel® Graphics Driver for Windows* [24.20] driver package.
### Software Requirements
- CMake* 3.9 or higher
- OpenBLAS* and mingw64* runtime dependencies.
- Intel® C++ Compiler 18.0 to build the Inference Engine on Windows.
- Python 3.4 or higher for Inference Engine Python API wrapper
### Build Steps
- Clone submodules:
  ```sh
  git submodule init
  git submodule update --recursive
  ```
- Download and install Intel® C++ Compiler 18.0
- Install OpenBLAS:
  - Download OpenBLAS*
  - Unzip the downloaded package to a directory on your machine. In this document, this directory is referred to as `<OPENBLAS_DIR>`.
- Create a build directory:
  ```sh
  mkdir build
  ```
- In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution:
  ```sh
  cd build
  cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
      -DCMAKE_BUILD_TYPE=Release ^
      -DICCLIB="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\compiler\lib" ..
  ```
- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` cmake option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include` and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. A prebuilt OpenBLAS* package can be downloaded here; mingw64* runtime dependencies, here.
- To switch to the optimized MKL-ML GEMM implementation, use the `GEMM=MKL` and `MKLROOT` cmake options to specify the path to the unpacked MKL-ML with `include` and `lib` folders, for example: `-DGEMM=MKL -DMKLROOT=<path_to_MKL>`. The MKL-ML* package can be downloaded here.
- OpenMP threading is used by default. To build the Inference Engine with TBB threading, set the `-DTHREADING=TBB` option.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options: `-DPYTHON_EXECUTABLE="C:\Program Files\Python36\python.exe" -DPYTHON_INCLUDE_DIR="C:\Program Files\Python36\include" -DPYTHON_LIBRARY="C:\Program Files\Python36\libs\python36.lib"`.
- Build the generated solution in Visual Studio 2017 or run `cmake --build . --config Release` to build from the command line.
### Building Inference Engine with Ninja

```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
Before running the samples on Microsoft* Windows*, add the path to the OpenMP library (`<dldt_repo>/inference-engine/temp/omp/lib`) and the OpenCV libraries (`<dldt_repo>/inference-engine/temp/opencv_4.0.0/bin`) to the `%PATH%` environment variable.
* Other names and brands may be claimed as the property of others.