openvino/docs/install_guides/installing-openvino-windows.md
Andrey Zaytsev 940eb43095 Feature/azaytsev/merge to master (#2786)
2020-10-27 00:41:46 +03:00


Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10

NOTES:

  • This guide applies to Microsoft Windows* 10 64-bit. For Linux* OS information and instructions, see the Installation Guide for Linux.
  • Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.

Introduction

Important:

  • All steps in this guide are required, unless otherwise stated.
  • In addition to the download package, you must install dependencies and complete configuration steps.

Your installation is complete when you have finished all of the following steps:

  1. Install the Intel® Distribution of OpenVINO™ toolkit core components

  2. Install the dependencies:

    Note: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.

    Important: As part of this installation, make sure you click the option to add the application to your PATH environment variable.

  3. Set Environment Variables

  4. Configure the Model Optimizer

  5. Run two Verification Scripts to Verify Installation

  6. Optional: 

About the Intel® Distribution of OpenVINO™ toolkit

OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs) and recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep learning inference deployed from edge to cloud.

For more information, see the online Intel® Distribution of OpenVINO™ toolkit Overview page.

The Intel® Distribution of OpenVINO™ toolkit for Windows* 10 OS:

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across Intel® CPU, Intel® Processor Graphics (GPU), Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
  • Speeds time-to-market through an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls for computer vision standards including OpenCV* and OpenCL™

Included in the Installation Package

The following components are installed by default:

  • Model Optimizer: This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. (Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.)
  • Inference Engine: The engine that runs the deep learning model. It includes a set of libraries for easy inference integration into your applications.
  • OpenCV*: OpenCV* community version compiled for Intel® hardware.
  • Inference Engine Samples: A set of simple console applications demonstrating how to use the Inference Engine in your applications.
  • [Demos](@ref omz_demos_README): A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases.
  • Additional Tools: A set of tools to work with your models, including the [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README), and others.
  • [Documentation for Pre-Trained Models](@ref omz_models_intel_index): Documentation for the pre-trained models available in the Open Model Zoo repo.

System Requirements

Hardware

  • 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
  • Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
  • 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
  • Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
  • Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
  • Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
  • Intel® Neural Compute Stick 2
  • Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

Note: With the OpenVINO™ 2020.4 release, the Intel® Movidius™ Neural Compute Stick is no longer supported.

Processor Notes:

  • Processor graphics are not included in all processors. See Processors specifications for information about your processor.
  • A chipset that supports processor graphics is required if you're using an Intel Xeon processor. See Chipset specifications for information about your chipset.

Operating System

  • Microsoft Windows* 10 64-bit

Software

Installation Steps

Install the Intel® Distribution of OpenVINO™ toolkit Core Components

  1. If you have not downloaded the Intel® Distribution of OpenVINO™ toolkit, download the latest version. By default, the file is saved to the Downloads directory as w_openvino_toolkit_p_<version>.exe.

  2. Go to the Downloads folder and double-click w_openvino_toolkit_p_<version>.exe. A window opens to let you choose your installation directory and components. The default installation directory is C:\Program Files (x86)\Intel\openvino_<version>. For simplicity, a shortcut to the latest installation is also created: C:\Program Files (x86)\Intel\openvino_2021. If you choose a different installation directory, the installer creates the directory for you:

  3. Click Next.

  4. You are asked whether you consent to information gathering. Choose your preferred option and click Next.

  5. If you are missing external dependencies, you will see a warning screen. Write down the dependencies you are missing; no other action is needed at this time. After installing the Intel® Distribution of OpenVINO™ toolkit core components, you will install the missing dependencies. The screen example below indicates you are missing two dependencies:

  6. Click Next.

  7. When the first part of installation is complete, the final screen informs you that the core components have been installed and additional steps are still required:

  8. Click Finish to close the installation wizard. A new browser window opens to the next section of this installation guide, which covers setting the environment variables. This happens in case you ran the installation without first opening this guide; if you are already reading it, you are in the same document.

  9. If the installation indicated you must install dependencies, install them first. If there are no missing dependencies, you can go ahead and set the environment variables.

Set the Environment Variables

Note: If you installed the Intel® Distribution of OpenVINO™ to a non-default install directory, replace C:\Program Files (x86)\Intel with the directory in which you installed the software.

You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the setupvars.bat batch file to temporarily set your environment variables:

cd C:\Program Files (x86)\Intel\openvino_2021\bin\
setupvars.bat

(Optional): The OpenVINO toolkit environment variables are removed when you close the Command Prompt window. If you prefer, you can set the environment variables permanently instead.
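After setupvars.bat has run, downstream tooling relies on certain variables being present in the environment. As a rough sketch, you could check for them from Python; the variable names below are assumptions based on typical 2021.x installs, not an authoritative list:

```python
import os

# Assumed variable names; a real setupvars.bat may set more or different ones.
EXPECTED_VARS = ("INTEL_OPENVINO_DIR", "OpenCV_DIR")

def missing_openvino_vars(env=None):
    """Return the expected OpenVINO variables that are not set."""
    env = os.environ if env is None else env
    return [name for name in EXPECTED_VARS if name not in env]
```

Running `missing_openvino_vars()` in a prompt where setupvars.bat has not been run should list the names that still need to be set.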

The environment variables are set. Continue to the next section to configure the Model Optimizer.

Configure the Model Optimizer

Important: These steps are required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.

Note: If you see an error indicating Python is not installed even though you know you installed it, your computer might not be able to find the program. For instructions on adding Python to your system environment variables, see Update Your Windows Environment Variables.

The Model Optimizer is a key component of the Intel® Distribution of OpenVINO™ toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:

  • .xml: Describes the network topology
  • .bin: Contains the weights and biases binary data

The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.
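Because an IR model is simply a matched .xml/.bin pair sharing a base name, a small helper can locate complete models in a directory. This is a sketch; the shared-stem naming convention is the only assumption:

```python
from pathlib import Path

def find_ir_models(model_dir):
    """Return (xml, bin) path pairs for IR models sharing a base name."""
    pairs = []
    for xml in sorted(Path(model_dir).glob("*.xml")):
        weights = xml.with_suffix(".bin")
        if weights.exists():  # a topology file without weights is not a usable IR
            pairs.append((xml, weights))
    return pairs
```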

The Model Optimizer is a Python*-based command line tool (mo.py), which is located in C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer. Use this tool on models trained with popular deep learning frameworks such as Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized IR format that the Inference Engine can use.

This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the Using Manual Configuration Process section on the Configuring the Model Optimizer page.

For more information about the Model Optimizer, see the Model Optimizer Developer Guide.
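A conversion run boils down to invoking mo.py with a model and an output directory. The sketch below assembles such a command line; --input_model and --output_dir are standard Model Optimizer options, and the default script path assumes the default install directory:

```python
DEFAULT_MO = (r"C:\Program Files (x86)\Intel\openvino_2021"
              r"\deployment_tools\model_optimizer\mo.py")

def mo_command(input_model, output_dir, mo_script=DEFAULT_MO):
    """Assemble the argument list for a Model Optimizer conversion."""
    return ["python", mo_script,
            "--input_model", str(input_model),
            "--output_dir", str(output_dir)]
```

The resulting list can be passed to subprocess.run() from a build script instead of typing the command by hand.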

Model Optimizer Configuration Steps

You can configure the Model Optimizer either for all supported frameworks at once or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.

Important: Internet access is required to execute the following steps successfully. If you can access the Internet only through a proxy server, make sure the proxy is configured in your environment.

Note: In the steps below:

  • If you want to use the Model Optimizer from another installed version of the Intel® Distribution of OpenVINO™ toolkit, replace openvino_2021 with openvino_<version>, where <version> is the required version.
  • If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace C:\Program Files (x86)\Intel with the directory where you installed the software.

These steps use a command prompt to make sure you see error messages.

Option 1: Configure the Model Optimizer for all supported frameworks at the same time:

  1. Open a command prompt. To do so, type cmd in your Search Windows box and then press Enter. Type commands in the opened window:

  2. Go to the Model Optimizer prerequisites directory:

    cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites

  3. Run the following batch file to configure the Model Optimizer for Caffe*, TensorFlow* 1.x, MXNet*, Kaldi*, and ONNX*:

    install_prerequisites.bat

Option 2: Configure the Model Optimizer for each framework separately:

  1. Go to the Model Optimizer prerequisites directory:

    cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites

  2. Run the batch file for the framework you will use with the Model Optimizer. You can use more than one:

    • For Caffe:
    install_prerequisites_caffe.bat
    
    • For TensorFlow 1.x:
    install_prerequisites_tf.bat
    
    • For TensorFlow 2.x:
    install_prerequisites_tf2.bat
    
    • For MXNet:
    install_prerequisites_mxnet.bat
    
    • For ONNX:
    install_prerequisites_onnx.bat
    
    • For Kaldi:
    install_prerequisites_kaldi.bat
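If you script the per-framework setup, the framework-to-batch-file mapping above is straightforward to encode. A sketch, where the short keys are arbitrary labels chosen for this example:

```python
PREREQ_SCRIPTS = {
    "caffe": "install_prerequisites_caffe.bat",
    "tf":    "install_prerequisites_tf.bat",
    "tf2":   "install_prerequisites_tf2.bat",
    "mxnet": "install_prerequisites_mxnet.bat",
    "onnx":  "install_prerequisites_onnx.bat",
    "kaldi": "install_prerequisites_kaldi.bat",
}

def prereq_script(framework):
    """Map a framework label to its prerequisite batch file."""
    try:
        return PREREQ_SCRIPTS[framework]
    except KeyError:
        raise ValueError(f"unknown framework {framework!r}; "
                         f"expected one of {sorted(PREREQ_SCRIPTS)}")
```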
    

The Model Optimizer is configured for one or more frameworks. Success is indicated by a screen similar to this:

You are ready to use two short demos to see the results of running the Intel Distribution of OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.

If you want to use a GPU or VPU, or update your Windows* environment variables, read through the Optional Steps section.

Use Verification Scripts to Verify Your Installation

Important: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.

Note: The paths in this section assume you used the default installation directory. If you used a directory other than C:\Program Files (x86)\Intel, update the directory with the location where you installed the software.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:

  1. Open a command prompt window.

  2. Go to the Inference Engine demo directory:

    cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
    
  3. Run the verification scripts by following the instructions in the next section.

Run the Image Classification Verification Script

To run the script, start the demo_squeezenet_download_convert_run.bat file:

demo_squeezenet_download_convert_run.bat

This script downloads a SqueezeNet model and uses the Model Optimizer to convert the model to .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
The verification script then builds the Image Classification Sample Async application and runs it with the car.png image in the demo directory. For a brief description of the Intermediate Representation, see Configuring the Model Optimizer.

When the verification script completes, you will have the label and confidence for the top-10 categories:
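"Top-10" here simply means the ten class labels with the highest confidence scores in the network output. A minimal sketch of that selection, with made-up labels and scores:

```python
def top_k(scores, labels, k=10):
    """Return the k (label, confidence) pairs with the highest confidence."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]
```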

This demo is complete. Leave the console open and continue to the next section to run the Inference Pipeline demo.

Run the Inference Pipeline Verification Script

To run the script, start the demo_security_barrier_camera.bat file while still in the console:

demo_security_barrier_camera.bat

This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the car_1.bmp image from the demo directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.

First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
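The chained stages above can be sketched with placeholder functions (hypothetical stand-ins, not the demo's actual models), showing how each stage's output becomes the next stage's input:

```python
def detect_vehicle(frame):
    """Stage 1 (placeholder): locate vehicles; returns one region of interest per vehicle."""
    return [{"roi": (40, 60, 300, 220)}]

def detect_attributes(roi):
    """Stage 2 (placeholder): classify vehicle attributes and locate the license plate."""
    return {"color": "white", "type": "car", "plate_roi": (120, 180, 200, 205)}

def recognize_plate(plate_roi):
    """Stage 3 (placeholder): recognize the characters inside the plate region."""
    return "AB123CD"

def run_pipeline(frame):
    """Feed each stage's output into the next, as the demo's inference pipeline does."""
    results = []
    for vehicle in detect_vehicle(frame):
        attrs = detect_attributes(vehicle["roi"])
        attrs["plate_text"] = recognize_plate(attrs["plate_roi"])
        results.append(attrs)
    return results

print(run_pipeline(frame=None))
```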

When the demo completes, you have two windows open:

  • A console window that displays information about the tasks performed by the demo
  • An image viewer window that displays a resulting frame with detections rendered as bounding boxes, similar to the following:

Close the image viewer window to end the demo.

To learn more about the verification scripts, see README.txt in C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo.

For detailed description of the OpenVINO™ pre-trained object detection and object recognition models, see the [Overview of OpenVINO™ toolkit Pre-Trained Models](@ref omz_models_intel_index) page.

In this section, you saw a preview of the Intel® Distribution of OpenVINO™ toolkit capabilities.

Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using CPU.

If you want to use Intel® Processor graphics (GPU), Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.

If you want to continue and run the Image Classification Sample Application on one of the supported hardware devices, see the Run the Image Classification Sample Application section.

Optional Steps

Use the optional steps below if you want to configure a GPU, configure Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or update your Windows environment variables.

Optional: Additional Installation Steps for Intel® Processor Graphics (GPU)

Note: These steps are required only if you want to use a GPU.

If your applications offload computation to Intel® Integrated Graphics, you must have the Intel Graphics Driver for Windows version 15.65 or higher. To see if you have this driver installed:

  1. Type device manager in your Search Windows box. The Device Manager opens.

  2. Click the drop-down arrow to view the Display adapters. You see the adapter that is installed in your computer:

  3. Right-click the adapter name and select Properties.

  4. Click the Driver tab to see the driver version. Make sure the version number is 15.65 or higher.

  5. If your device driver version is lower than 15.65, download and install a higher version.

You are done updating your device driver and are ready to use your GPU.
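The "15.65 or higher" check in steps 4 and 5 is a numeric comparison of the dotted version components. A stdlib-only sketch (the example version strings are illustrative):

```python
def version_tuple(version):
    """Convert a dotted version string such as '15.65.4.4944' to a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# Minimum Intel Graphics Driver version required by this guide.
MINIMUM = version_tuple("15.65")

def driver_is_new_enough(installed):
    """Tuples compare element by element, so this matches 'version 15.65 or higher'."""
    return version_tuple(installed) >= MINIMUM

print(driver_is_new_enough("15.65.4.4944"))  # True
print(driver_is_new_enough("15.40.14.4352"))  # False: needs an update
```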

Optional: Additional Installation Steps for the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

Note: These steps are required only if you want to use Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

To perform inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the following additional installation steps are required:

  1. Download and install Visual C++ Redistributable for Visual Studio 2017
  2. Check with a support engineer whether your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs card requires an SMBUS connection to the PCIe slot (this is unlikely). Install the SMBUS driver only if confirmed (by default, it is not required):
    1. Go to the <INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver directory, where <INSTALL_DIR> is the directory in which the Intel® Distribution of OpenVINO™ toolkit is installed.
    2. Right-click the hddlsmbus.inf file and choose Install from the pop-up menu.

You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.


After configuration is done, you are ready to run the verification scripts with the HDDL Plugin for your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

  1. Open a command prompt window.

  2. Go to the Inference Engine demo directory:

    cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
    
  3. Run the Image Classification verification script. If you access the Internet only through a proxy server, make sure the proxy is configured in your environment.

    demo_squeezenet_download_convert_run.bat -d HDDL
    
  4. Run the Inference Pipeline verification script:

    demo_security_barrier_camera.bat -d HDDL
    

Optional: Update Your Windows Environment Variables

Note: These steps are required only under special circumstances, such as if you forgot to check the box during the CMake* or Python* installation to add the application to your Windows PATH environment variable.

Use these steps to update your Windows PATH if a command you execute returns an error message stating that an application cannot be found. This can happen if you did not add CMake or Python to your PATH environment variable during installation.

  1. In your Search Windows box, type Edit the system environment variables and press Enter. A window similar to the following displays:

  2. At the bottom of the screen, click Environment Variables.

  3. Under System variables, click Path and then Edit:

  4. In the opened window, click Browse. A browse window opens:

  5. If you need to add CMake to the PATH, browse to the directory in which you installed CMake. The default directory is C:\Program Files\CMake.

  6. If you need to add Python to the PATH, browse to the directory in which you installed Python. The default directory is C:\Users\<USER_ID>\AppData\Local\Programs\Python\Python36\Python.

  7. Click OK repeatedly to close each screen.

Your PATH environment variable is updated.
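To confirm the update from a new console, a stdlib-only Python sketch can check whether the tools now resolve from PATH (the ensure_on_path helper is hypothetical and only manipulates a PATH-style string; it does not change the system setting):

```python
import os
import shutil

def ensure_on_path(path_value, directory):
    """Return path_value with directory appended if it is not already listed."""
    entries = path_value.split(os.pathsep) if path_value else []
    if directory not in entries:
        entries.append(directory)
    return os.pathsep.join(entries)

# shutil.which returns None when a command cannot be resolved from PATH.
for tool in ("cmake", "python"):
    status = "found" if shutil.which(tool) else "missing - add its directory to PATH"
    print(tool, status)

# Extending a PATH-style string (illustration only; the system PATH is unchanged):
print(ensure_on_path(r"C:\Windows\system32", r"C:\Program Files\CMake\bin"))
```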

Run the Image Classification Sample Application

Important: This section requires that you have completed Run the Verification Scripts to Verify Installation. That script builds the Image Classification sample application and downloads and converts the required Caffe* SqueezeNet model to an IR.

In this section you will run the Image Classification sample application with the Caffe* SqueezeNet 1.1 model on three types of Intel® hardware: CPU, GPU, and VPU.

The Image Classification sample application binary was built automatically, and the FP16 model IR files were created, when you ran the Image Classification verification script.

The Image Classification sample application binary file is located in the C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\ directory. The Caffe* SqueezeNet model IR files (.bin and .xml) are located in the C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\ directory.

Note: If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default installation directory, replace C:\Program Files (x86)\Intel with the directory where you installed the software.

To run the sample application:

  1. Set up environment variables:

    "C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"

  2. Go to the samples build directory:

    cd C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release

  3. Run the sample executable, specifying the car.png file from the demo directory as the input image, the IR of your FP16 model, and the hardware device to perform inference on.

Note: Running the sample application on hardware other than CPU requires additional hardware configuration steps.

  • For CPU:
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d CPU
  • For GPU:
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d GPU
  • For VPU (Intel® Neural Compute Stick 2):
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d MYRIAD
  • For VPU (Intel® Vision Accelerator Design with Intel® Movidius™ VPUs):
classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d HDDL
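The four commands above differ only in the -d device argument. A small helper (hypothetical, using the default paths from this guide) makes that explicit:

```python
# Builds the argument list for classification_sample_async.exe for a given device.
# Paths are the default locations used in this guide; adjust for your install.
IMAGE = r"C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png"
MODEL = (r"C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models"
         r"\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml")

# Device names accepted in this section: CPU, GPU, MYRIAD (NCS 2), HDDL (HDDL-R).
SUPPORTED_DEVICES = {"CPU", "GPU", "MYRIAD", "HDDL"}

def build_command(device):
    if device not in SUPPORTED_DEVICES:
        raise ValueError(f"unsupported device: {device}")
    return ["classification_sample_async.exe", "-i", IMAGE, "-m", MODEL, "-d", device]

print(" ".join(build_command("CPU")))
```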

For information on Sample Applications, see the Inference Engine Samples Overview.

Congratulations, you have finished the installation of the Intel® Distribution of OpenVINO™ toolkit for Windows*. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.

Summary

In this document, you installed the Intel® Distribution of OpenVINO™ toolkit and its dependencies. You also configured the Model Optimizer for one or more frameworks. After the software was installed and configured, you ran two verification scripts. You might have also installed drivers that will let you use a GPU or VPU to infer your models and run the Image Classification Sample application.

You are now ready to learn more about converting models trained with popular deep learning frameworks to the Inference Engine format, following the links below, or you can move on to running the sample applications.

To learn more about converting deep learning models, go to:

Additional Resources