Get Started with OpenVINO™ Toolkit on macOS*

The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit you installed on macOS*.

In this guide, you will:

  • Learn the OpenVINO™ inference workflow
  • Run demo scripts that illustrate the workflow and perform the steps for you
  • Run the workflow steps yourself, using detailed instructions with a code sample and demo application

OpenVINO™ toolkit Components

The toolkit consists of three primary components:

  • Model Optimizer: Optimizes models for Intel® architecture, converting models into a format compatible with the Inference Engine. This format is called an Intermediate Representation (IR).
  • Intermediate Representation: The Model Optimizer output. A model converted to a format that has been optimized for Intel® architecture and is usable by the Inference Engine.
  • Inference Engine: The software libraries that run inference against the IR (optimized model) to produce inference results.

In addition, demo scripts, code samples and demo applications are provided to help you get up and running with the toolkit:

  • Demo Scripts - Shell scripts that automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios.
  • Code Samples - Small console applications that show you how to:
    • Utilize specific OpenVINO capabilities in an application.
    • Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
  • [Demo Applications](@ref omz_demos) - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.

Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure

This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see Install Intel® Distribution of OpenVINO™ toolkit for macOS*.

By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as <INSTALL_DIR>:

  • For root or administrator: /opt/intel/openvino_<version>/
  • For regular users: /home/<USER>/intel/openvino_<version>/

For simplicity, a symbolic link to the latest installation is also created: /home/<user>/intel/openvino_2021/.

If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace /opt/intel or /home/<USER>/ with the directory in which you installed the software.

The primary tools for deploying your models and applications are installed to the <INSTALL_DIR>/deployment_tools directory.

Click for the Intel® Distribution of OpenVINO™ toolkit directory structure
| Directory | Description |
|-----------|-------------|
| demo/ | Demo scripts. Demonstrate pipelines for inference scenarios, automatically perform steps and print detailed output to the console. For more information, see the Use OpenVINO: Demo Scripts section. |
| inference_engine/ | Inference Engine directory. Contains Inference Engine API binaries and source files, samples and extensions source files, and resources like hardware drivers. |
| &nbsp;&nbsp;external/ | Third-party dependencies and drivers. |
| &nbsp;&nbsp;include/ | Inference Engine header files. For API documentation, see the Inference Engine API Reference. |
| &nbsp;&nbsp;lib/ | Inference Engine static libraries. |
| &nbsp;&nbsp;samples/ | Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the Inference Engine Samples Overview. |
| &nbsp;&nbsp;share/ | CMake configuration files for linking with Inference Engine. |
| ~intel_models/ | Symbolic link to the intel_models subfolder of the open_model_zoo folder. |
| model_optimizer/ | Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the Model Optimizer Developer Guide. |
| ngraph/ | nGraph directory. Includes the nGraph header and library files. |
| open_model_zoo/ | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_group_intel) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy. |
| &nbsp;&nbsp;demos/ | Demo applications for inference scenarios. Also includes documentation and build scripts. |
| &nbsp;&nbsp;intel_models/ | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel). |
| &nbsp;&nbsp;models/ | Intel's trained and public models that can be obtained with Model Downloader. |
| &nbsp;&nbsp;tools/ | Model Downloader and Accuracy Checker tools. |
| tools/ | Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools. |

OpenVINO™ Workflow Overview

The simplified OpenVINO™ workflow is:

  1. Get a trained model for your inference task. Example inference tasks: pedestrian detection, face detection, vehicle detection, license plate recognition, head pose.
  2. Run the trained model through the Model Optimizer to convert the model to an IR, which consists of a pair of .xml and .bin files that are used as the input for Inference Engine.
  3. Use the Inference Engine API in the application to run inference against the IR (optimized model) and output inference results. The application can be an OpenVINO™ sample, demo, or your own application.
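As an illustration of step 3, below is a minimal Python sketch that runs inference against an IR with the Inference Engine Python API shipped with 2021.x releases. The model and image paths are placeholders for files you prepare in the steps that follow:

from openvino.inference_engine import IECore
import cv2
import numpy as np

ie = IECore()
# Read the IR produced by the Model Optimizer (a pair of .xml and .bin files).
net = ie.read_network(model="squeezenet1.1.xml", weights="squeezenet1.1.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Resize the input image to the network's expected NCHW shape.
input_blob = next(iter(net.input_info))
_, _, h, w = net.input_info[input_blob].input_data.shape
image = cv2.resize(cv2.imread("car.png"), (w, h)).transpose((2, 0, 1))

# Run inference; the result maps output blob names to numpy arrays.
results = exec_net.infer({input_blob: np.expand_dims(image, 0)})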

Use the Demo Scripts to Learn the Workflow

The demo scripts in <INSTALL_DIR>/deployment_tools/demo give you a starting point to learn the OpenVINO workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios. The demo steps let you see how to:

  • Compile several samples from the source files delivered as part of the OpenVINO toolkit
  • Download trained models
  • Perform pipeline steps and see the output on the console

NOTE: You must have Internet access to run the demo scripts. If your Internet access is through a proxy server, make sure the operating system environment proxy information is configured.
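For example, you can export the standard proxy environment variables in the shell session that runs the scripts (the proxy address below is a placeholder; replace it with your own):

# Placeholder proxy address - replace with your organization's proxy
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080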

The demo scripts can run inference on any supported target device. Although the default inference device is CPU, you can use the -d parameter to change the inference device. The general command to run the scripts looks as follows:

./<script_name> -d [CPU, MYRIAD]

Before running the demo applications on an Intel® Neural Compute Stick 2 device, you must complete additional configuration steps. For details, see the Steps for Intel® Neural Compute Stick 2 section in the installation instructions.

The following paragraphs describe each demo script.

Image Classification Demo Script

The demo_squeezenet_download_convert_run script illustrates the image classification pipeline.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to the IR.
  3. Builds the Image Classification Sample Async application.
  4. Runs the compiled sample with the car.png image located in the demo directory.
Click for an example of running the Image Classification demo script

To run the script to perform inference on a CPU:

./demo_squeezenet_download_convert_run.sh

When the script completes, you see the label and confidence for the top-10 categories:


Top 10 results:

Image /opt/intel/openvino_2021/deployment_tools/demo/car.png

classid probability label
------- ----------- -----
817     0.6853030   sports car, sport car
479     0.1835197   car wheel
511     0.0917197   convertible
436     0.0200694   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751     0.0069604   racer, race car, racing car
656     0.0044177   minivan
717     0.0024739   pickup, pickup truck
581     0.0017788   grille, radiator grille
468     0.0013083   cab, hack, taxi, taxicab
661     0.0007443   Model T

[ INFO ] Execution successful

Inference Pipeline Demo Script

The demo_security_barrier_camera.sh script uses vehicle recognition, in which vehicle attributes build on each other to narrow in on a specific attribute.

The script:

  1. Downloads three pre-trained model IRs.
  2. Builds the Security Barrier Camera Demo application.
  3. Runs the application with the downloaded models and the car_1.bmp image from the demo directory to show an inference pipeline.

This application:

  1. Identifies an object as a vehicle.
  2. Uses the vehicle identification as input to the second model, which identifies specific vehicle attributes, including the license plate.
  3. Uses the license plate as input to the third model, which recognizes specific characters in the license plate.
Click for an example of Running the Pipeline demo script

To run the script performing inference on a CPU:

./demo_security_barrier_camera.sh

When the verification script completes, you see an image that displays the resulting frame with detections rendered as bounding boxes and text labels.

Benchmark Demo Script

The demo_benchmark_app script illustrates how to use the Benchmark Application to estimate deep learning inference performance on supported devices.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to the IR.
  3. Builds the Inference Engine Benchmark tool.
  4. Runs the tool with the car.png image located in the demo directory.
Click for an example of running the Benchmark demo script

To run the script that performs inference on a CPU:

./demo_benchmark_app.sh

When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.

Use Code Samples and Demo Applications to Learn the Workflow

This section guides you through a simplified workflow for the Intel® Distribution of OpenVINO™ toolkit using code samples and demo applications.

You will perform the following steps:

  1. Use the Model Downloader to download suitable models.
  2. Convert the models with the Model Optimizer.
  3. Download media files to run inference on.
  4. Run inference on the Image Classification Code Sample and see the results.
  5. Run inference on the Security Barrier Camera Demo application and see the results.

Each demo and code sample is a separate application, but they share the same behavior and components.

Inputs you need to specify when using a code sample or demo application:

  • A compiled OpenVINO™ code sample or demo application that runs inference against a model converted by the Model Optimizer to an IR, using the other inputs you provide.
  • One or more models in the IR format. Each model is trained for a specific task. Examples include pedestrian detection, face detection, vehicle detection, license plate recognition, head pose, and others. Different models are used for different applications. Models can be chained together to provide multiple features; for example, vehicle + make/model + license plate recognition.
  • One or more media files. The media is typically a video file, but can be a still photo.
  • One or more target devices on which you run inference. The target device can be the CPU or a VPU accelerator.

Build the Code Samples and Demo Applications

To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that are automatically compiled when you run the Image Classification and Inference Pipeline demo scripts. The binary files are in the ~/inference_engine_samples_build/intel64/Release and ~/inference_engine_demos_build/intel64/Release directories, respectively.

You can also build all available sample code and demo applications from the source files delivered with the OpenVINO toolkit. To learn how to do this, see the instructions in the Inference Engine Code Samples Overview and [Demo Applications Overview](@ref omz_demos) sections.

Step 1: Download the Models

You must have a model that is specific to your inference task. Example model types are:

  • Classification (AlexNet, GoogleNet, SqueezeNet, others) - Detects one type of element in a frame.
  • Object Detection (SSD, YOLO) - Draws bounding boxes around multiple types of objects.
  • Custom (Often based on SSD)

Options to find a model suitable for the OpenVINO™ toolkit are:

  • Download public and Intel's pre-trained models from the Open Model Zoo using the [Model Downloader tool](@ref omz_tools_downloader).
  • Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, and other resources.
  • Train your own model.

This guide uses the Model Downloader to get pre-trained models. You can use one of the following options to find a model:

  • List the models available in the downloader:
cd /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/
python3 info_dumper.py --print_all
  • Use grep to list models that have a specific name pattern:
python3 info_dumper.py --print_all | grep <model_name>
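For example, to list the SqueezeNet variants used later in this guide:

python3 info_dumper.py --print_all | grep squeezenet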

Use the Model Downloader to download the models to a models directory. This guide uses <models_dir> as the models directory and <model_name> as the model name:

sudo python3 ./downloader.py --name <model_name> --output_dir <models_dir>

NOTE: Always run the downloader with sudo.

Download the following models if you want to run the Image Classification Sample and Security Barrier Camera Demo application:

| Model Name | Code Sample or Demo App |
|------------|-------------------------|
| squeezenet1.1 | Image Classification Sample |
| vehicle-license-plate-detection-barrier-0106 | Security Barrier Camera Demo application |
| vehicle-attributes-recognition-barrier-0039 | Security Barrier Camera Demo application |
| license-plate-recognition-barrier-0001 | Security Barrier Camera Demo application |
Click for an example of downloading the SqueezeNet Caffe* model

To download the SqueezeNet 1.1 Caffe* model to the ~/models folder:

sudo python3 ./downloader.py --name squeezenet1.1 --output_dir ~/models

Your screen looks similar to this after the download:

###############|| Downloading models ||###############

========= Downloading /Users/username/models/public/squeezenet1.1/squeezenet1.1.prototxt
... 100%, 9 KB, 44058 KB/s, 0 seconds passed

========= Downloading /Users/username/models/public/squeezenet1.1/squeezenet1.1.caffemodel
... 100%, 4834 KB, 4877 KB/s, 0 seconds passed

###############|| Post processing ||###############

========= Replacing text in /Users/username/models/public/squeezenet1.1/squeezenet1.1.prototxt =========
Click for an example of downloading models for the Security Barrier Camera Demo application

To download all three pre-trained models in FP16 precision to the ~/models folder:

./downloader.py --name vehicle-license-plate-detection-barrier-0106,vehicle-attributes-recognition-barrier-0039,license-plate-recognition-barrier-0001 --output_dir ~/models --precisions FP16

Your screen looks similar to this after the download:

################|| Downloading models ||################

========== Downloading /Users/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml
... 100%, 207 KB, 313926 KB/s, 0 seconds passed

========== Downloading /Users/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.bin
... 100%, 1256 KB, 2552 KB/s, 0 seconds passed

========== Downloading /Users/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml
... 100%, 32 KB, 172042 KB/s, 0 seconds passed

========== Downloading /Users/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.bin
... 100%, 1222 KB, 2712 KB/s, 0 seconds passed

========== Downloading /Users/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml
... 100%, 47 KB, 217130 KB/s, 0 seconds passed

========== Downloading /Users/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.bin
... 100%, 2378 KB, 4222 KB/s, 0 seconds passed

################|| Post-processing ||################

Step 2: Convert the Models to the Intermediate Representation

In this step, your trained models are ready to run through the Model Optimizer to convert them to the Intermediate Representation (IR) format. This is required before using the Inference Engine with the model.

Models in the Intermediate Representation format always include a pair of .xml and .bin files. Make sure you have both files so that the Inference Engine can find them:

  • REQUIRED: model_name.xml
  • REQUIRED: model_name.bin

This guide uses the public SqueezeNet 1.1 Caffe* model to run the Image Classification Sample. See the example in the Step 1: Download the Models section to learn how to download this model.

The squeezenet1.1 model is downloaded in the Caffe* format. You must use the Model Optimizer to convert the model to the IR. The vehicle-license-plate-detection-barrier-0106, vehicle-attributes-recognition-barrier-0039, license-plate-recognition-barrier-0001 models are downloaded in the Intermediate Representation format. You don't need to use the Model Optimizer to convert these models.

  1. Create an <ir_dir> directory to contain the model's IR.

  2. The Inference Engine can perform inference on different precision formats, such as FP32, FP16, INT8. To prepare an IR with specific precision, run the Model Optimizer with the appropriate --data_type option.

  3. Run the Model Optimizer script:

    cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
    
    python3 ./mo.py --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
    

    The produced IR files are in the <ir_dir> directory.

Click for an example of converting the SqueezeNet Caffe* model

The following command converts the public SqueezeNet 1.1 Caffe* model to an FP16 IR and saves it to the ~/models/public/squeezenet1.1/ir output directory:

cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 ./mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir ~/models/public/squeezenet1.1/ir

After the Model Optimizer script completes, the produced IR files (squeezenet1.1.xml, squeezenet1.1.bin) are in the specified ~/models/public/squeezenet1.1/ir directory.

Copy the squeezenet1.1.labels file from the /opt/intel/openvino_2021/deployment_tools/demo/ directory to <ir_dir>. This file contains the class names used by ImageNet, so the inference results show text labels instead of class numbers:

cp /opt/intel/openvino_2021/deployment_tools/demo/squeezenet1.1.labels <ir_dir>

Step 3: Download a Video or a Still Photo as Media

Many sources are available from which you can download video media to use with the code samples and demo applications.

As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two sample images that you can use for running code samples and demo applications:

  • /opt/intel/openvino_2021/deployment_tools/demo/car.png
  • /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp

Step 4: Run the Image Classification Code Sample

NOTE: The Image Classification code sample is automatically compiled when you run the Image Classification demo script. If you want to compile it manually, see the Inference Engine Code Samples Overview document.

To run the Image Classification code sample with an input image on the IR:

  1. Set up the OpenVINO environment variables:
    source /opt/intel/openvino_2021/bin/setupvars.sh
    
  2. Go to the code samples build directory:
    cd ~/inference_engine_samples_build/intel64/Release
    
  3. Run the code sample executable, specifying the input media file, the IR of your model, and a target device on which you want to perform inference:
    classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>
    
Click for examples of running the Image Classification code sample on different devices

The following commands run the Image Classification Code Sample using the car.png file from the /opt/intel/openvino_2021/deployment_tools/demo/ directory as an input image and the IR of your model from ~/models/public/squeezenet1.1/ir on different hardware devices:

CPU:

./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU

MYRIAD:

NOTE: Running inference on VPU devices (Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps. For details, see the Steps for Intel® Neural Compute Stick 2 section in the installation instructions.

./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD

When the Sample Application completes, you see the label and confidence for the top-10 categories on the display. Below is a sample output with inference results on CPU:

Top 10 results:

Image /opt/intel/openvino_2021/deployment_tools/demo/car.png

classid probability label
------- ----------- -----
817     0.8364177   sports car, sport car
511     0.0945683   convertible
479     0.0419195   car wheel
751     0.0091233   racer, race car, racing car
436     0.0068038   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656     0.0037315   minivan
586     0.0025940   half track
717     0.0016044   pickup, pickup truck
864     0.0012045   tow truck, tow car, wrecker
581     0.0005833   grille, radiator grille

[ INFO ] Execution successful

Step 5: Run the Security Barrier Camera Demo Application

NOTE: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the [Demo Applications Overview](@ref omz_demos) section.

To run the Security Barrier Camera Demo Application using an input image on the prepared IRs:

  1. Set up the OpenVINO environment variables:
    source /opt/intel/openvino_2021/bin/setupvars.sh
    
  2. Go to the demo application build directory:
    cd ~/inference_engine_demos_build/intel64/Release
    
  3. Run the demo executable, specifying the input media file, list of model IRs, and a target device on which to perform inference:
    ./security_barrier_camera_demo -i <path_to_media> -m <path_to_vehicle-license-plate-detection_model_xml> -m_va <path_to_vehicle_attributes_model_xml> -m_lpr <path_to_license_plate_recognition_model_xml> -d <target_device>
    
Click for examples of running the Security Barrier Camera demo application on different devices

CPU:

./security_barrier_camera_demo -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m ~/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va ~/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr ~/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU

MYRIAD:

NOTE: Running inference on VPU devices (Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps. For details, see the Steps for Intel® Neural Compute Stick 2 section in the installation instructions.

./security_barrier_camera_demo -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m ~/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va ~/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr ~/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d MYRIAD

Basic Guidelines for Using Code Samples and Demo Applications

Following are some basic guidelines for executing the OpenVINO™ workflow using the code samples and demo applications:

  1. Before using the OpenVINO™ samples, always set up the environment:
source /opt/intel/openvino_2021/bin/setupvars.sh
  2. Have the directory path for the following:
  • Code Sample binaries located in ~/inference_engine_cpp_samples_build/intel64/Release
  • Demo Application binaries located in ~/inference_engine_demos_build/intel64/Release
  • Media: Video or image. See Download Media.
  • Model: Neural Network topology converted with the Model Optimizer to the IR format (.bin and .xml files). See Download Models for more information.
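Putting these guidelines together, a typical session (assuming the SqueezeNet IR prepared in the earlier steps) looks like:

source /opt/intel/openvino_2021/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU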

Typical Code Sample and Demo Application Syntax Examples

Template to call sample code or a demo application:

<path_to_app> -i <path_to_media> -m <path_to_model> -d <target_device>

With the sample information specified, the command might look like this:

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU

Advanced Demo Use

Some demo applications let you use multiple models for different purposes. In these cases, the output of the first model is usually used as the input for later models.

For example, an SSD detects a variety of objects in a frame, and then age, gender, head pose, emotion recognition, and similar models take the objects detected by the SSD as input to perform their functions.

In these cases, the use pattern in the last part of the template above is usually:

-m_<acronym> … -d_<acronym> …

For head pose:

-m_hp <headpose model> -d_hp <headpose hardware target>

Example of an Entire Command (object_detection + head pose):

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU

Example of an Entire Command (object_detection + head pose + age-gender):

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU -m_ag age-gender.xml -d_ag CPU

You can see all of a sample application's parameters by adding the -h or --help option at the command line.

Additional Resources

Use these resources to learn more about the OpenVINO™ toolkit: