
# Getting Started with Demo Scripts

## Introduction

The demo scripts in the `openvino_2021/deployment_tools/demo` directory give you a starting point for learning the OpenVINO™ workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios. The demo steps show you how to:

* Compile several samples from the source files delivered as part of the OpenVINO™ toolkit.
* Download trained models.
* Convert the models to the Intermediate Representation (IR) format used by OpenVINO™ with Model Optimizer.
* Perform pipeline steps and see the output on the console.
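The first three steps above can also be sketched as manual commands. The following is a hypothetical walk-through for a default Linux install, not what the scripts literally execute: the Model Downloader location and the output directories are assumptions and may vary by release.

```shell
# Hypothetical manual walk-through of what the demo scripts automate.
# Paths assume a default Linux install; adjust for your system.
INSTALL_DIR=/opt/intel/openvino_2021
DEMO_DIR="$INSTALL_DIR/deployment_tools/demo"

if [ -d "$INSTALL_DIR" ]; then
    # Put the OpenVINO tools on PATH for this shell session.
    . "$INSTALL_DIR/bin/setupvars.sh"

    # Download a trained SqueezeNet model with the Model Downloader
    # (location assumed from the Open Model Zoo layout).
    python3 "$INSTALL_DIR/deployment_tools/open_model_zoo/tools/downloader/downloader.py" \
        --name squeezenet1.1 -o "$HOME/openvino_models/models"

    # Convert it to IR with the Model Optimizer.
    python3 "$INSTALL_DIR/deployment_tools/model_optimizer/mo.py" \
        --input_model "$HOME/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel" \
        --data_type FP16 \
        --output_dir "$HOME/openvino_models/ir/public/squeezenet1.1/FP16"
fi
```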

This guide assumes you completed all installation and configuration steps. If you have not yet installed and configured the toolkit:

@sphinxdirective

.. tab:: Linux

   See :doc:`Install Intel® Distribution of OpenVINO™ toolkit for Linux* <openvino_docs_install_guides_installing_openvino_linux>`

.. tab:: Windows

   See :doc:`Install Intel® Distribution of OpenVINO™ toolkit for Windows* <openvino_docs_install_guides_installing_openvino_windows>`

.. tab:: macOS

   See :doc:`Install Intel® Distribution of OpenVINO™ toolkit for macOS* <openvino_docs_install_guides_installing_openvino_macos>`

@endsphinxdirective

The demo scripts can run inference on any supported target device. Although the default inference device (i.e., processor) is the CPU, you can add the `-d` parameter to specify a different one. The general command to run a demo script is as follows:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      # If you installed in a location other than /opt/intel, substitute that path.
      cd /opt/intel/openvino_2021/deployment_tools/demo/
      ./<script_name> -d [CPU, GPU, MYRIAD, HDDL]

.. tab:: Windows

   .. code-block:: bat

      rem If you installed in a location other than the default, substitute that path.
      cd "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo"
      .\<script_name> -d [CPU, GPU, MYRIAD, HDDL]

.. tab:: macOS

   .. code-block:: sh

      # If you installed in a location other than /opt/intel, substitute that path.
      cd /opt/intel/openvino_2021/deployment_tools/demo/
      ./<script_name> -d [CPU, MYRIAD]

@endsphinxdirective

Before running the demo applications on Intel® Processor Graphics or on an Intel® Neural Compute Stick 2 device, you must complete additional configuration steps.

@sphinxdirective

.. tab:: Linux

   For details, see the following sections in the :doc:`installation instructions <openvino_docs_install_guides_installing_openvino_linux>`:

   * Steps for Intel® Processor Graphics (GPU)
   * Steps for Intel® Neural Compute Stick 2

.. tab:: Windows

   For details, see the following sections in the :doc:`installation instructions <openvino_docs_install_guides_installing_openvino_windows>`:

   * Additional Installation Steps for Intel® Processor Graphics (GPU)
   * Additional Installation Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

.. tab:: macOS

   For details, see the following sections in the :doc:`installation instructions <openvino_docs_install_guides_installing_openvino_macos>`:

   * Steps for Intel® Neural Compute Stick 2

@endsphinxdirective

The following sections describe each demo script.

## Image Classification Demo Script

The `demo_squeezenet_download_convert_run` script illustrates the image classification pipeline.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to the IR format used by OpenVINO™.
  3. Builds the Image Classification Sample Async application.
  4. Runs the compiled sample with the car.png image located in the demo directory.
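The final step can be reproduced by hand once the script has run. The following is a minimal sketch, assuming the sample was built into `~/inference_engine_cpp_samples_build` and the converted IR landed in `~/openvino_models`; both locations are assumptions and vary by release.

```shell
# Hypothetical manual re-run of the script's final step.
# SAMPLE and IR paths are assumptions for a default demo-script build.
DEMO_DIR=/opt/intel/openvino_2021/deployment_tools/demo
SAMPLE="$HOME/inference_engine_cpp_samples_build/intel64/Release/classification_sample_async"
IR="$HOME/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml"

if [ -x "$SAMPLE" ]; then
    # Classify car.png on the CPU with the converted SqueezeNet IR.
    "$SAMPLE" -i "$DEMO_DIR/car.png" -m "$IR" -d CPU
fi
```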

### Example of Running the Image Classification Demo Script

@sphinxdirective

.. raw:: html

   <div class="collapsible-section" data-title="Click for an example of running the Image Classification demo script">

@endsphinxdirective

To preview the image that the script will classify:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      cd /opt/intel/openvino_2021/deployment_tools/demo
      eog car.png

.. tab:: Windows

   .. code-block:: bat

      cd "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo"
      car.png

.. tab:: macOS

   .. code-block:: sh

      cd /opt/intel/openvino_2021/deployment_tools/demo
      open car.png

@endsphinxdirective

To run the script and perform inference on the CPU:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ./demo_squeezenet_download_convert_run.sh

.. tab:: Windows

   .. code-block:: bat

      .\demo_squeezenet_download_convert_run.bat

.. tab:: macOS

   .. code-block:: sh

      ./demo_squeezenet_download_convert_run.sh

@endsphinxdirective

When the script completes, you see the label and confidence for the top 10 categories:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      Top 10 results:

      Image /opt/intel/openvino_2021/deployment_tools/demo/car.png

      classid probability label
      ------- ----------- -----
      817     0.8363345   sports car, sport car
      511     0.0946488   convertible
      479     0.0419131   car wheel
      751     0.0091071   racer, race car, racing car
      436     0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
      656     0.0037564   minivan
      586     0.0025741   half track
      717     0.0016069   pickup, pickup truck
      864     0.0012027   tow truck, tow car, wrecker
      581     0.0005882   grille, radiator grille

      [ INFO ] Execution successful

      [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

.. tab:: Windows

   .. code-block:: bat

      Top 10 results:

      Image C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png

      classid probability label
      ------- ----------- -----
      817     0.8363345   sports car, sport car
      511     0.0946488   convertible
      479     0.0419131   car wheel
      751     0.0091071   racer, race car, racing car
      436     0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
      656     0.0037564   minivan
      586     0.0025741   half track
      717     0.0016069   pickup, pickup truck
      864     0.0012027   tow truck, tow car, wrecker
      581     0.0005882   grille, radiator grille

      [ INFO ] Execution successful

      [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

.. tab:: macOS

   .. code-block:: sh

      Top 10 results:

      Image /Users/colin/intel/openvino_2021/deployment_tools/demo/car.png

      classid probability label
      ------- ----------- -----
      817     0.8363345   sports car, sport car
      511     0.0946488   convertible
      479     0.0419131   car wheel
      751     0.0091071   racer, race car, racing car
      436     0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
      656     0.0037564   minivan
      586     0.0025741   half track
      717     0.0016069   pickup, pickup truck
      864     0.0012027   tow truck, tow car, wrecker
      581     0.0005882   grille, radiator grille

      [ INFO ] Execution successful

      [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

@endsphinxdirective

@sphinxdirective

.. raw:: html

   </div>

@endsphinxdirective

## Inference Pipeline Demo Script

The `demo_security_barrier_camera` script demonstrates an inference pipeline based on vehicle recognition, in which vehicle attributes build on each other to narrow in on a specific attribute.

The script:

  1. Downloads three pre-trained models, already converted to IR format.
  2. Builds the Security Barrier Camera Demo application.
  3. Runs the application with the three models and the car_1.bmp image from the demo directory to show an inference pipeline.
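The steps above can be sketched as a manual invocation of step 3, in which the demo binary takes one model per pipeline stage. The binary and model locations below are assumptions for a default demo-script build; the `-m`, `-m_va`, and `-m_lpr` flag names follow the 2021 Open Model Zoo Security Barrier Camera Demo.

```shell
# Hedged sketch of step 3: one model per pipeline stage.
# BIN and MODELS paths are assumptions; adjust for your system.
DEMO_DIR=/opt/intel/openvino_2021/deployment_tools/demo
BIN="$HOME/inference_engine_demos_build/intel64/Release/security_barrier_camera_demo"
MODELS="$HOME/openvino_models/ir/intel"

if [ -x "$BIN" ]; then
    # -m: vehicle/license-plate detection, -m_va: vehicle attributes,
    # -m_lpr: license-plate recognition.
    "$BIN" -i "$DEMO_DIR/car_1.bmp" \
        -m     "$MODELS/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml" \
        -m_va  "$MODELS/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml" \
        -m_lpr "$MODELS/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml" \
        -d CPU
fi
```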

This application:

  1. Gets the boundaries of an object identified as a vehicle with the first model.
  2. Uses the vehicle identification as input to the second model, which identifies specific vehicle attributes, including the license plate.
  3. Uses the license plate as input to the third model, which recognizes specific characters in the license plate.

### Example of Running the Pipeline Demo Script

@sphinxdirective

.. raw:: html

   <div class="collapsible-section" data-title="Click for an example of running the Pipeline demo script">

@endsphinxdirective

To run the script and perform inference on Intel® Processor Graphics:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ./demo_security_barrier_camera.sh -d GPU

.. tab:: Windows

   .. code-block:: bat

      .\demo_security_barrier_camera.bat -d GPU

@endsphinxdirective

When the verification script is complete, you see an image that displays the resulting frame with detections rendered as bounding boxes and overlaid text:

@sphinxdirective

.. tab:: Linux

   .. image:: ../img/inference_pipeline_script_lnx.png

.. tab:: Windows

   .. image:: ../img/inference_pipeline_script_win.png

.. tab:: macOS

   .. image:: ../img/inference_pipeline_script_mac.png

@endsphinxdirective

@sphinxdirective

.. raw:: html

   </div>

@endsphinxdirective

## Benchmark Demo Script

The `demo_benchmark_app` script illustrates how to use the Benchmark Application to estimate deep learning inference performance on supported devices.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to IR format.
  3. Builds the Inference Engine Benchmark tool.
  4. Runs the tool with the car.png image located in the demo directory.
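The final step can be sketched as a manual invocation of the tool. The binary and IR locations below are assumptions for a default demo-script build; `-api` and `-niter` are standard `benchmark_app` options.

```shell
# Hedged sketch of the benchmark invocation the script performs.
# APP and IR paths are assumptions; adjust for your system.
DEMO_DIR=/opt/intel/openvino_2021/deployment_tools/demo
APP="$HOME/inference_engine_cpp_samples_build/intel64/Release/benchmark_app"
IR="$HOME/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml"

if [ -x "$APP" ]; then
    # Run 100 asynchronous inference iterations on the CPU and report
    # latency and throughput.
    "$APP" -i "$DEMO_DIR/car.png" -m "$IR" -d CPU -api async -niter 100
fi
```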

### Example of Running the Benchmark Demo Script

@sphinxdirective

.. raw:: html

   <div class="collapsible-section" data-title="Click for an example of running the Benchmark demo script">

@endsphinxdirective

To run the script that measures inference performance:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ./demo_benchmark_app.sh

.. tab:: Windows

   .. code-block:: bat

      .\demo_benchmark_app.bat

.. tab:: macOS

   .. code-block:: sh

      ./demo_benchmark_app.sh

@endsphinxdirective

When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.

@sphinxdirective

.. raw:: html

   </div>

@endsphinxdirective

## Other Get Started Documents

For more get started documents, visit the pages below:

* Get Started with Sample and Demo Applications
* Get Started with Instructions