# Introduction to Inference Engine Device Query API

## Inference Engine Query API (C++)

@sphinxdirective
.. raw:: html

   <div id="switcher-cpp" class="switcher-anchor">C++</div>

@endsphinxdirective

The OpenVINO™ toolkit supports inference with several types of devices (processors or accelerators). This section provides a high-level description of how to query different device properties and configuration values at runtime. Refer to the Hello Query Device C++ Sample sources and the Multi-Device Plugin documentation for examples of using the Inference Engine Query API in user applications.

### Using the Inference Engine Query API in Your Code

The `InferenceEngine::Core` class provides the following API to query device information and set or get device configuration properties:

* `InferenceEngine::Core::GetAvailableDevices` - Provides a list of available devices. If there is more than one instance of a specific device, the devices are enumerated with the `.suffix` format, where `suffix` is a unique string identifier. Each device name can be passed to all methods of the `InferenceEngine::Core` class that work with devices, for example `InferenceEngine::Core::LoadNetwork`.
* `InferenceEngine::Core::GetMetric` - Provides information about a specific device.
* `InferenceEngine::Core::GetConfig` - Gets the current value of a specific configuration key.
* `InferenceEngine::Core::SetConfig` - Sets a new value for the configuration key.

The `InferenceEngine::ExecutableNetwork` class is also extended to support the Query API:

* `InferenceEngine::ExecutableNetwork::GetMetric`
* `InferenceEngine::ExecutableNetwork::GetConfig`
* `InferenceEngine::ExecutableNetwork::SetConfig`

### Query API in the Core Class

#### GetAvailableDevices

@snippet snippets/InferenceEngine_QueryAPI0.cpp part0
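
The referenced snippet is kept in a separate file and is not reproduced on this page; a minimal sketch of an equivalent call, using the standard `InferenceEngine::Core` API, might look like this:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <vector>

int main() {
    InferenceEngine::Core core;
    // Enumerate all devices visible to the Inference Engine runtime
    std::vector<std::string> availableDevices = core.GetAvailableDevices();
    for (const auto& device : availableDevices) {
        std::cout << device << std::endl;
    }
    return 0;
}
```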

The function returns a list of available devices, for example:

```
MYRIAD.1.2-ma2480
MYRIAD.1.4-ma2480
CPU
GPU.0
GPU.1
```

Each device name can then be passed to:

* `InferenceEngine::Core::LoadNetwork` to load the network to a specific device.
* `InferenceEngine::Core::GetMetric` to get common or device-specific metrics.
* All other methods of the `InferenceEngine::Core` class that accept `deviceName`.

#### GetConfig()

The code below demonstrates how to check whether the HETERO device dumps GraphViz `.dot` files with split graphs during the split stage:

@snippet snippets/InferenceEngine_QueryAPI1.cpp part1
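
The snippet itself is not shown here; a minimal sketch of such a query, passing the `HETERO_DUMP_GRAPH_DOT` key as a plain string, might look like this:

```cpp
#include <inference_engine.hpp>

#include <iostream>

int main() {
    InferenceEngine::Core core;
    // Query the current value of the HETERO dump-graph option ("YES" or "NO")
    auto dumpDotFiles = core.GetConfig("HETERO", "HETERO_DUMP_GRAPH_DOT").as<std::string>();
    std::cout << "HETERO_DUMP_GRAPH_DOT = " << dumpDotFiles << std::endl;
    return 0;
}
```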

For documentation about common configuration keys, refer to `ie_plugin_config.hpp`. Device-specific configuration keys can be found in the corresponding plugin folders.

#### GetMetric()

To extract device properties such as available devices, device name, supported configuration keys, and others, use the `InferenceEngine::Core::GetMetric` method:

@snippet snippets/InferenceEngine_QueryAPI2.cpp part2
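
A minimal sketch of such a query (the snippet file is not included on this page) might look like the following:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core core;
    // FULL_DEVICE_NAME is a common, device-agnostic metric declared in ie_plugin_config.hpp
    std::string cpuDeviceName = core.GetMetric("CPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
    std::cout << cpuDeviceName << std::endl;
    return 0;
}
```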

A returned value appears as follows: `Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz`.

> **NOTE**: All metrics have a type, which is specified during metric instantiation. The list of common device-agnostic metrics can be found in `ie_plugin_config.hpp`. Device-specific metrics (for example, for `HDDL` or `MYRIAD` devices) can be found in the corresponding plugin folders.

### Query API in the ExecutableNetwork Class

#### GetMetric()

The method is used to get an executable network specific metric such as `METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)`:

@snippet snippets/InferenceEngine_QueryAPI3.cpp part3
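
A sketch of such a query, assuming an illustrative `model.xml` IR file, might look like this:

```cpp
#include <inference_engine.hpp>

#include <iostream>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");  // path is illustrative
    InferenceEngine::ExecutableNetwork execNetwork = core.LoadNetwork(network, "CPU");
    // Ask the plugin how many infer requests it recommends creating for this network
    auto optimalNireq = execNetwork.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)).as<unsigned int>();
    std::cout << "Optimal number of infer requests: " << optimalNireq << std::endl;
    return 0;
}
```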

Or the current temperature of the MYRIAD device:

@snippet snippets/InferenceEngine_QueryAPI4.cpp part4
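
The corresponding sketch, assuming a MYRIAD device is present and an illustrative `model.xml` path:

```cpp
#include <inference_engine.hpp>

#include <iostream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // path is illustrative
    auto execNetwork = core.LoadNetwork(network, "MYRIAD");
    // DEVICE_THERMAL is reported by the MYRIAD plugin as a float value
    float temperature = execNetwork.GetMetric(METRIC_KEY(DEVICE_THERMAL)).as<float>();
    std::cout << "Device temperature: " << temperature << std::endl;
    return 0;
}
```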

#### GetConfig()

The method is used to get information about configuration values the executable network has been created with:

@snippet snippets/InferenceEngine_QueryAPI5.cpp part5
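
A sketch of such a query, reusing the `CPU_THREADS_NUM` configuration key shown in the Python examples below:

```cpp
#include <inference_engine.hpp>

#include <iostream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // path is illustrative
    auto execNetwork = core.LoadNetwork(network, "CPU");
    // Read back a configuration value the executable network was created with
    auto nthreads = execNetwork.GetConfig("CPU_THREADS_NUM").as<std::string>();
    std::cout << "CPU_THREADS_NUM = " << nthreads << std::endl;
    return 0;
}
```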

#### SetConfig()

The only device that supports this method is Multi-Device.
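
A minimal sketch of changing device priorities on a Multi-Device executable network, assuming the `MULTI_DEVICE_PRIORITIES` key shown in the Python section below:

```cpp
#include <inference_engine.hpp>

#include <string>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // path is illustrative
    auto execNetwork = core.LoadNetwork(network, "MULTI:GPU,CPU");
    // Reorder the device priorities of the MULTI plugin at runtime
    execNetwork.SetConfig({{"MULTI_DEVICE_PRIORITIES", std::string("GPU,CPU")}});
    return 0;
}
```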

## Inference Engine Query API (Python)

@sphinxdirective
.. raw:: html

   <div id="switcher-python" class="switcher-anchor">Python</div>

@endsphinxdirective

This section provides a high-level description of how to query different device properties and configuration values at runtime. Refer to the Hello Query Device Python Sample sources and the Multi-Device Plugin documentation for examples of using the Inference Engine Query API in user applications.

### Using the Inference Engine Query API in Your Code

The Inference Engine `IECore` class provides an API to query device information and to set or get device configuration properties. The `ie_api.ExecutableNetwork` class is also extended to support the Query API. The sections below show how to use both classes.

### Query API in the IECore Class

#### Get Available Devices

```python
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)
```

This code prints a list of available devices, for example:

```
MYRIAD.1.2-ma2480
MYRIAD.1.4-ma2480
FPGA.0
FPGA.1
CPU
GPU.0
GPU.1
```

Each device name can then be passed to:

* `IECore.load_network` to load the network to a specific device.
* `IECore.get_metric` to get common or device-specific metrics.
* All other methods of the `IECore` class that accept a device name.

#### Get Metric

To extract device properties such as available devices, device name, supported configuration keys, and others, use the `IECore.get_metric` method:

```python
from openvino.inference_engine import IECore

ie = IECore()
ie.get_metric(device_name="CPU", metric_name="FULL_DEVICE_NAME")
```

A returned value appears as follows: `Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz`.

To list all supported metrics for a device:

```python
from openvino.inference_engine import IECore

ie = IECore()
ie.get_metric(device_name="GPU", metric_name="SUPPORTED_METRICS")
```

#### Get Configuration

The code below uses the `IECore.get_config` method and demonstrates how to check whether the HETERO device dumps `.dot` files with split graphs during the split stage:

```python
from openvino.inference_engine import IECore

ie = IECore()
ie.get_config(device_name="HETERO", config_name="HETERO_DUMP_GRAPH_DOT")
```

To list all supported configuration keys for a device:

```python
from openvino.inference_engine import IECore

ie = IECore()
device = "CPU"  # any device name returned by ie.available_devices
ie.get_metric(device_name=device, metric_name="SUPPORTED_CONFIG_KEYS")
```

For documentation about common configuration keys, refer to `ie_plugin_config.hpp`. Device-specific configuration keys can be found in the corresponding plugin folders.

### Query API in the ExecutableNetwork Class

#### Get Metric

To get the name of the loaded network:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml_file)
exec_net = ie.load_network(network=net, device_name=device)
exec_net.get_metric("NETWORK_NAME")
```

Use exec_net.get_metric("SUPPORTED_METRICS") to list all supported metrics for an ExecutableNetwork instance.

#### Get Configuration

The `ExecutableNetwork.get_config` method is used to get information about configuration values the executable network has been created with:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml_file)
exec_net = ie.load_network(network=net, device_name="CPU")
exec_net.get_config("CPU_THREADS_NUM")
```

To get the current temperature of the MYRIAD device, query the `DEVICE_THERMAL` metric:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml_file)
exec_net = ie.load_network(network=net, device_name="MYRIAD")
exec_net.get_metric("DEVICE_THERMAL")
```

Use exec_net.get_metric("SUPPORTED_CONFIG_KEYS") to list all supported configuration keys.

#### Set Configuration

The only device that supports this method in the `ExecutableNetwork` class is Multi-Device, where you can change the priorities of the devices for the Multi plugin in real time: `exec_net.set_config({"MULTI_DEVICE_PRIORITIES": "GPU,CPU"})`. See the Multi-Device documentation for more details.