# Working with devices
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_UG_query_api
   openvino_docs_OV_UG_supported_plugins_CPU
   openvino_docs_OV_UG_supported_plugins_GPU
   openvino_docs_OV_UG_supported_plugins_VPU
   openvino_docs_OV_UG_supported_plugins_GNA
   openvino_docs_OV_UG_supported_plugins_ARM_CPU
@endsphinxdirective
The OpenVINO Runtime provides capabilities to infer deep learning models on the following device types, each with a corresponding plugin:

- CPU
- GPU
- VPU
- GNA
- Arm® CPU
OpenVINO Runtime also has several execution capabilities which work on top of other devices:
| Capability | Description |
|---|---|
| Multi-Device execution | Multi-Device execution enables simultaneous inference of the same model on several devices in parallel. |
| Auto-Device selection | Auto-Device selection enables automatic selection of an Intel device for inference. |
| Heterogeneous execution | Heterogeneous execution enables automatic splitting of inference between several devices (for example, when a device does not support certain operations). |
| Automatic Batching | The Auto-Batching plugin performs batching on top of the specified device, completely transparently to the application. |
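Each of these capabilities is selected by passing a virtual device name to `ov::Core::compile_model()`. A minimal sketch of the documented naming syntax (the device lists after the colon and the model path are placeholders):

```cpp
#include <string>

// Virtual device names that select the execution capabilities above.
// The device lists after the colon are examples, not requirements.
const std::string kMulti  = "MULTI:CPU,GPU";   // Multi-Device: infer on CPU and GPU in parallel
const std::string kAuto   = "AUTO";            // Auto-Device selection
const std::string kHetero = "HETERO:GPU,CPU";  // Heterogeneous: GPU first, CPU as fallback
const std::string kBatch  = "BATCH:GPU";       // Automatic Batching on top of GPU

// In an application these strings go wherever a plain device name would, e.g.:
//   ov::Core core;
//   auto compiled = core.compile_model("model.xml", kMulti);
```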
Devices similar to the ones used for benchmarking can be accessed using Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. Learn more or register here.
@anchor features_support_matrix
## Feature Support Matrix

The table below demonstrates support for key features across OpenVINO device plugins.
| Capability | CPU | GPU | GNA | Arm® CPU |
|---|---|---|---|---|
| Heterogeneous execution | Yes | Yes | No | Yes |
| Multi-device execution | Yes | Yes | Partial | Yes |
| Automatic batching | No | Yes | No | No |
| Multi-stream execution | Yes | Yes | No | Yes |
| Models caching | Yes | Partial | Yes | No |
| Dynamic shapes | Yes | Partial | No | No |
| Import/Export | Yes | No | Yes | No |
| Preprocessing acceleration | Yes | Yes | No | Partial |
| Stateful models | Yes | No | Yes | No |
| [Extensibility](@ref openvino_docs_Extensibility_UG_Intro) | Yes | Yes | No | No |
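For quick programmatic checks against this matrix, the "Yes" entries can be encoded as a plain lookup table. This is a purely illustrative sketch (the map, the helper name, and the treatment of "Partial" as unsupported are all choices made here, not part of the OpenVINO API); in a running application, the plugin itself is the authoritative source, queried through the runtime's property API:

```cpp
#include <map>
#include <set>
#include <string>

// Full-support ("Yes") entries from the feature matrix above; "Partial" and
// "No" entries are both omitted for simplicity.
const std::map<std::string, std::set<std::string>> kFullSupport = {
    {"CPU", {"Heterogeneous execution", "Multi-device execution", "Multi-stream execution",
             "Models caching", "Dynamic shapes", "Import/Export",
             "Preprocessing acceleration", "Stateful models", "Extensibility"}},
    {"GPU", {"Heterogeneous execution", "Multi-device execution", "Automatic batching",
             "Multi-stream execution", "Preprocessing acceleration", "Extensibility"}},
    {"GNA", {"Models caching", "Import/Export", "Stateful models"}},
    {"Arm CPU", {"Heterogeneous execution", "Multi-device execution", "Multi-stream execution"}},
};

// Returns true only for features the matrix marks as fully supported ("Yes").
bool fully_supports(const std::string& device, const std::string& feature) {
    auto it = kFullSupport.find(device);
    return it != kFullSupport.end() && it->second.count(feature) > 0;
}
```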
For more details on plugin-specific feature limitations, see the corresponding plugin pages.
## Enumerating Available Devices

The OpenVINO Runtime API provides dedicated methods for enumerating devices and their capabilities. See the Hello Query Device C++ Sample. Below is an example output from the sample (truncated to device names only):
```sh
./hello_query_device
Available devices:
    Device: CPU
...
    Device: GPU.0
...
    Device: GPU.1
...
    Device: HDDL
```
A simple programmatic way to enumerate the devices and use them with Multi-Device execution is as follows:
@sphinxdirective

.. tab:: C++

    .. doxygensnippet:: docs/snippets/MULTI2.cpp
       :language: cpp
       :fragment: [part2]

@endsphinxdirective
Beyond the typical "CPU", "GPU", "HDDL", and so on, when multiple instances of a device are available, the names become more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed by the Hello Query Device sample:
```sh
...
    Device: MYRIAD.1.2-ma2480
...
    Device: MYRIAD.1.4-ma2480
```
So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480". Accordingly, the code that loops over only the available devices of the "MYRIAD" type is as follows:
@sphinxdirective

.. tab:: C++

    .. doxygensnippet:: docs/snippets/MULTI3.cpp
       :language: cpp
       :fragment: [part3]

@endsphinxdirective
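The filtering step in the snippet above can be sketched in plain C++. The helper name below is illustrative (not part of the OpenVINO API); in a real application the device list would come from `ov::Core::get_available_devices()`:

```cpp
#include <string>
#include <vector>

// Illustrative helper: select all devices of the given type ("MYRIAD" matches
// qualified instance names such as "MYRIAD.1.2-ma2480") and build the explicit
// Multi-Device configuration string from them.
std::string multi_config_for(const std::vector<std::string>& all_devices,
                             const std::string& type) {
    std::string config = "MULTI:";
    bool first = true;
    for (const auto& name : all_devices) {
        // A name matches if it equals the type or starts with "<type>.".
        if (name == type || name.rfind(type + ".", 0) == 0) {
            if (!first)
                config += ",";
            config += name;
            first = false;
        }
    }
    return config;
}
```

For the two sticks listed above, this produces exactly the explicit configuration string shown earlier.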