Inference Device Support
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_UG_query_api
   openvino_docs_OV_UG_supported_plugins_CPU
   openvino_docs_OV_UG_supported_plugins_GPU
   openvino_docs_OV_UG_supported_plugins_VPU
   openvino_docs_OV_UG_supported_plugins_GNA
   openvino_docs_OV_UG_supported_plugins_ARM_CPU
@endsphinxdirective
OpenVINO™ Runtime can infer deep learning models using the following device types:

- [CPU](@ref openvino_docs_OV_UG_supported_plugins_CPU)
- [GPU](@ref openvino_docs_OV_UG_supported_plugins_GPU)
- [VPU](@ref openvino_docs_OV_UG_supported_plugins_VPU)
- [GNA](@ref openvino_docs_OV_UG_supported_plugins_GNA)
- [Arm® CPU](@ref openvino_docs_OV_UG_supported_plugins_ARM_CPU)

For a more detailed list of hardware, see Supported Devices.
Devices similar to the ones used for benchmarking can be accessed using Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. Learn more or register here.
@anchor features_support_matrix
Feature Support Matrix
The table below shows which key features each OpenVINO device plugin supports.
| Capability | CPU | GPU | GNA | Arm® CPU |
|---|---|---|---|---|
| Heterogeneous execution | Yes | Yes | No | Yes |
| Multi-device execution | Yes | Yes | Partial | Yes |
| Automatic batching | No | Yes | No | No |
| Multi-stream execution | Yes | Yes | No | Yes |
| Model caching | Yes | Partial | Yes | No |
| Dynamic shapes | Yes | Partial | No | No |
| Import/Export | Yes | No | Yes | No |
| Preprocessing acceleration | Yes | Yes | No | Partial |
| Stateful models | Yes | No | Yes | No |
| [Extensibility](@ref openvino_docs_Extensibility_UG_Intro) | Yes | Yes | No | No |
For more details on plugin-specific feature limitations, see the corresponding plugin pages.
Enumerating Available Devices
The OpenVINO Runtime API provides dedicated methods for enumerating devices and their capabilities. See the Hello Query Device C++ Sample. Below is an example output from the sample (truncated to device names only):
```sh
./hello_query_device
Available devices:
    Device: CPU
...
    Device: GPU.0
...
    Device: GPU.1
...
    Device: HDDL
```
A simple programmatic way to enumerate the devices and use them with multi-device mode is as follows:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/MULTI2.cpp
      :language: cpp
      :fragment: [part2]
@endsphinxdirective
Beyond the typical "CPU", "GPU", "HDDL", and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed by the hello_query_device sample:
```sh
...
Device: MYRIAD.1.2-ma2480
...
Device: MYRIAD.1.4-ma2480
```
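A qualified name like the ones above is simply the device type followed by an instance suffix after the first dot. As a minimal sketch (the helper below is hypothetical, not part of the OpenVINO API), such a name can be split with standard string operations:

```cpp
#include <string>
#include <utility>

// Hypothetical helper (illustration only, not an OpenVINO API):
// splits a fully qualified device name such as "MYRIAD.1.2-ma2480"
// into the device type ("MYRIAD") and the instance suffix ("1.2-ma2480").
std::pair<std::string, std::string> split_device_name(const std::string& name) {
    const auto dot = name.find('.');
    if (dot == std::string::npos) {
        // Plain, unqualified name such as "CPU" has no instance suffix.
        return {name, ""};
    }
    return {name.substr(0, dot), name.substr(dot + 1)};
}
```

For example, `split_device_name("GPU.1")` yields the type `"GPU"` and the suffix `"1"`.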
So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480". Accordingly, the code that loops over all available devices of the "MYRIAD" type only is as follows:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/MULTI3.cpp
      :language: cpp
      :fragment: [part3]
@endsphinxdirective
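The same idea can be sketched in isolation: given the list of available device names (as returned by, for example, `ov::Core::get_available_devices()`), keep only the devices of one type and join them into the explicit "MULTI:" configuration string. The helpers below are hypothetical illustrations, not OpenVINO APIs:

```cpp
#include <string>
#include <vector>

// Hypothetical helper (illustration only): keeps only device names of the
// given type, matching either the bare type ("CPU") or a qualified
// instance of it ("MYRIAD.1.2-ma2480").
std::vector<std::string> filter_by_type(const std::vector<std::string>& devices,
                                        const std::string& type) {
    std::vector<std::string> out;
    for (const auto& d : devices) {
        if (d == type || d.rfind(type + ".", 0) == 0)
            out.push_back(d);
    }
    return out;
}

// Hypothetical helper (illustration only): joins device names into an
// explicit multi-device configuration string such as
// "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480".
std::string make_multi_config(const std::vector<std::string>& devices) {
    std::string config = "MULTI:";
    for (size_t i = 0; i < devices.size(); ++i) {
        if (i != 0)
            config += ",";
        config += devices[i];
    }
    return config;
}
```

The resulting string can then be used as the device name when compiling a model for multi-device execution.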