
# Working with devices

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_UG_query_api
   openvino_docs_OV_UG_supported_plugins_CPU
   openvino_docs_OV_UG_supported_plugins_GPU
   openvino_docs_OV_UG_supported_plugins_VPU
   openvino_docs_OV_UG_supported_plugins_GNA
   openvino_docs_OV_UG_supported_plugins_ARM_CPU

@endsphinxdirective

The OpenVINO Runtime provides capabilities to infer deep learning models on the following device types with corresponding plugins:

| Plugin | Device types |
|--------|--------------|
| CPU | Intel® Xeon®, Intel® Core™ and Intel® Atom® processors with Intel® Streaming SIMD Extensions (Intel® SSE4.2), Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® Vector Neural Network Instructions (Intel® AVX512-VNNI) and bfloat16 extension for AVX-512 (Intel® AVX-512_BF16 Extension) |
| GPU | Intel® Graphics, including Intel® HD Graphics, Intel® UHD Graphics, Intel® Iris® Graphics, Intel® Xe Graphics, Intel® Xe MAX Graphics |
| VPUs | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |
| GNA | Intel® Speech Enabling Developer Kit; Amazon Alexa Premium Far-Field Developer Kit; Intel® Pentium® Silver Processors N5xxx, J5xxx and Intel® Celeron® Processors N4xxx, J4xxx (formerly codenamed Gemini Lake): Intel® Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® J4125 Processor, Intel® Celeron® Processor N4100, Intel® Celeron® Processor N4000; Intel® Pentium® Processors N6xxx, J6xxx, Intel® Celeron® Processors N6xxx, J6xxx and Intel Atom® x6xxxxx (formerly codenamed Elkhart Lake); Intel® Core™ Processors (formerly codenamed Cannon Lake); 10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake): Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, Intel® Core™ i3-1000G4 Processor; 11th Generation Intel® Core™ Processors (formerly codenamed Tiger Lake); 12th Generation Intel® Core™ Processors (formerly codenamed Alder Lake) |
| Arm® CPU | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |

OpenVINO Runtime also has several execution capabilities which work on top of other devices:

| Capability | Description |
|------------|-------------|
| Multi-Device execution | Multi-Device enables simultaneous inference of the same model on several devices in parallel. |
| Auto-Device selection | Auto-Device selection enables automatic selection of an Intel device for inference. |
| Heterogeneous execution | Heterogeneous execution enables automatic splitting of inference between several devices (for example, if a device does not support a certain operation). |
| Automatic Batching | The Auto-Batching plugin enables batching (on top of the specified device) that is completely transparent to the application. |
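Each of these modes is selected by passing a composite device name (for example, `"MULTI:CPU,GPU"`) to `ov::Core::compile_model`. As a minimal std-only sketch of how such a string is assembled (the `make_virtual_device` helper below is hypothetical, not part of the OpenVINO API):

```cpp
#include <string>
#include <vector>

// Hypothetical helper: build the device string for a virtual execution mode,
// e.g. make_virtual_device("MULTI", {"CPU", "GPU"}) -> "MULTI:CPU,GPU".
// The same pattern applies to the "AUTO" and "HETERO" modes.
std::string make_virtual_device(const std::string& mode,
                                const std::vector<std::string>& devices) {
    std::string result = mode + ":";
    for (size_t i = 0; i < devices.size(); ++i) {
        if (i > 0)
            result += ",";
        result += devices[i];
    }
    return result;
}

// The resulting string would then be used as the device argument, e.g.:
//   ov::Core core;
//   auto compiled = core.compile_model("model.xml",
//                                      make_virtual_device("MULTI", {"CPU", "GPU"}));
```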

Devices similar to the ones used for benchmarking can be accessed using Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of OpenVINO™ Toolkit. Learn more or register here.

@anchor features_support_matrix

## Feature Support Matrix

The table below demonstrates support of key features by OpenVINO device plugins.

| Capability | CPU | GPU | GNA | Arm® CPU |
|------------|-----|-----|-----|----------|
| Heterogeneous execution | Yes | Yes | No | Yes |
| Multi-device execution | Yes | Yes | Partial | Yes |
| Automatic batching | No | Yes | No | No |
| Multi-stream execution | Yes | Yes | No | Yes |
| Models caching | Yes | Partial | Yes | No |
| Dynamic shapes | Yes | Partial | No | No |
| Import/Export | Yes | No | Yes | No |
| Preprocessing acceleration | Yes | Yes | No | Partial |
| Stateful models | Yes | No | Yes | No |
| [Extensibility](@ref openvino_docs_Extensibility_UG_Intro) | Yes | Yes | No | No |

For more details on plugin-specific feature limitations, see the corresponding plugin pages.
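Some of these features can also be detected at run time by reading the `ov::device::capabilities` property of a device (for example, the `"EXPORT_IMPORT"` capability corresponds to import/export support). A minimal sketch of the membership check, using the standard library only; the capability list itself would come from `core.get_property(device, ov::device::capabilities)`, and the helper name below is made up for illustration:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical helper: check whether a capability string (such as
// "EXPORT_IMPORT") is present in the list returned by
// ov::Core::get_property(device, ov::device::capabilities).
bool has_capability(const std::vector<std::string>& capabilities,
                    const std::string& capability) {
    return std::find(capabilities.begin(), capabilities.end(), capability) !=
           capabilities.end();
}
```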

## Enumerating Available Devices

The OpenVINO Runtime API features dedicated methods of enumerating devices and their capabilities. See the Hello Query Device C++ Sample. This is an example output from the sample (truncated to device names only):

```sh
./hello_query_device
Available devices:
    Device: CPU
...
    Device: GPU.0
...
    Device: GPU.1
...
    Device: HDDL
```
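The qualified names above follow a `<type>` or `<type>.<instance>` pattern. Assuming that convention, the device type can be recovered with a small standard-library helper (the actual name list would come from `ov::Core::get_available_devices()`; `device_type` is an illustrative name, not an OpenVINO API):

```cpp
#include <string>

// Hypothetical helper: strip the instance suffix from a qualified device
// name, e.g. "GPU.1" -> "GPU", while a bare name like "CPU" is returned
// unchanged.
std::string device_type(const std::string& qualified_name) {
    return qualified_name.substr(0, qualified_name.find('.'));
}
```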

A simple programmatic way to enumerate the devices and use them with Multi-Device execution is as follows:

@sphinxdirective

.. tab:: C++

    .. doxygensnippet:: docs/snippets/MULTI2.cpp
       :language: cpp
       :fragment: [part2]

@endsphinxdirective

Beyond the typical "CPU", "GPU", "HDDL", and so on, when multiple instances of a device are available, the names become more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed by the Hello Query Device sample:

```sh
...
    Device: MYRIAD.1.2-ma2480
...
    Device: MYRIAD.1.4-ma2480
```

So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480". Accordingly, the code that loops over all available devices of the "MYRIAD" type only is as follows:

@sphinxdirective

.. tab:: C++

    .. doxygensnippet:: docs/snippets/MULTI3.cpp
       :language: cpp
       :fragment: [part3]

@endsphinxdirective
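The same selection logic can be sketched with the standard library alone: filter the enumerated names for the "MYRIAD" type and join the matches into the explicit MULTI configuration. The enumeration itself would use `ov::Core::get_available_devices()`; the `multi_for_type` helper below is illustrative, not an OpenVINO API:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: build an explicit MULTI configuration from all
// available devices of a given type, e.g.
// {"CPU", "MYRIAD.1.2-ma2480", "MYRIAD.1.4-ma2480"} with type "MYRIAD"
// -> "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480".
std::string multi_for_type(const std::vector<std::string>& available,
                           const std::string& type) {
    std::string result = "MULTI:";
    bool first = true;
    for (const auto& name : available) {
        // A device of the requested type is either the bare type name or
        // "<type>.<instance>".
        if (name == type || name.rfind(type + ".", 0) == 0) {
            if (!first)
                result += ",";
            result += name;
            first = false;
        }
    }
    return result;
}
```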