
OpenVINO™ AUTO Plugin

The main responsibility of the AUTO plugin is to provide a unified device that enables developers to code deep learning applications once and deploy them anywhere.

Other capabilities of the AUTO plugin include:

  • Static device selection, which intelligently loads a network to one device or multiple devices.
  • CPU acceleration to start inferencing while the target device is still loading the network.
  • Model priority support for loading multiple networks to multiple devices.
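The capabilities above are exposed through the standard OpenVINO C++ API: an application compiles its model on the virtual "AUTO" device and may optionally restrict candidate devices with a priority list. A minimal sketch (the model path is a placeholder):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Read a model from disk (placeholder path).
    auto model = core.read_model("model.xml");

    // Compile on the virtual AUTO device. AUTO selects the actual
    // hardware; the optional device priority list narrows the
    // candidates, with earlier entries preferred when available.
    auto compiled = core.compile_model(model, "AUTO",
                                       ov::device::priorities("GPU", "CPU"));

    ov::InferRequest request = compiled.create_infer_request();
    return 0;
}
```

Because AUTO is just another device name, the same application runs unchanged on machines with different hardware, which is the "code once, deploy anywhere" behavior described above.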

The component is written in C++. If you want to contribute to the AUTO plugin, follow the common coding style rules.

Key contacts

If you have questions, or need a review of your merge request, contact the AUTO Plugin maintainer group.


Components

The AUTO plugin follows the OpenVINO™ plugin architecture and consists of several main components:

  • docs contains developer documentation for the AUTO plugin.
  • the current folder contains the sources of the AUTO plugin.

Learn more in the OpenVINO™ Plugin Developer Guide.

Architecture

The diagram below shows an overview of the components responsible for the basic inference flow:

flowchart TD

    subgraph Application["Application"]
    end

    subgraph OpenVINORuntime["OpenVINO Runtime"]
        AUTO["AUTO Plugin"] --> CPU["CPU Plugin"]
        AUTO["AUTO Plugin"] --> GPU["GPU Plugin"]
    end

    Application --> AUTO

    style Application fill:#6c9f7f

Find more details in the AUTO Plugin architecture document.
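When several networks are loaded through the AUTO device at once, the model priority hint tells the plugin which one should get the faster hardware. A minimal sketch of the flow in the diagram above, assuming two placeholder model files and that both a GPU and a CPU are present:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Two independent models compiled through the same AUTO device
    // (placeholder file names). The model priority hint lets AUTO
    // decide which model is assigned to the preferred device when
    // resources are contended.
    auto detector = core.compile_model(
        "detection.xml", "AUTO",
        ov::hint::model_priority(ov::hint::Priority::HIGH));

    auto classifier = core.compile_model(
        "classification.xml", "AUTO",
        ov::hint::model_priority(ov::hint::Priority::LOW));

    return 0;
}
```

From the application's point of view, both compiled models behave like ordinary `ov::CompiledModel` objects; the device assignment happens inside the AUTO plugin.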

Tutorials

See also