Shoujiang Ma 3aa5413363 [AUTO] Run inferences on CPU while the actual accelerator loads the network (#5944)
* DRAFT: hot-swap the async request (when the VPU or other accelerator is finally loaded)

* DRAFT2: communicate the hot-swap back to the AutoExecNetwork, added logic for blobs (keeping them internally to the AutoRequest), added lots of FIXME comments (items to close out the semantics and behaviour)

* Rebase/refactor code and fix test issues

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Fix logic for AutoExecutableNetwork

1. Force LoadNetwork to run in parallel via std::launch::async
2. Address some FIXMEs in auto_exec_network.cpp
3. Capture by reference

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
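The idea behind points 1 and 2 above can be sketched as follows. This is a minimal illustration, not the actual OpenVINO plugin code: `ExecNetwork`, `AutoExecNetworkSketch`, and `activeNetwork` are hypothetical names. `std::launch::async` forces the accelerator load onto a real background thread (a plain `std::async` call may defer execution until `.get()`), so inference can start on the quickly-loading CPU network while the accelerator compiles in parallel, then hot-swap once it is ready.

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <memory>
#include <string>
#include <thread>

// Hypothetical stand-in for an executable network; illustrative only.
struct ExecNetwork {
    std::string device;
};

class AutoExecNetworkSketch {
public:
    AutoExecNetworkSketch() {
        // CPU loads quickly, so it is available immediately.
        cpu_ = std::make_shared<ExecNetwork>(ExecNetwork{"CPU"});
        // std::launch::async guarantees the accelerator load runs on
        // its own thread right away, rather than lazily on .get().
        accel_future_ = std::async(std::launch::async, [] {
            // Stands in for the (slow) LoadNetwork on the accelerator.
            return std::make_shared<ExecNetwork>(ExecNetwork{"GPU"});
        });
    }

    // Hot-swap: once the accelerator network is ready, route requests
    // to it; until then, fall back to the CPU network. wait_for with a
    // zero timeout is a non-blocking readiness poll.
    std::shared_ptr<ExecNetwork> activeNetwork() {
        if (!accel_ && accel_future_.valid() &&
            accel_future_.wait_for(std::chrono::seconds(0)) ==
                std::future_status::ready) {
            accel_ = accel_future_.get();
        }
        return accel_ ? accel_ : cpu_;
    }

private:
    std::shared_ptr<ExecNetwork> cpu_;
    std::shared_ptr<ExecNetwork> accel_;
    // shared_future so readiness can be polled repeatedly.
    std::shared_future<std::shared_ptr<ExecNetwork>> accel_future_;
};
```

Requests issued before the background load finishes transparently use the CPU network; later requests pick up the accelerator without the caller noticing the switch.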

* Fix core dump

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Lambda explicit capture by value

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Add debug log to detect the destruction order of plugin and core

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Use sync to load cpu and gpu to check ci

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Revert "Use sync to load cpu and gpu to check ci"

This reverts commit 66e09ccd47321e26f68392976d59b1e69cd3df1a.

* Copy network in lambda

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Try to fix the CanCreateInferRequest test on CentOS

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Remove print log

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Fix CI issues in AUTO because GPU execNetwork doesn't support SetConfig

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Address reviewer's comment: handle load network failure

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>

* Use weak_ptr rather than a plain ICore* to make sure we keep the Core (which in turn holds the plugins) from being destroyed while we may still need it

* Replace ie::ICore* with shared_ptr<ie::ICore>

Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
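The lifetime problem behind the last two commits can be shown with a minimal sketch, assuming a hypothetical `Core` type standing in for `ie::ICore`: capturing a `shared_ptr` by value in the background load task keeps the Core alive until the task completes, even if the caller's reference is gone, which is exactly what a raw `ICore*` cannot guarantee.

```cpp
#include <future>
#include <memory>

// Hypothetical stand-in for ie::ICore; illustrative only.
struct Core {
    bool alive = true;
    ~Core() { alive = false; }
};

// Launch a background "load" that captures the shared_ptr by value,
// then drop the caller's reference before waiting. The lambda's copy
// of the shared_ptr keeps the Core alive for the task's whole run.
inline bool load_outlives_caller() {
    std::future<bool> task;
    {
        auto core = std::make_shared<Core>();
        task = std::async(std::launch::async, [core] {
            // Safe: 'core' cannot be destroyed while we hold a copy.
            return core->alive;
        });
    }  // caller's shared_ptr is gone here, but the task still owns one
    return task.get();
}
```

With a raw pointer capture instead, the task could dereference a dangling `ICore*` if the Core were torn down before the asynchronous load finished; holding a `shared_ptr` (or locking a `weak_ptr` inside the task) removes that race.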

Co-authored-by: myshevts <maxim.y.shevtsov@intel.com>
2021-06-29 15:54:40 +03:00

OpenVINO™ Toolkit


This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.

This open source version includes several components, namely the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.

Repository components:

License

Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Resources:

Support

Please report questions, issues and suggestions using:


* Other names and brands may be claimed as the property of others.
