OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
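As a rough sketch of that flow (the model path, device name, and surrounding scaffolding here are illustrative assumptions, not taken from this README), a minimal Inference Engine C++ program reads a model, compiles it for a device plugin, and runs an inference request:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read a model in Intermediate Representation (IR) form, as produced
    // by the Model Optimizer. "model.xml" is a placeholder path.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the network for a target device plugin (CPU in this sketch).
    InferenceEngine::ExecutableNetwork execNetwork = core.LoadNetwork(network, "CPU");

    // Create an inference request and run it synchronously.
    InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();
    request.Infer();

    return 0;
}
```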
This open source version includes several components: namely Model Optimizer, nGraph and Inference Engine, as well as CPU, GPU, MYRIAD, multi-device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
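The plugin used for execution is selected by the device name passed to LoadNetwork, and the multi-device and heterogeneous plugins are addressed the same way, with a device priority list. A hedged sketch, reusing the core and network objects from the example above:

```cpp
// Single plugin: run everything on the CPU.
auto cpuExec = core.LoadNetwork(network, "CPU");

// Multi-device plugin: execute inference requests on GPU and CPU in parallel.
auto multiExec = core.LoadNetwork(network, "MULTI:GPU,CPU");

// Heterogeneous plugin: split one graph across devices, preferring GPU
// and falling back to CPU for layers the GPU plugin does not support.
auto heteroExec = core.LoadNetwork(network, "HETERO:GPU,CPU");
```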
Repository components:
- Inference Engine
- nGraph
- Model Optimizer
License
Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources:
- Docs: https://docs.openvinotoolkit.org/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The openvino tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.