OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
Mikhail Nosov 7f9daadc08
Model caching support - Core part (#4661)
* Model caching support - Core part

Introducing model caching support
Use core.SetConfig({{CONFIG_KEY(CACHE_DIR), <dir>}}); to enable caching of models

OpenVINO will try to create the cache folder if it doesn't exist, but it is recommended that clients create the cache folder with the necessary permissions before enabling caching in the config

For caching, plugins shall support import/export functionality
Plugin requirements:
- Add METRIC_KEY(IMPORT_EXPORT_SUPPORT) to SUPPORTED_METRICS to support caching
- If a plugin has different device architectures with different caches (e.g. one cache for "GNA.0" and another for "GNA.10"), the plugin shall support the DEVICE_ARCHITECTURE metric and return different strings for different DEVICE_IDs
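The device-architecture requirement above can be sketched as follows. This is an illustrative, stdlib-only sketch of how a core could derive a cache identity per device, not the actual OpenVINO internals; `cacheKeyForDevice` and its signature are hypothetical.

```cpp
#include <map>
#include <string>

// Hypothetical sketch: devices that report the same DEVICE_ARCHITECTURE
// share one cache identity; devices with different architectures get
// different cache keys. Names here are illustrative only.
std::string cacheKeyForDevice(const std::string& deviceName,
                              const std::map<std::string, std::string>& archByDevice) {
    auto it = archByDevice.find(deviceName);
    // Fall back to the device name itself when no architecture metric is reported.
    return it != archByDevice.end() ? it->second : deviceName;
}
```

With such a mapping, "GNA.0" and "GNA.10" resolve to different cache keys whenever the plugin reports different architecture strings for them.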

Added functional tests

* Fix CentOS build issues

* Few updates according to code review

* Revert unnecessary changes for Import/Export core implementation

These changes affect old behavior and may be undesired
For caching support there is no need to change anything in this area
If needed, removal of the 'Magic' usage can be done as a separate task in the future

* More tests:
1) Verify that data imported from a stream is the same as was exported
2) Verify that the cache is not loaded when the config passed to LoadNetwork is changed
3) Verify that the cache is not loaded if the CNNNetwork is changed between ReadNetwork and LoadNetwork
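The invalidation behavior these tests verify can be sketched in a few lines: the cache entry name is a hash over everything that affects compilation, so a change to either the model or the config simply produces a different blob name and the stale entry is never found. This is an illustrative stand-in for the real NetworkCompilationContext; `blobHash` and the use of `std::hash` are assumptions, not the actual hashing scheme.

```cpp
#include <functional>
#include <map>
#include <string>

// Illustrative sketch (not the real NetworkCompilationContext): hash the
// serialized model together with the LoadNetwork config. Any change in
// either input yields a different hash, i.e. a cache miss.
size_t blobHash(const std::string& serializedModel,
                const std::map<std::string, std::string>& config) {
    std::string data = serializedModel;
    for (const auto& kv : config)
        data += kv.first + "=" + kv.second + ";";
    return std::hash<std::string>{}(data);
}
```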

* Update of NetworkCompilationContext

Put back the functionality of calculating the hash based on runtime information and weights
Implemented OstreamHashWrapper to avoid serialization to a buffer

* Correction of CACHE_DIR key description

* Unit tests for compilation_context

Changes:
1) Improved handling of OstreamHashAdapter
2) Improved runtime info serialization (not just PrimitivesPriority and affinity)
3) Removed redundant weights hash calculation

* Fix GCC 4.8 build issues

* Compilation context updates
1) Use a hash of the serialized data to get the hash of the network; it is more efficient compared to the weights-sum calculation
2) CalculateFileInfo - convert the path to an absolute one ("./test.blob" and "test.blob" shall give the same hash)
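The path-normalization step can be sketched with std::filesystem (the actual helper lives in file_utils and targets compilers without C++17): normalizing before hashing ensures "./test.blob" and "test.blob" produce the same blob name. The function below is an illustrative sketch, not the real implementation.

```cpp
#include <filesystem>
#include <string>

// Sketch of the absoluteFilePath idea: resolve a possibly-relative path
// against the current directory and normalize "." / ".." components,
// so equivalent spellings of the same path compare (and hash) equal.
std::string absoluteFilePath(const std::string& path) {
    namespace fs = std::filesystem;
    return fs::weakly_canonical(fs::absolute(path)).string();
}
```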

* Hash - added more rt_info attributes + tests

- PrimitivesPriority
- FusedNames
- Dequantization

* Moved "get_absolute_path" macro to file_utils.h

* Make 'absoluteFilePath' a library API, not macro

* One more unit test for fileName hashing

* Fix compilation error after merge with latest master

* Allow tests to be executed in parallel (stress mode)

* More minor updates for stress testing

Tests can now be executed with the '--repeat=100' option, where one test is run in multiple processes simultaneously

Example:
./gtest-parallel <openvino_dir>/bin/intel64/Debug/ieFuncTests --gtest_filter=CachingTest* --repeat=10

* Use absolute model file path for calculating blob name

* Added 'createDirectoryRecursive' API to plugin_api/file_utils
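A helper like createDirectoryRecursive can be sketched with std::filesystem (the plugin_api/file_utils version is implemented without C++17): it creates the whole cache path and succeeds if the directories already exist. This is an illustrative sketch of the behavior, not the actual OpenVINO code.

```cpp
#include <filesystem>
#include <string>
#include <system_error>

// Sketch: create the full directory chain for the cache path.
// Returns true when the directory exists afterwards (created or pre-existing).
bool createDirectoryRecursive(const std::string& path) {
    std::error_code ec;
    std::filesystem::create_directories(path, ec);
    return !ec && std::filesystem::is_directory(path);
}
```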
2021-03-16 14:13:45 +03:00

OpenVINO™ Toolkit


This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.

This open source version includes several components, namely the Model Optimizer, nGraph and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.

Repository components:

* Inference Engine
* nGraph
* Model Optimizer

License

Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Resources:

Support

Please report questions, issues and suggestions using:


* Other names and brands may be claimed as the property of others.