OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
Latest commit: d20900e235 by Mikhail Nosov, 2021-05-17 13:41:15 +03:00
[Caching] Add caching options to benchmark app (#4909)
* Python API for LoadNetwork by model file name

* BenchmarkApp: Add caching and LoadNetworkFromFile support

    Two new options are introduced:
    - cache_dir <dir> - enables model caching
    - load_from_file - performs "LoadNetwork" directly by model file name, without an explicit "ReadNetwork" step

    Using both options together gives the best network read/load performance on startup.

    Tests:
    1) Run "benchmark_app -h". The help output displays the two new options; after the list of available devices, it also lists the devices that support caching.
    2) ./benchmark_app -d CPU -m <model.xml> -load_from_file
    Verify that some test steps are skipped (those related to ReadNetwork, reshaping, etc.).
    3) Prerequisite: caching support must be enabled for the Template plugin.
    ./benchmark_app -d TEMPLATE -m <model.onnx> -load_from_file -cache_dir someDir
    Verify that "someDir" is created and the generated blob is available.
    Run again and verify that loading still works (it should be faster, since the ONNX model is not read again).
    4) Run the same test as (3), but without the -load_from_file option. Verify that the cache is created correctly.
    For some devices, LoadNetwork time improves when a cache is available.
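
For illustration, here is a minimal sketch of the corresponding Python flow. It assumes the 2021.x openvino.inference_engine API as extended by this PR; the model path, cache directory, and device name are placeholders:

```python
from openvino.inference_engine import IECore

ie = IECore()

# Enable model caching for a device that supports it (the Template
# plugin from test (3) above); compiled blobs are written to "someDir".
ie.set_config({"CACHE_DIR": "someDir"}, "TEMPLATE")

# LoadNetwork directly by model file name: no explicit
# read_network() / IENetwork step is needed anymore.
exec_net = ie.load_network(network="model.onnx", device_name="TEMPLATE")
```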

* Removed additional timing prints

* Correction from old code

* Revert "Removed additional timing prints"

Additional change: when a .blob file is passed instead of an .xml, it takes priority over the caching flags (see the sketch below).
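
For reference, a sketch of what loading a pre-compiled .blob looks like in the Python API; the file name is a placeholder, and MYRIAD is chosen here only as a typical device with import/export support:

```python
from openvino.inference_engine import IECore

ie = IECore()

# A pre-compiled blob is imported directly, bypassing both the
# ReadNetwork/LoadNetwork path and the model cache.
exec_net = ie.import_network("model.blob", device_name="MYRIAD")
```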

* Removed the new timing printouts

As discussed, time measurements such as 'total first inference time' will be made available through the 'timeTests' scripts.

* Fix clang-format issues
Path | Latest commit | Date
.ci | Azure CI: Enable IB for CPU func tests (#5639) | 2021-05-14 19:27:18 +03:00
.github | feat: linters for IE Py API, wheel, samples (#5352) | 2021-04-28 13:52:03 +03:00
cmake | Support old TBBs in cmake (#5638) | 2021-05-15 11:42:09 +03:00
docs | Update specification for ConvolutionBackpropData. (#4679) | 2021-05-17 09:21:07 +03:00
inference-engine | [Caching] Add caching options to benchmark app (#4909) | 2021-05-17 13:41:15 +03:00
licensing | updated third-party-programs.txt (#5607) | 2021-05-13 13:30:20 +03:00
model-optimizer | Common telemetry (#5032) | 2021-05-14 21:56:03 +03:00
ngraph | [SubgraphsDumper] Extract statistics instead of constants with weights. (#5149) | 2021-05-17 12:13:02 +03:00
openvino | Nested ITT counters lead to invalid performance measurement results (#5172) | 2021-04-29 07:33:21 +03:00
scripts | add python3-gi-cairo dependency for dlstreamer on Ubuntu 20 (#5061) | 2021-04-14 12:34:50 +03:00
tests | [CPU] Plugin migration on ngraph (#4344) | 2021-05-06 19:49:24 +03:00
thirdparty | Openvino autogenerated cmake (#5484) | 2021-05-07 11:57:51 +03:00
tools | [Caching] Add caching options to benchmark app (#4909) | 2021-05-17 13:41:15 +03:00
.gitattributes | Doc Migration (master) (#1377) | 2020-07-20 17:36:08 +03:00
.gitignore | publish master branch snapshot, revision 8d31237e2c3f673cbb0f0ba110fc10f5cce1d2bb | 2020-05-22 02:23:12 +03:00
.gitmodules | Optimizations for precision conversion operations in nGraph reference implementations (#3974) | 2021-02-08 16:21:45 +03:00
CMakeLists.txt | Openvino autogenerated cmake (#5484) | 2021-05-07 11:57:51 +03:00
CODEOWNERS | Added code owners for scripts folder (#2130) | 2020-09-08 17:23:27 +03:00
install_build_dependencies.sh | script: add git-lfs to install_build_deps (#4811) | 2021-03-29 20:31:23 +03:00
Jenkinsfile | [Jenkinsfile] Disable failFast & enable propagateStatus (#3503) | 2020-12-10 12:05:03 +03:00
LICENSE | Publishing R3 | 2018-10-16 13:45:03 +03:00
README.md | Feature/merge 2021 3 to master (#5307) | 2021-04-19 20:19:17 +03:00
SECURITY.md | Added SECURITY.md back (#3177) | 2020-11-17 16:44:44 +03:00

OpenVINO™ Toolkit


This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.

This open-source version includes several components, namely the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins that accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.
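
As a quick orientation, here is a minimal end-to-end sketch using the Inference Engine Python bindings, which mirror the C++ API described above; "model.xml"/"model.bin" are placeholder IR files produced by the Model Optimizer:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read an IR model produced by the Model Optimizer, then compile it
# for a target device.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference on a dummy input matching the model's input shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer({input_name: np.zeros(shape, dtype=np.float32)})
```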

Repository components:

* Inference Engine
* nGraph
* Model Optimizer

License

The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Resources:

Support

Please report questions, issues and suggestions using:

* GitHub Issues


* Other names and brands may be claimed as the property of others.