OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
This open-source version includes several components, namely the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.
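For context, below is a minimal sketch of deploying a model through the Inference Engine C++ API described above; the model paths and the blob names are placeholders for illustration, not files shipped with the repository:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;

    // Read a pre-trained model in OpenVINO IR format
    // ("model.xml"/"model.bin" are placeholder paths).
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml", "model.bin");

    // Compile the network for a target device, e.g. the CPU plugin.
    InferenceEngine::ExecutableNetwork execNetwork = core.LoadNetwork(network, "CPU");

    // Create an inference request, fill inputs, run, and read outputs.
    InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();
    // ... populate input blobs via request.GetBlob(<input name>) ...
    request.Infer();
    // ... read results via request.GetBlob(<output name>) ...

    std::cout << "Inference completed" << std::endl;
    return 0;
}
```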
Repository components:
- Inference Engine
- nGraph
- Model Optimizer
License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources:
- Docs: https://docs.openvinotoolkit.org/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The `openvino` tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.