OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through high-level OpenVINO™ Runtime C++ and Python APIs integrated with application logic.
This open-source version includes several components: Model Optimizer, OpenVINO™ Runtime, and the Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
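For illustration, below is a minimal sketch of running inference with the OpenVINO™ Runtime Python API. It assumes the 2022.x `openvino.runtime` API and a hypothetical `model.xml` IR file produced by Model Optimizer; names and paths are placeholders, not part of this repository.

```python
# Minimal inference sketch with the OpenVINO Runtime Python API (2022.x style).
import numpy as np
from openvino.runtime import Core

core = Core()

# "model.xml" is a placeholder path to an IR model; ONNX or PaddlePaddle
# files can also be read directly by read_model().
model = core.read_model("model.xml")

# Compile the model for a target device, e.g. "CPU" or "GPU".
compiled_model = core.compile_model(model, device_name="CPU")

# Build a dummy input matching the model's first input shape and run inference.
input_tensor = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)
results = compiled_model([input_tensor])

# Results are keyed by output port; take the first output here.
output = results[compiled_model.output(0)]
print(output.shape)
```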
Repository components
License
Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources
- Docs: https://docs.openvino.ai/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ toolkit modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The `openvino` tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.