OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
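The typical Inference Engine workflow — read a model, compile it for a device, run inference — can be sketched as follows. This is a minimal example assuming the classic `InferenceEngine` C++ API and a hypothetical IR model file `model.xml` (with its `model.bin` weights alongside); adjust paths and the device name for your setup.

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    // Hypothetical model path -- substitute your own IR files.
    const std::string model_xml = "model.xml";

    InferenceEngine::Core core;

    // Read a model in OpenVINO IR format (the .bin weights file is
    // discovered next to the .xml automatically).
    InferenceEngine::CNNNetwork network = core.ReadNetwork(model_xml);

    // Compile the network for a target device ("CPU", "GPU", "MYRIAD", ...).
    InferenceEngine::ExecutableNetwork exec_network =
        core.LoadNetwork(network, "CPU");

    // Create an inference request and run synchronous inference
    // (input blobs are left at their defaults here for brevity).
    InferenceEngine::InferRequest request = exec_network.CreateInferRequest();
    request.Infer();

    // Retrieve the first output blob by name and report its element count.
    const std::string output_name = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = request.GetBlob(output_name);
    std::cout << "Output size: " << output->size() << std::endl;
    return 0;
}
```

In a real application you would fill the input blobs with preprocessed image or tensor data before calling `Infer()`, and map the output blob back to your application's result type.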
This open-source version includes several components: the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
Repository components:
- Inference Engine
- nGraph
- Model Optimizer
License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources:
- Docs: https://docs.openvinotoolkit.org/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The `openvino` tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.