# OpenVINO™ Toolkit - Deep Learning Deployment Toolkit repository
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.

This open source version includes two components, the Model Optimizer and the Inference Engine, as well as CPU, GPU and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
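As a minimal sketch of the workflow described above, the following shows how a model already converted by the Model Optimizer into Intermediate Representation (IR) form might be loaded and run through the Inference Engine C++ API. The model path `model.xml` and the `CPU` device name are placeholder assumptions, not values taken from this repository.

```cpp
#include <inference_engine.hpp>

int main() {
    // The Core object discovers and manages the available device plugins (CPU, GPU, ...).
    InferenceEngine::Core core;

    // Read a model in IR form produced by the Model Optimizer;
    // "model.xml" is a placeholder path (the matching .bin weights file
    // is found automatically next to it).
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the network for a target device; "CPU" selects the CPU plugin.
    InferenceEngine::ExecutableNetwork executable =
        core.LoadNetwork(network, "CPU");

    // Create an inference request and run it synchronously.
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    request.Infer();

    // Output blobs can then be retrieved with request.GetBlob(<output name>).
    return 0;
}
```

Asynchronous execution follows the same pattern, with `StartAsync()` and `Wait()` in place of the blocking `Infer()` call.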
## Repository components

- Model Optimizer
- Inference Engine
## License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
## Documentation
- OpenVINO™ Release Notes
- OpenVINO™ Inference Engine Build Instructions
- Get Started with Deep Learning Deployment Toolkit on Linux*
- Introduction to Deep Learning Deployment Toolkit
- Inference Engine Developer Guide
- Model Optimizer Developer Guide
## How to Contribute
See CONTRIBUTING.md for how to contribute to the code, and CONTRIBUTING_DOCS.md for how to contribute to the documentation. Thank you!
## Support

Please report questions, issues and suggestions using:

- The `openvino` tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.