OpenVINO™ Toolkit

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. It allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
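
The following is a minimal sketch of that flow using the classic Inference Engine C++ API; the model files, device name, and output handling are illustrative placeholders rather than code taken from this repository.

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    // The Core object discovers and manages the available device plugins.
    InferenceEngine::Core core;

    // Read an IR model produced by the Model Optimizer (placeholder paths).
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml", "model.bin");

    // Compile the network for a target device plugin, e.g. "CPU", "GPU" or "MYRIAD".
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

    // Create an inference request; a real application fills its input blobs first.
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    request.Infer();  // synchronous inference

    // Fetch the first output blob by its name.
    const std::string outputName = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = request.GetBlob(outputName);
    std::cout << "Output elements: " << output->size() << std::endl;
    return 0;
}
```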

This open-source version includes several components: the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.
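
As a hedged illustration of how those device plugins are selected, the sketch below loads the same network onto a single device, onto the multi-device (MULTI) plugin, and onto the heterogeneous (HETERO) plugin; the model path is again a placeholder.

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Single target device.
    auto onCpu = core.LoadNetwork(network, "CPU");
    // MULTI plugin: run inference requests on several devices, listed by priority.
    auto onMulti = core.LoadNetwork(network, "MULTI:GPU,CPU");
    // HETERO plugin: split the graph, falling back to CPU for unsupported layers.
    auto onHetero = core.LoadNetwork(network, "HETERO:GPU,CPU");

    // List the devices the Core object can see on this machine.
    for (const std::string& device : core.GetAvailableDevices())
        std::cout << device << std::endl;
    return 0;
}
```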

Repository components:

* Model Optimizer
* nGraph
* Inference Engine and its device plugins

License

The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Resources:

Support

Please report questions, issues, and suggestions using GitHub* Issues.


* Other names and brands may be claimed as the property of others.