OpenVINO™ Toolkit - Deep Learning Deployment Toolkit repository
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes two components, namely the Model Optimizer and the Inference Engine, as well as CPU, GPU and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with more than 100 open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
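To illustrate the workflow described above, here is a minimal sketch of deploying a model through the Inference Engine C++ API (Core, CNNNetwork, ExecutableNetwork, InferRequest). The model path "model.xml" and the device name "CPU" are placeholder assumptions, and input-blob preparation is omitted for brevity.

```cpp
// Minimal sketch of deploying a pre-trained model with the Inference Engine C++ API.
// "model.xml" and the "CPU" device are placeholders; the IR (.xml/.bin pair) is
// assumed to have been produced beforehand by the Model Optimizer.
#include <inference_engine.hpp>
#include <string>

int main() {
    InferenceEngine::Core core;

    // Read the Intermediate Representation produced by the Model Optimizer.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the network for a target device plugin (CPU, GPU, HETERO, ...).
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

    // Create an inference request; filling the input blobs is omitted here.
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    request.Infer();

    // Retrieve the result from the first output blob.
    const std::string output_name = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = request.GetBlob(output_name);

    return 0;
}
```

In practice, an application would set its input blobs before calling Infer and could use the asynchronous StartAsync/Wait variants of InferRequest; this sketch only shows the synchronous path.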
Repository components:
- Inference Engine
- Model Optimizer
- nGraph
License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Documentation
- OpenVINO™ Release Notes
- OpenVINO™ Inference Engine Build Instructions
- Get Started with Deep Learning Deployment Toolkit on Linux*
- Introduction to Deep Learning Deployment Toolkit
- Inference Engine Developer Guide
- Model Optimizer Developer Guide
How to Contribute
See CONTRIBUTING for details on how to contribute to the code, and CONTRIBUTING_DOCS for how to contribute to the documentation. Thank you!
Support
Please report questions, issues and suggestions using:
- The openvino tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.