OpenVINO™ Toolkit - Deep Learning Deployment Toolkit repository
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes two components, the Model Optimizer and the Inference Engine, as well as CPU, GPU and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
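The typical workflow is to convert a trained model to the Inference Engine IR with the Model Optimizer, then load and run it through the C++ API. Below is a minimal, illustrative sketch against the 2020-era Inference Engine API; the model path `model.xml` and the `"CPU"` device name are placeholders, not files shipped with the repository.

```cpp
#include <inference_engine.hpp>

int main() {
    using namespace InferenceEngine;

    // Core discovers the available device plugins (CPU, GPU, ...).
    Core core;

    // Read an IR produced by the Model Optimizer ("model.xml" is a placeholder;
    // the matching "model.bin" weights file is located automatically).
    CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the network for a target device and create an inference request.
    ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    InferRequest request = executable.CreateInferRequest();

    // Obtain the input blob; real code would copy preprocessed data into it here.
    Blob::Ptr input = request.GetBlob(network.getInputsInfo().begin()->first);

    // Run synchronous inference and fetch the result.
    request.Infer();
    Blob::Ptr output = request.GetBlob(network.getOutputsInfo().begin()->first);

    return 0;
}
```

An application like this links against the `inference_engine` library built from this repository; asynchronous requests (`StartAsync`/`Wait`) follow the same pattern.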
Repository components:
- Model Optimizer
- Inference Engine
License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Documentation
- OpenVINO™ Release Notes
- OpenVINO™ Inference Engine Build Instructions
- Get Started with Deep Learning Deployment Toolkit on Linux*
- Introduction to Deep Learning Deployment Toolkit
- Inference Engine Developer Guide
- Model Optimizer Developer Guide
How to Contribute
See CONTRIBUTING for details. Thank you!
Support
Please report questions, issues and suggestions using:
- The `openvino` tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.