* [MO] Clean up MO command-line options
  - Remove the following deprecated Model Optimizer options that have not been used for several releases: disable_fusing, disable_gfusing, generate_deprecated_IR_V7, legacy_ir_generation, keep_shape_ops, move_to_preprocess.
  - Deprecate the following CLI options, whose functionality is triggered from POT or automatically: disable_weights_compression, disable_nhwc_to_nchw, disable_resnet_optimization, finegrain_fusing.
  - Correct and extend the description of each MO option printed during model conversion.
* Correct documentation about input shapes
* Perform final corrections in documentation
* Remove legacy_ir_generation entirely
* Clean up tests from deprecated options
* Recover disable_fusing option as deprecated
* Fix keys for static_shape and extensions
* Remove extension key that does not work
* Apply feedback: remove disable_gfusing, correct docs
* Recover disable_fusing option for unit tests
* Apply feedback for documentation
* Apply feedback about parameters use_legacy_frontend and use_new_frontend
* Do minor fixes for indentation of MO logs
* Revert log.error for fallback message
* Revert disable_weights_compression parameter for tests

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through high-level OpenVINO™ Runtime C++ and Python APIs integrated with application logic.
This open-source version includes several components: Model Optimizer, OpenVINO™ Runtime, Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
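As a sketch of the typical workflow, a model in one of the supported formats can be converted to OpenVINO IR with the Model Optimizer CLI. The file names and the input shape below are illustrative placeholders, not part of this repository:

```shell
# Convert an ONNX model (model.onnx is a placeholder) to OpenVINO IR.
# --input_shape is illustrative; substitute your model's actual input shape.
mo --input_model model.onnx \
   --input_shape "[1,3,224,224]" \
   --output_dir converted_model
```

The conversion produces a pair of files (an .xml topology description and a .bin weights file) that the OpenVINO™ Runtime APIs then load for inference.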
Repository components
License
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources
- Docs: https://docs.openvino.ai/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ toolkit modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The openvino tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.