OpenVINO™ Toolkit
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes several components, namely Model Optimizer, nGraph and Inference Engine, as well as CPU, GPU, MYRIAD, multi-device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
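Below is a minimal sketch of what deploying a model through the C++ Inference Engine API can look like. The model path ("model.xml") and device name ("CPU") are placeholders, and a real application would also fill the input blobs with data before running the request.

```cpp
// Minimal Inference Engine usage sketch (placeholders: "model.xml", "CPU").
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;

    // Read an IR model produced by the Model Optimizer (path is a placeholder).
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the network for a target device, e.g. CPU.
    InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");

    // Create an inference request and run it; input blobs would normally be
    // filled with application data before this call.
    InferenceEngine::InferRequest request = exec_network.CreateInferRequest();
    request.Infer();

    // Print the size of each output blob as a simple sanity check.
    for (const auto& output : network.getOutputsInfo()) {
        InferenceEngine::Blob::Ptr blob = request.GetBlob(output.first);
        std::cout << output.first << ": " << blob->size() << " elements\n";
    }
    return 0;
}
```

In a CMake project this would typically be built against the Inference Engine package located via find_package; see the documentation linked under Resources for the exact build setup for your platform.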
Repository components:
- Inference Engine
- nGraph
- Model Optimizer
License
Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Resources:
- Docs: https://docs.openvinotoolkit.org/
- Wiki: https://github.com/openvinotoolkit/openvino/wiki
- Issue tracking: https://github.com/openvinotoolkit/openvino/issues
- Storage: https://storage.openvinotoolkit.org/
- Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
Support
Please report questions, issues and suggestions using:
- The openvino tag on StackOverflow*
- GitHub* Issues
- Forum
* Other names and brands may be claimed as the property of others.