Contents:
- What is OpenVINO?
- Supported Hardware matrix
- License
- Documentation
- Tutorials
- Products which use OpenVINO
- System requirements
- How to build
- How to contribute
- Get support
- See also
What is OpenVINO toolkit?
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
- Use models trained with popular frameworks like TensorFlow, PyTorch and more
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: Model Optimizer, OpenVINO™ Runtime, and the Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
Components
- OpenVINO™ Runtime - a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
- core - provides the base API for model representation and modification.
- inference - provides an API to infer models on the device.
- transformations - contains the set of common transformations which are used in OpenVINO plugins.
- low precision transformations - contains the set of transformations that are used in low precision models.
- bindings - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
- Plugins - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the list of supported devices.
- Frontends - contains available OpenVINO frontends that allow reading models from the native framework format.
- Model Optimizer - a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
- Post-Training Optimization Tool - accelerates the inference of deep learning models by applying special methods, such as post-training 8-bit quantization, without model retraining or fine-tuning.
- Samples - applications in C, C++, and Python that show basic OpenVINO use cases.
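To sketch how these components fit together, a typical flow with the OpenVINO™ Runtime Python API (names per the 2022.x `openvino.runtime` module; the model path and input are placeholders) looks roughly like this:

```python
# Minimal inference sketch using the OpenVINO Runtime Python API (2022.x).
# The model path is a placeholder; any IR (.xml), ONNX, or PaddlePaddle model
# readable by an installed frontend would work.

def run_inference(model_path, input_data, device="CPU"):
    # Import deferred so the sketch can be read without an OpenVINO installation.
    from openvino.runtime import Core

    core = Core()                                 # entry point to the Runtime
    model = core.read_model(model_path)           # a frontend reads the model
    compiled = core.compile_model(model, device)  # the plugin compiles it for the device
    results = compiled([input_data])              # synchronous inference
    return results[compiled.output(0)]
```

For example, `run_inference("model.xml", image, device="GPU")` would route the model through the Intel GPU plugin listed above.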
Supported Hardware matrix
The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.
| Device | Plugin | Library | ShortDescription |
|---|---|---|---|
| CPU | Intel CPU | openvino_intel_cpu_plugin | Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
| ARM CPU | ARM CPU | openvino_arm_cpu_plugin | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
| GPU | Intel GPU | openvino_intel_gpu_plugin | Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics |
| GNA | Intel GNA | openvino_intel_gna_plugin | Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor |
| VPU | Myriad plugin | openvino_intel_myriad_plugin | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
| Plugin | Library | ShortDescription |
|---|---|---|
| Auto | openvino_auto_plugin | Auto plugin enables automatic selection of an Intel device for inference |
| Auto Batch | openvino_auto_batch_plugin | Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user |
| Hetero | openvino_hetero_plugin | Heterogeneous execution enables automatic inference splitting between several devices |
| Multi | openvino_auto_plugin | Multi plugin enables simultaneous inference of the same model on several devices in parallel |
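These virtual plugins are selected through the device-name string passed when compiling a model. The formats below follow OpenVINO's documented device naming scheme; the specific device lists (`GPU,CPU`) are illustrative:

```python
# Device-name strings understood by the virtual plugins (device lists are examples).
AUTO_DEVICE = "AUTO"                # Auto plugin picks the best available device
AUTO_BATCH_DEVICE = "BATCH:GPU"     # Auto Batch wraps a device with automatic batching
HETERO_DEVICE = "HETERO:GPU,CPU"    # Hetero splits one model across devices, in priority order
MULTI_DEVICE = "MULTI:GPU,CPU"      # Multi runs the same model on several devices in parallel

# A compiled model would then be created as, e.g.:
#   compiled = core.compile_model(model, HETERO_DEVICE)
```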
License
OpenVINO™ Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Documentation
User documentation
The latest documentation for OpenVINO™ Toolkit is available here. It contains detailed information about all OpenVINO components and provides the important information you may need to create an application based on a binary OpenVINO distribution or on your own OpenVINO version without source code modification.
Developer documentation
Developer documentation describes the architectural decisions applied inside the OpenVINO components and contains all the information needed to contribute to OpenVINO.
Tutorials
The list of OpenVINO tutorials:
Products which use OpenVINO
System requirements
The system requirements vary depending on platform and are available on dedicated pages:
How to build
See the OpenVINO Wiki to get more information about the OpenVINO build process.
How to contribute
See CONTRIBUTING for details. Thank you!
Get support
Report questions, issues, and suggestions using:
- GitHub* Issues
- The openvino tag on Stack Overflow*
- Forum
Additional Resources
- OpenVINO Wiki
- OpenVINO Storage
- Additional OpenVINO™ toolkit modules:
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
- Neural Network Compression Framework (NNCF) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
- OpenVINO™ Training Extensions (OTE) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
- OpenVINO™ Model Server (OVMS) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
- DL Workbench - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
- Computer Vision Annotation Tool (CVAT) - an online, interactive video and image annotation tool for computer vision purposes.
- Dataset Management Framework (Datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
* Other names and brands may be claimed as the property of others.
