Contents:
- What is OpenVINO?
- Supported Hardware matrix
- License
- Documentation
- Tutorials
- Products which use OpenVINO
- System requirements
- How to build
- How to contribute
- Get support
- See also
What is OpenVINO toolkit?
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
- Use models trained with popular frameworks like TensorFlow, PyTorch and more
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes the following components: OpenVINO Model Converter (OVC), OpenVINO™ Runtime, as well as CPU, GPU, GNA, multi-device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
Components
- OpenVINO™ Runtime - a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
- core - provides the base API for model representation and modification.
- inference - provides an API to infer models on the device.
- transformations - contains the set of common transformations which are used in OpenVINO plugins.
- low precision transformations - contains the set of transformations that are used in low precision models.
- bindings - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
- Plugins - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the list of supported devices.
- Frontends - contains available OpenVINO frontends that allow reading models from the native framework format.
- OpenVINO Model Converter (OVC) - a cross-platform command-line tool that facilitates the transition between training and deployment environments, and adjusts deep learning models for optimal execution on end-point target devices.
- Samples - applications in C, C++ and Python languages that show basic OpenVINO use cases.
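To illustrate how the components above fit together, here is a hedged sketch of the typical Runtime inference flow in Python. The model path, input, and device name are hypothetical placeholders, and the `openvino` package must be installed for the function body to run:

```python
# Hedged sketch of the basic OpenVINO Runtime inference flow.
# "model_path" and "inputs" are hypothetical placeholders.
def infer_once(model_path, inputs, device="CPU"):
    """Read a model, compile it for one device, run a single inference."""
    import openvino as ov  # imported lazily; requires an installed OpenVINO

    core = ov.Core()                              # runtime entry point
    model = core.read_model(model_path)           # IR, ONNX, TF, ... formats
    compiled = core.compile_model(model, device)  # hand off to a device plugin
    return compiled(inputs)                      # one synchronous inference
```

The same flow exists in the C++ API (`ov::Core`, `read_model`, `compile_model`); the Python form above is only a sketch, not the full API surface.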
Supported Hardware matrix
The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.
Device | Plugin | Library | Short Description |
---|---|---|---|
CPU | Intel CPU | openvino_intel_cpu_plugin | Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE), Intel® Advanced Matrix Extensions (Intel® AMX) |
CPU | ARM CPU | openvino_arm_cpu_plugin | Raspberry Pi™ 4 Model B, Apple® Mac mini with Apple silicon |
GPU | Intel GPU | openvino_intel_gpu_plugin | Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics |
GNA | Intel GNA | openvino_intel_gna_plugin | Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor |
OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
Plugin | Library | Short Description |
---|---|---|
Auto | openvino_auto_plugin | Auto plugin enables selecting Intel device for inference automatically |
Auto Batch | openvino_auto_batch_plugin | Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user |
Hetero | openvino_hetero_plugin | Heterogeneous execution enables automatic inference splitting between several devices |
Multi | openvino_auto_plugin | Multi plugin enables simultaneous inference of the same model on several devices in parallel |
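These composite plugins are addressed through device strings of the form PLUGIN:device1,device2 (for example, MULTI:CPU,GPU), which are then passed to compile_model. The helper below only illustrates how such strings are composed; it is not part of the OpenVINO API:

```python
# Illustrative helper (not an OpenVINO API): build the composite device
# strings accepted by the AUTO / MULTI / HETERO / BATCH plugins.
def composite_device(plugin, devices):
    """Build a device string such as 'MULTI:CPU,GPU' or 'HETERO:GPU,CPU'."""
    if plugin not in ("AUTO", "MULTI", "HETERO", "BATCH"):
        raise ValueError("unknown composite plugin: " + plugin)
    # A bare plugin name (e.g. "AUTO") lets the plugin pick devices itself.
    return plugin if not devices else plugin + ":" + ",".join(devices)

print(composite_device("MULTI", ["CPU", "GPU"]))  # MULTI:CPU,GPU
print(composite_device("AUTO", []))               # AUTO
```

Such a string would then be used as the device argument to compile_model, e.g. compile_model(model, "MULTI:CPU,GPU"), assuming an installed OpenVINO runtime.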
License
OpenVINO™ Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Telemetry
OpenVINO™ collects software performance and usage data for the purpose of improving OpenVINO™ tools. This data is collected directly by OpenVINO™ or through the use of Google Analytics 4. You can opt-out at any time by running the command:
opt_in_out --opt_out
More information is available at https://docs.openvino.ai/latest/openvino_docs_telemetry_information.html.
Documentation
User documentation
The latest documentation for OpenVINO™ Toolkit is available here. It contains detailed information about all OpenVINO components and provides everything you need to create an application based on a binary OpenVINO distribution, or on your own OpenVINO version built without source code modification.
Developer documentation
Developer documentation describes the architectural decisions applied inside the OpenVINO components and contains all the information you need in order to contribute to OpenVINO.
Tutorials
The list of OpenVINO tutorials:
Products which use OpenVINO
System requirements
The system requirements vary depending on platform and are available on dedicated pages:
How to build
See How to build OpenVINO to get more information about the OpenVINO build process.
How to contribute
See Contributions Welcome for good first issues.
See CONTRIBUTING for contribution details. Thank you!
Take the issue
If you wish to be assigned to an issue, please add a comment with the .take command.
Get support
Report questions, issues and suggestions using:
- GitHub* Issues
- The openvino tag on Stack Overflow*
- Forum
Additional Resources
- OpenVINO Wiki
- OpenVINO Storage
- Additional OpenVINO™ toolkit modules:
- Intel® Distribution of OpenVINO™ toolkit Product Page
- Intel® Distribution of OpenVINO™ toolkit Release Notes
- Neural Network Compression Framework (NNCF) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
- OpenVINO™ Training Extensions (OTE) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
- OpenVINO™ Model Server (OVMS) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
- Computer Vision Annotation Tool (CVAT) - an online, interactive video and image annotation tool for computer vision purposes.
- Dataset Management Framework (Datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
* Other names and brands may be claimed as the property of others.