[DOCS] sys req update (#20953)

This commit is contained in:
Karol Blaszczak
2023-11-15 16:00:36 +01:00
committed by GitHub
parent 2c3535355d
commit 4b078f698d
6 changed files with 590 additions and 474 deletions

View File

@@ -8,9 +8,8 @@
openvino_docs_performance_benchmarks
compatibility_and_support
system_requirements
Release Notes <openvino_release_notes>
Additional Resources <resources>
OpenVINO is a toolkit for simple and efficient deployment of various deep learning models.

View File

@@ -1,6 +1,6 @@
# Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}
@sphinxdirective
.. meta::
@@ -8,55 +8,30 @@
Distribution of OpenVINO™ toolkit.
OpenVINO enables you to implement its inference capabilities in your own software,
utilizing various hardware. It currently supports the following processing units
(for more details, see :doc:`system requirements <system_requirements>`):
* :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>`
* :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>`
* :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>`
.. note::
GNA, currently available in the Intel® Distribution of OpenVINO™ toolkit,
will be deprecated, as the underlying hardware is being discontinued
in future CPU generations.
With the OpenVINO™ 2023.0 release, support has been cancelled for:
- Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
- Intel® Vision Accelerator Design with Intel® Movidius™
To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 LTS release.
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| OpenVINO Device | Supported Hardware |
+=====================================================================+======================================================================================================+
|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
|| (x86) | Extensions 512 (Intel® AVX-512), Intel® Advanced Matrix Extensions (Intel® AMX), |
|| | Intel® Core™ Processors with Intel® AVX2, |
|| | Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
|| | |
|| (Arm®) | Raspberry Pi™ 4 Model B, Apple® Mac mini with Apple silicon |
|| | |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel® Processor Graphics including Intel® HD Graphics and Intel® Iris® Graphics, |
|| | Intel® Arc™ A-Series Graphics, Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® |
|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® |
|| | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® |
|| | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, |
|| | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® |
|| | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ |
|| | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, |
|| | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, |
|| | Intel® Core™ i3-1000G4 Processor |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`NPU <openvino_docs_OV_UG_supported_plugins_NPU>` | |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
Besides running inference with a specific device,
OpenVINO offers automated inference management with the following inference modes:
* :doc:`Automatic Device Selection <openvino_docs_OV_UG_supported_plugins_AUTO>` - automatically selects the best device
available for the given task. It offers many additional options and optimizations, including inference on
@@ -67,7 +42,7 @@ Beside inference using a specific device, OpenVINO offers three inference modes
automatically, for example, if one device doesn't support certain operations.
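As a quick illustration, here is a minimal sketch of choosing between a specific device and automated selection (the model path is hypothetical):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   # Target a specific device explicitly ...
   compiled_cpu = core.compile_model(model, "CPU")

   # ... or let Automatic Device Selection pick the best available device.
   compiled_auto = core.compile_model(model, "AUTO")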
Devices similar to the ones used for benchmarking can be accessed using `Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__,
a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution
of OpenVINO™ Toolkit. `Learn more <https://devcloud.intel.com/edge/get_started/devcloud/>`__ or `Register here <https://inteliot.force.com/DevcloudForEdge/s/>`__.
@@ -76,9 +51,7 @@ To learn more about each of the supported devices and modes, refer to the sectio
* :doc:`Inference Device Support <openvino_docs_OV_UG_Working_with_devices>`
* :doc:`Inference Modes <openvino_docs_Runtime_Inference_Modes_Overview>`
For setting up a relevant configuration, refer to the
:doc:`Integrate with Customer Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
topic (step 3 "Configure input and output").

View File

@@ -1,334 +0,0 @@
# Pre-release Information {#prerelease_information}
@sphinxdirective
.. meta::
:description: Check the pre-release information that includes a general
changelog for each version of OpenVINO Toolkit published under
the current cycle.
To ensure you can test OpenVINO's upcoming features even before they are officially released,
OpenVINO developers continue to roll out pre-release software. On this page you can find
a general changelog for each version published under the current cycle.
Your feedback on these new features is critical for us to deliver the best possible production-quality version.
Please file a GitHub issue with the label "pre-release" so we can give it immediate attention. Thank you.
.. note::
These versions are pre-release software and have not undergone full validation or qualification. OpenVINO™ toolkit pre-release is:
* NOT to be incorporated into production software/solutions.
* NOT subject to official support.
* Subject to change in the future.
* Introduced to allow early testing and get early feedback from the community.
.. button-link:: https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=Pre-release%2Csupport_request&projects=&template=pre_release_feedback.yml&title=%5BPre-Release+Feedback%5D%3A
:color: primary
:outline:
:material-regular:`feedback;1.4em` Share your feedback
.. dropdown:: OpenVINO Toolkit 2023.2 Dev 22.09.2023
:animate: fade-in-slide-down
:color: primary
:open:
**What's Changed:**
* CPU runtime:
* Optimized YOLOv8n and YOLOv8s models on BF16/FP32.
* Optimized Falcon model on 4th Generation Intel® Xeon® Scalable Processors.
* GPU runtime:
* int8 weight compression further improves LLM performance. PR #19548
* Optimization for gemm & fc on iGPU. PR #19780
* TensorFlow FE:
* Added support for Selu operation. PR #19528
* Added support for XlaConvV2 operation. PR #19466
* Added support for TensorListLength and TensorListResize operations. PR #19390
* PyTorch FE:
* New operations supported
* aten::minimum aten::maximum. PR #19996
* aten::broadcast_tensors. PR #19994
* aten::logical_and, aten::logical_or, aten::logical_not, aten::logical_xor. PR #19981
* aten::scatter_reduce and extended aten::scatter. PR #19980
* prim::TupleIndex operation. PR #19978
* mixed precision in aten::min/max. PR #19936
* aten::tile. PR #19645
* aten::one_hot. PR #19779
* PReLU. PR #19515
* aten::swapaxes. PR #19483
* non-boolean inputs for __or__ and __and__ operations. PR #19268
* Torchvision NMS can accept negative scores. PR #19826
* New openvino_notebooks:
* Visual Question Answering and Image Captioning using BLIP
**Fixed GitHub issues**
* Fixed #19784 “[Bug]: Cannot install libprotobuf-dev along with libopenvino-2023.0.2 on Ubuntu 22.04” with PR #19788
* Fixed #19617 “Add a clear error message when creating an empty Constant” with PR #19674
* Fixed #19616 “Align openvino.compile_model and openvino.Core.compile_model functions” with PR #19778
* Fixed #19469 “[Feature Request]: Add SeLu activation in the OpenVino IR (TensorFlow Conversion)” with PR #19528
* Fixed #19019 “[Bug]: Low performance of the TF quantized model.” with PR #19735
* Fixed #19018 “[Feature Request]: Support aarch64 python wheel for Linux” with PR #19594
* Fixed #18831 “Question: openvino support for Nvidia Jetson Xavier ?” with PR #19594
* Fixed #18786 “OpenVINO Wheel does not install Debug libraries when CMAKE_BUILD_TYPE is Debug #18786” with PR #19197
* Fixed #18731 “[Bug] Wrong output shapes of MaxPool” with PR #18965
* Fixed #18091 “[Bug] 2023.0 Version crashes on Jetson Nano - L4T - Ubuntu 18.04” with PR #19717
* Fixed #7194 “Conan for simplifying dependency management” with PR #17580
**Acknowledgements:**
Thanks for contributions from the OpenVINO developer community:
* @siddhant-0707,
* @PRATHAM-SPS,
* @okhovan
.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230811
:animate: fade-in-slide-down
:color: secondary
`Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230811>`__
**New features:**
* CPU runtime:
* Enabled weights decompression support for Large Language models (LLMs). The implementation
supports avx2 and avx512 HW targets for Intel® Core™ processors for improved
latency mode (FP32 vs. FP32+INT8 weights comparison). For 4th Generation Intel® Xeon®
Scalable Processors (formerly Sapphire Rapids) this INT8 decompression feature provides
performance improvement, compared to pure BF16 inference.
* Reduced memory consumption of compile model stage by moving constant folding of Transpose
nodes to the CPU Runtime side.
* Set FP16 inference precision by default for non-convolution networks on ARM. Convolution
networks will still be executed in FP32.
* GPU runtime: Added paddings for dynamic convolutions to improve performance for models like
Stable-Diffusion v2.1.
* Python API:
* Added the ``torchvision.transforms`` object to OpenVINO preprocessing.
* Moved all python tools related to OpenVINO into a single namespace,
improving user experience with better API readability.
* TensorFlow FE:
* Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
* Added support for 8 new operations:
* MaxPoolWithArgmax
* UnravelIndex
* AdjustContrastv2
* InvertPermutation
* CheckNumerics
* DivNoNan
* EnsureShape
* ShapeN
* PyTorch FE:
* Added support for 6 new operations. To learn how to convert PyTorch models with the PyTorch frontend, follow
this `link <https://docs.openvino.ai/2023.2/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html#experimental-converting-a-pytorch-model-with-pytorch-frontend>`__.
* aten::concat
* aten::masked_scatter
* aten::linspace
* aten::view_as
* aten::std
* aten::outer
* aten::broadcast_to
**New openvino_notebooks:**
* `245-typo-detector <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/245-typo-detector>`__
: English Typo Detection in sentences with OpenVINO™
* `247-code-language-id <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/247-code-language-id/247-code-language-id.ipynb>`__
: Identify the programming language used in an arbitrary code snippet
* `121-convert-to-openvino <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/121-convert-to-openvino>`__
: Learn OpenVINO model conversion API
* `244-named-entity-recognition <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/244-named-entity-recognition>`__
: Named entity recognition with OpenVINO™
* `246-depth-estimation-videpth <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/246-depth-estimation-videpth>`__
: Monocular Visual-Inertial Depth Estimation with OpenVINO™
* `248-stable-diffusion-xl <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/248-stable-diffusion-xl>`__
: Image generation with Stable Diffusion XL
* `249-oneformer-segmentation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/249-oneformer-segmentation>`__
: Universal segmentation with OneFormer
.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230728
:animate: fade-in-slide-down
:color: secondary
`Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230728>`__
**New features:**
* Common:
- Proxy & hetero plugins have been migrated to API 2.0, providing enhanced compatibility and stability.
- Symbolic shape inference preview is now available, leading to improved performance for Large Language models (LLMs).
* CPU Plugin: Memory efficiency for output data between CPU plugin and the inference request has been significantly improved,
resulting in better performance for LLMs.
* GPU Plugin:
- Enabled support for dynamic shapes in more models, leading to improved performance.
- Introduced the 'if' and DetectionOutput operators to enhance model capabilities.
- Various performance improvements for StableDiffusion, SegmentAnything, U-Net, and Large Language models.
- Optimized dGPU performance through the integration of oneDNN 3.2 and fusion optimizations for MVN, Crop+Concat, permute, etc.
* Frameworks:
- PyTorch Updates: OpenVINO now supports originally quantized PyTorch models, including models produced with the Neural Network Compression Framework (NNCF).
- TensorFlow FE: Now supports Switch/Merge operations, bringing TensorFlow 1.x control flow support closer to full compatibility and enabling more models.
- Python API: Python Conversion API is now the primary conversion path, making it easier for Python developers to work with OpenVINO.
* NNCF: Enabled SmoothQuant method for Post-training Quantization, offering more techniques for quantizing models.
**Distribution:**
* Added conda-forge pre-release channel, simplifying OpenVINO pre-release installation with the ``conda install -c conda-forge/label/openvino_dev openvino`` command.
* Python API is now distributed as a part of conda-forge distribution, allowing users to access it using the command above.
* Runtime can now be installed and used via vcpkg C++ package manager, providing more flexibility in integrating OpenVINO into projects.
**New models:**
* Enabled Large Language models such as open-llama, bloom, dolly-v2, GPT-J, llama-2, and more. We encourage users to try running their custom LLMs and share their feedback with us!
* Optimized performance for Stable Diffusion v2.1 (FP16 and INT8 for GPU) and Clip (CPU, INT8) models, improving their overall efficiency and accuracy.
**New openvino_notebooks:**
* `242-freevc-voice-conversion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/242-freevc-voice-conversion>`__ - High-Quality Text-Free One-Shot Voice Conversion with FreeVC
* `241-riffusion-text-to-music <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/241-riffusion-text-to-music>`__ - Text-to-Music generation using Riffusion
* `220-books-alignment-labse <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/220-cross-lingual-books-alignment>`__ - Cross-lingual Books Alignment With Transformers
* `243-tflite-selfie-segmentation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/243-tflite-selfie-segmentation>`__ - Selfie Segmentation using TFLite
.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230623
:animate: fade-in-slide-down
:color: secondary
The first pre-release for OpenVINO 2023.1, focused on fixing bugs and performance issues.
`Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230623>`__
.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230407
:animate: fade-in-slide-down
:color: secondary
Note that a new distribution channel has been introduced for C++ developers: `Conda Forge <https://anaconda.org/conda-forge/openvino>`__
(the 2022.3.0 release is available there now).
* ARM device support is improved:
* increased model coverage up to the scope of x86,
* dynamic shapes enabled,
* performance boosted for many models including BERT,
* validated for Raspberry Pi 4 and Apple® Mac M1/M2.
* Performance for NLP scenarios is improved, especially for int8 models.
* The CPU device is enabled with BF16 data types, such that quantized models (INT8) can be run with BF16 plus INT8 mixed
precision, taking full advantage of the AMX capability of 4th Generation Intel® Xeon® Scalable Processors
(formerly Sapphire Rapids). Customers get the BF16/INT8 advantage by default.
* Performance is improved on modern, hybrid Intel® Xeon® and Intel® Core™ platforms,
where threads can be reliably and correctly mapped to the E-cores, P-cores, or both CPU core types.
It is now possible to optimize for performance or for power savings as needed.
* Neural Network Compression Framework (NNCF) becomes the quantization tool of choice. It now enables you to perform
post-training optimization, as well as quantization-aware training. Try it out: ``pip install nncf``
(a minimal usage sketch follows this list).
Post-training Optimization Tool (POT) has been deprecated and will be removed in the future
(`PR #16758 <https://github.com/openvinotoolkit/openvino/pull/16758/files>`__).
* New models are enabled, such as:
* Stable Diffusion 2.0,
* Paddle Slim,
* Segment Anything Model (SAM),
* Whisper,
* YOLOv8.
* Bug fixes:
* Fixes the problem of OpenVINO-dev wheel not containing the benchmark_app package.
* Rolls back the default of model saving with the FP16 precision - FP32 is the default again.
* Known issues:
* PyTorch model conversion via convert_model Python API fails if ``silent=false`` is specified explicitly.
By default, this parameter is set to true and there should be no issues.
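A minimal post-training quantization sketch with NNCF, assuming a recent OpenVINO/NNCF pair; the model path and data source are hypothetical:

.. code-block:: python

   import nncf
   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   def transform_fn(data_item):
       # Adapt a raw dataset sample to the model input format.
       return data_item["input"]

   # my_data_source is an assumed iterable of dataset samples.
   calibration_dataset = nncf.Dataset(my_data_source, transform_fn)
   quantized_model = nncf.quantize(model, calibration_dataset)
   ov.save_model(quantized_model, "model_int8.xml")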
.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230407
:animate: fade-in-slide-down
:color: secondary
* Enabled remote tensor in C API 2.0 (accepting tensor located in graph memory)
* Introduced model caching on GPU. Model Caching, which reduces First Inference Latency (FIL), is
extended to work as a single method on both CPU and GPU plug-ins.
* Added the post-training Accuracy-Aware Quantization mechanism for OpenVINO IR. By using this mechanism,
the user can define accuracy drop criteria, which NNCF will take into account during quantization.
* Migrated the CPU plugin to oneDNN 3.1.
* Enabled CPU fall-back for the AUTO plugin - in case of run-time failure of networks on accelerator devices, CPU is used.
* Now, AUTO supports the option to disable CPU as the initial acceleration device to speed up first-inference latency.
* Implemented ``ov::hint::inference_precision``, which enables running network inference independently of the IR precision.
The default mode is FP16; it is possible to infer in FP32 to increase accuracy (see the sketch after this list).
* Optimized performance on dGPU with Intel oneDNN v3.1, especially for transformer models.
* Enabled dynamic shapes on iGPU and dGPU for Transformer (NLP) models. Not all dynamic models are enabled yet, but model coverage will be expanded in the following releases.
* Improved performance for Transformer models for NLP pipelines on CPU.
* Extended support to the following models:
* Enabled MLPerf RNN-T model.
* Enabled Detectron2 MaskRCNN.
* Enabled OpenSeeFace models.
* Enabled Clip model.
* Optimized WeNet model.
Known issues:
* OpenVINO-dev wheel does not contain the benchmark_app package
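As a sketch of how the inference precision hint can be used from Python, with string property keys used as an assumption for brevity and a hypothetical model path:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   # Force FP32 execution even where the device defaults to FP16.
   compiled = core.compile_model(
       model, "GPU", {"INFERENCE_PRECISION_HINT": "f32"}
   )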
.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230217
:animate: fade-in-slide-down
:color: secondary
OpenVINO™ repository tag: `2023.0.0.dev20230217 <https://github.com/openvinotoolkit/openvino/releases/tag/2023.0.0.dev20230217>`__
* Enabled PaddlePaddle Framework 2.4
* Preview of the TensorFlow Lite Frontend: load models directly via ``read_model`` into OpenVINO Runtime and export to the OpenVINO IR format using the model conversion API or ``convert_model`` (see the sketch at the end of this section)
* PyTorch Frontend is available as an experimental feature which will allow you to convert PyTorch models, using convert_model Python API directly from your code without the need to export to the ONNX format. Model coverage is continuously increasing. Feel free to start using the option and give us feedback.
* Model conversion API now uses the TensorFlow Frontend as the default path for conversion to IR. Known limitations compared to the legacy approach are: TF1 Loop, Complex types, models requiring config files, and old Python extensions. The solution detects unsupported functionalities and provides fallback. To force the use of the legacy frontend, ``use_legacy_frontend`` can be specified.
* Model conversion API now supports out-of-the-box conversion of TF2 Object Detection models. At this point, same performance experience is guaranteed only on CPU devices. Feel free to start enjoying TF2 Object Detection models without config files!
* Introduced new option ov::auto::enable_startup_fallback / ENABLE_STARTUP_FALLBACK to control whether to use CPU to accelerate first inference latency for accelerator HW devices like GPU.
* Added a new FrontEndManager ``register_front_end(name, lib_path)`` interface, removing the need for the ``OV_FRONTEND_PATH`` env var (a way to load non-default frontends).
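A minimal sketch of the two conversion paths mentioned above, assuming 2023.0-era APIs; file paths are hypothetical:

.. code-block:: python

   import openvino as ov  # 2023.1+ namespace; use openvino.runtime on 2023.0
   from openvino.tools.mo import convert_model

   core = ov.Core()

   # Load a TensorFlow Lite model directly into the runtime.
   tflite_model = core.read_model("model.tflite")

   # Convert a TensorFlow SavedModel in memory, optionally forcing the
   # legacy frontend.
   ov_model = convert_model("saved_model_dir")
   legacy_model = convert_model("saved_model_dir", use_legacy_frontend=True)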
@endsphinxdirective

View File

@@ -0,0 +1,405 @@
# OpenVINO Release Notes {#openvino_release_notes}
@sphinxdirective
The Intel® Distribution of OpenVINO™ toolkit is an open-source solution for
optimizing and deploying AI inference in domains such as computer vision,
automatic speech recognition, natural language processing, recommendation
systems, and generative AI. With its plug-in architecture, OpenVINO enables
developers to write once and deploy anywhere. We are proud to announce the
release of OpenVINO 2023.2, introducing a range of new features, improvements,
and deprecations aimed at enhancing the developer experience.
2023.2
##########
Summary of major features and improvements
++++++++++++++++++++++++++++++++++++++++++++
* More Generative AI coverage and framework integrations to minimize code changes:
* **Expanded model support for direct PyTorch model conversion** - automatically convert
additional models directly from PyTorch or execute via ``torch.compile`` with OpenVINO
as the backend (see the sketch after this list).
* **New and noteworthy models supported** - we have enabled models used for chatbots,
instruction following, code generation, and many more, including prominent models
like LLaVA, ChatGLM, Bark (text to audio), and LCM (Latent Consistency Models, an
optimized version of Stable Diffusion).
* **Easier optimization and conversion of Hugging Face models** - compress LLM models
to int8 with the Hugging Face Optimum command line interface and export models
to the OpenVINO IR format.
* **OpenVINO is now available on Conan**, a package manager which allows more seamless
package management for large scale projects for C and C++ developers.
* Broader Large Language Model (LLM) support and more model compression techniques
* Accelerate inference for LLM models on Intel® Core™ CPU and iGPU with
the use of int8 model weight compression.
* Expanded model support for dynamic shapes for improved performance on GPU.
* Preview support for int4 model format is now included. Int4 optimized model
weights are now available to try on Intel® Core™ CPU and iGPU, to accelerate
models like Llama 2 and chatGLM2.
* The following int4 model compression formats are supported for inference
in runtime:
* Generative Pre-training Transformer Quantization (GPTQ); GPTQ-compressed
models can be accessed through the Hugging Face repositories.
* Native int4 compression through Neural Network Compression Framework (NNCF).
* More portability and performance to run AI at the edge, in the cloud, or locally.
* **In 2023.1 we announced full support for ARM** architecture. Now we have improved
performance by enabling FP16 model formats for LLMs and integrating additional
acceleration libraries to improve latency.
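As a sketch of the ``torch.compile`` path, assuming a release where importing ``openvino.torch`` registers the backend; the torchvision model is an arbitrary example:

.. code-block:: python

   import torch
   import torchvision.models as models
   import openvino.torch  # registers the "openvino" backend (assumption)

   model = models.resnet50(weights="DEFAULT").eval()
   compiled_model = torch.compile(model, backend="openvino")

   with torch.no_grad():
       output = compiled_model(torch.randn(1, 3, 224, 224))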
Support Change and Deprecation Notices
++++++++++++++++++++++++++++++++++++++++++
* The OpenVINO™ Development Tools package (pip install openvino-dev) is currently being
deprecated and will be removed from installation options and distribution channels
with 2025.0. To learn more, refer to the
:doc:`OpenVINO Legacy Features and Components page <openvino_legacy_features>`.
To ensure optimal performance, install the OpenVINO package (pip install openvino),
which includes essential components such as OpenVINO Runtime, OpenVINO Converter,
and Benchmark Tool.
* Tools:
* :doc:`Deployment Manager <openvino_docs_install_guides_deployment_manager_tool>`
is currently being deprecated and will be removed in the 2024.0 release.
* Accuracy Checker is being deprecated and will be discontinued with 2024.0.
* Post-Training Optimization Tool (POT) is being deprecated and will be
discontinued with 2024.0.
* Model Optimizer is being deprecated and will be fully supported until the 2025.0
release. Model conversion to the OpenVINO IR format should be performed through
OpenVINO Model Converter, which is part of the PyPI package. Follow the
:doc:`Model Optimizer to OpenVINO Model Converter transition <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
guide for a smoother transition. Known limitations are TensorFlow models with
TF1 Control flow and object detection models. These limitations relate to the
gap in TensorFlow direct conversion capabilities, which will be addressed in
upcoming releases.
* Deprecated support for PyTorch 1.13 in Neural Network Compression Framework (NNCF).
* Runtime:
* Intel® Gaussian & Neural Accelerator (Intel® GNA) is being deprecated, the
GNA plugin will be discontinued with 2024.0.
* OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
* Python 3.7 support has been discontinued.
OpenVINO™ Development Tools
++++++++++++++++++++++++++++++++++++++++++
List of components and their changes:
------------------------------------------
* :doc:`OpenVINO Model Converter tool <openvino_docs_model_processing_introduction>`
now supports the original framework shape format.
* `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__
* Added data-free INT4 weight compression support for LLMs in OpenVINO IR with
``nncf.compress_weights()``.
* A preview feature was added for compressing LLM weights to NF4 in OpenVINO IR
with ``nncf.compress_weights()`` (a minimal sketch follows this list).
* Improved quantization time of LLMs with NNCF PTQ API for ``nncf.quantize()``
and ``nncf.quantize_with_accuracy_control()``.
* Added support for SmoothQuant and ChannelAlignment algorithms in NNCF HyperParameter
Tuner for automatic optimization of their hyperparameters during quantization.
* Added quantization support for the ``IF`` operation of models in OpenVINO IR
to speed up such models.
* NNCF Post-training Quantization for PyTorch backend is now supported with
``nncf.quantize()`` and the common implementation of quantization algorithms.
* Added support for PyTorch 2.1. PyTorch 1.13 support has been deprecated.
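A minimal sketch of data-free weight compression on an OpenVINO IR model, assuming NNCF 2.7-era APIs; the model path is hypothetical:

.. code-block:: python

   import nncf
   import openvino as ov

   core = ov.Core()
   model = core.read_model("llm.xml")  # hypothetical IR of an LLM

   # Default mode is INT8; INT4 (and preview NF4) modes are selectable.
   compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT4_SYM)
   ov.save_model(compressed, "llm_int4.xml")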
OpenVINO™ Runtime (previously known as Inference Engine)
---------------------------------------------------------
* OpenVINO Common
* Operations for reference implementations updated from legacy API to API 2.0.
* Symbolic transformation introduced the ability to remove Reshape operations surrounding MatMul operations.
* OpenVINO Python API
* Better support for the ``openvino.properties`` submodule, which now allows the use
of properties directly, without additional parentheses. Example use-case:
``{openvino.properties.cache_dir: "./some_path/"}`` (see the sketch below).
* Added missing properties: ``execution_devices`` and ``loaded_from_cache``.
* Improved error propagation on imports from OpenVINO package.
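A short sketch of the ``openvino.properties`` usage described above; the cache path and model path are hypothetical:

.. code-block:: python

   import openvino as ov
   import openvino.properties as props

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   # Properties are used directly as dictionary keys, without parentheses.
   compiled = core.compile_model(model, "CPU", {props.cache_dir: "./model_cache"})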
* AUTO device plug-in (AUTO)
* Provided an additional option to improve performance of cumulative throughput
(or MULTI), where part of the CPU resources can be reserved for GPU inference
when GPU and CPU are both used for inference (using ov::hint::enable_cpu_pinning(true)).
This avoids the performance issue of CPU resource contention, where there are
not enough CPU resources to schedule tasks for the GPU
(`PR #19214 <https://github.com/openvinotoolkit/openvino/pull/19214>`__). See the sketch below.
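A sketch of enabling cumulative throughput with CPU pinning from Python, using string property keys as an assumption for brevity:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   compiled = core.compile_model(
       model,
       "AUTO:GPU,CPU",
       {
           "PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT",
           "ENABLE_CPU_PINNING": True,
       },
   )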
* CPU
* Introduced support for GPTQ-quantized INT4 models, with improved performance
compared to INT8 weight-compressed or FP16 models. In the CPU plugin,
the gain in performance is achieved by FullyConnected acceleration with
4-bit weight decompression
(`PR #20607 <https://github.com/openvinotoolkit/openvino/pull/20607>`__).
* Improved performance of INT8 weight-compressed large language models on
some platforms, such as 13th Gen Intel Core
(`PR #20607 <https://github.com/openvinotoolkit/openvino/pull/20607>`__).
* Further reduced memory consumption of select large language models on
CPU platforms with AMX and AVX512 ISA, by eliminating extra memory copy
with unified weight layout in matrix multiplication operator
(`PR #19575 <https://github.com/openvinotoolkit/openvino/pull/19575>`__).
* Fixed a performance issue observed in the 2023.1 release on selected Xeon CPU
platforms, with improved thread workload partitioning matching L2 cache
utilization for operators like inner_product
(`PR #20436 <https://github.com/openvinotoolkit/openvino/pull/20436>`__).
* Extended support of the configuration (enable_cpu_pinning) on Windows
platforms to allow fine-grained control over the CPU resources used for inference
workloads, by binding inference threads to CPU cores
(`PR #19418 <https://github.com/openvinotoolkit/openvino/pull/19418>`__).
* Optimized YOLOv8n and YOLOv8s model performance for BF16/FP32 precision.
* Optimized Falcon model on 4th Gen Intel® Xeon® Scalable Processors.
* Enabled support for FP16 inference precision on ARM.
* GPU
* Enhanced inference performance for Large Language Models:
* Introduced int8 weight compression to boost LLM performance. (`PR #19548 <https://github.com/openvinotoolkit/openvino/pull/19548>`__).
* Implemented int4 GPTQ weight compression for improved LLM performance.
* Optimized constant weights for LLMs, resulting in better memory usage
and faster model loading.
* Optimized gemm (general matrix multiply) and fc (fully connected) for
enhanced performance on iGPU. (`PR #19780 <https://github.com/openvinotoolkit/openvino/pull/19780>`__).
* Completed GPU plugin migration to API 2.0.
* Optimized PVC platform for enhanced performance (`PR #19767 <https://github.com/openvinotoolkit/openvino/pull/19767>`__).
* Added dynamic model support using loop operator.
* Added support for oneDNN 3.3 version.
* Model Import Updates
* TensorFlow Framework Support
* Supported conversion of models from memory in keras.Model and tf.function formats.
`PR #19903 <https://github.com/openvinotoolkit/openvino/pull/19903>`__
* Supported TF 2.14.
`PR #20385 <https://github.com/openvinotoolkit/openvino/pull/20385>`__
* New operations supported.
* Fixes:
* Attributes handling for CTCLoss operation.
`PR #20775 <https://github.com/openvinotoolkit/openvino/pull/20775>`__
* Attributes handling for CumSum operation.
`PR #20680 <https://github.com/openvinotoolkit/openvino/pull/20680>`__
* PartitionedCall fix for a mismatch between the numbers of external and internal inputs.
`PR #20680 <https://github.com/openvinotoolkit/openvino/pull/20680>`__
* Preserving input and output tensor names for conversion of models from memory.
`PR #19690 <https://github.com/openvinotoolkit/openvino/pull/19690>`__
* 5D case for FusedBatchNorm.
`PR #19904 <https://github.com/openvinotoolkit/openvino/pull/19904>`__
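A minimal sketch of in-memory conversion, as enabled by the PRs above; the Keras application is an arbitrary example:

.. code-block:: python

   import tensorflow as tf
   import openvino as ov

   keras_model = tf.keras.applications.MobileNetV2(weights=None)

   # ov.convert_model accepts keras.Model and tf.function objects directly.
   ov_model = ov.convert_model(keras_model)
   ov.save_model(ov_model, "mobilenet_v2.xml")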
* PyTorch Framework Support
* Supported INT4 GPTQ models
* New operations supported.
* ONNX Framework Support
* Added support for ONNX version 1.14.1 `PR #18359 <https://github.com/openvinotoolkit/openvino/pull/18359>`__
* New operations supported.
OpenVINO Ecosystem
+++++++++++++++++++++++++++++++++++++++++++++
OpenVINO Model Server
--------------------------
* Introduced an extension of the KServe gRPC API, enabling streaming input and
output for servables with Mediapipe graphs. This extension ensures the persistence
of Mediapipe graphs within a user session, improving processing performance.
This enhancement supports stateful graphs, such as tracking algorithms, and
enables the use of source calculators.
(`see additional documentation <https://github.com/openvinotoolkit/model_server/blob/main/docs/streaming_endpoints.md>`__)
* The Mediapipe framework has been updated to version 0.10.3.
* The model_api used in the OpenVINO inference Mediapipe calculator has been updated
and included with all its features.
* Added a demo showcasing gRPC streaming with Mediapipe graph.
(`see here <https://github.com/openvinotoolkit/model_server/tree/main/demos/mediapipe/holistic_tracking>`__)
* Added parameters for gRPC quota configuration and changed default gRPC channel
arguments to add rate limits. This minimizes the risk of service disruption caused
by an uncontrolled flow of requests.
* Updated Python client requirements to match a wide range of Python versions, from 3.6 to 3.11.
Learn more about the changes at https://github.com/openvinotoolkit/model_server/releases
Jupyter Notebook Tutorials
-----------------------------
* The following notebooks have been updated or newly added:
* `LaBSE <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/220-cross-lingual-books-alignment>`__
Cross-lingual Books Alignment With Transformers
* `LLM chatbot <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/254-llm-chatbot>`__
Create LLM-powered Chatbot
* Updated to include INT4 weights compression and Zephyr 7B model
* `Bark Text-to-Speech <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/256-bark-text-to-audio>`__
Text-to-Speech generation using Bark
* `LLaVA Multimodal Chatbot <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/257-llava-multimodal-chatbot>`__
Visual-language assistant with LLaVA
* `BLIP-Diffusion - Subject-Driven Generation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/258-blip-diffusion-subject-generation>`__
Subject-driven image generation and editing using BLIP Diffusion
* `DeciDiffusion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/259-decidiffusion-image-generation>`__
Image generation with DeciDiffusion
* `Fast Segment Anything <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/261-fast-segment-anything>`__
Object segmentations with FastSAM
* `SoftVC VITS Singing Voice Conversion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/262-softvc-voice-conversion>`__
* `QR Code Monster <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/264-qrcode-monster>`__
Generate creative QR codes with ControlNet QR Code Monster
* `Würstchen <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/265-wuerstchen-image-generation>`__
Text-to-image generation with Würstchen
* `Distil-Whisper <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/267-distil-whisper-asr>`__
Automatic speech recognition using Distil-Whisper and OpenVINO™
* Added optimization support (8-bit quantization, weight compression)
by NNCF for the following notebooks:
* `Image generation with DeepFloyd IF <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/238-deepfloyd-if>`__
* `Instruction following using Databricks Dolly 2.0 <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/240-dolly-2-instruction-following>`__
* `Visual Question Answering and Image Captioning using BLIP <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/233-blip-visual-language-processing>`__
* `Grammatical Error Correction <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/214-grammar-correction>`__
* `Universal segmentation with OneFormer <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/249-oneformer-segmentation>`__
* `Visual-language assistant with LLaVA and OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/257-llava-multimodal-chatbot>`__
* `Image editing with InstructPix2Pix <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/231-instruct-pix2pix-image-editing>`__
* `MMS: Scaling Speech Technology to 1000+ languages <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/255-mms-massively-multilingual-speech>`__
* `Image generation with Latent Consistency Model <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/263-latent-consistency-models-image-generation>`__
* `Object segmentations with FastSAM <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/261-fast-segment-anything>`__
* `Automatic speech recognition using Distil-Whisper <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/267-distil-whisper-asr>`__
Known issues
++++++++++++++++++++++++++++++++++++++++++++
| ID - 118179
| Component - Python API, Plugins
| Description:
| When input byte sizes match, inference methods accept incorrect inputs
in copy mode (share_inputs=False). Example: [1, 4, 512, 512] is allowed when
[1, 512, 512, 4] is required by the model.
| Workaround:
| Pass inputs whose shape and layout match those of the model (see the sketch below).
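A sketch of the workaround: derive the input from the model's own shape instead of relying on matching byte sizes (the model path is hypothetical):

.. code-block:: python

   import numpy as np
   import openvino as ov

   core = ov.Core()
   compiled = core.compile_model("model.xml", "CPU")  # hypothetical IR file

   expected_shape = compiled.input(0).shape  # e.g. [1, 512, 512, 4]
   data = np.zeros(list(expected_shape), dtype=np.float32)
   result = compiled(data, share_inputs=False)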
| ID - 124181
| Component - CPU plugin
| Description:
| On CPU platforms with an L2 cache size of less than 256 KB, such as the i3 series of 8th
Gen Intel® Core™ platforms, some models may hang during model loading.
| Workaround:
| Rebuild the software from OpenVINO master or use the next OpenVINO release.
| ID - 121959
| Component - CPU plugin
| Description:
| During inference using the latency hint on selected hybrid CPU platforms
(such as 12th or 13th Gen Intel® Core™), there is a sporadic occurrence of
increased latency caused by the operating system scheduling of P-cores or
E-cores during OpenVINO initialization.
| Workaround:
| This will be fixed in the next OpenVINO release.
| ID - 123101
| Component - GPU plugin
| Description:
| The GPU plugin may hang on A770 Graphics (dGPU) in the case of a
large batch size (1750).
| Workaround:
| Decrease the batch size and wait for a fixed driver release.
Included in This Release
+++++++++++++++++++++++++++++++++++++++++++++
The Intel® Distribution of OpenVINO™ toolkit is available for download for
three operating systems: Windows, Linux, and macOS.
+--------------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------------+
|| Component                                                         || License                                                   || Location                                         |
+====================================================================+============================================================+==================================================+
|| OpenVINO (Inference Engine) C++ Runtime                           || Dual licensing:                                           || <install_root>/runtime/*                         |
|| Unified API to integrate the inference with application logic     || Intel® OpenVINO™ Distribution License (Version May 2021)  || <install_root>/runtime/include/*                 |
|| OpenVINO (Inference Engine) Headers                               || Apache 2.0                                                ||                                                  |
+--------------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------------+
|| OpenVINO (Inference Engine) Python API                            || Apache 2.0                                                || <install_root>/python/*                          |
+--------------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------------+
|| OpenVINO (Inference Engine) Samples                               || Apache 2.0                                                || <install_root>/samples/*                         |
|| Samples that illustrate OpenVINO C++/ Python API usage            ||                                                           ||                                                  |
+--------------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------------+
|| [Deprecated] Deployment Manager                                   || Apache 2.0                                                || <install_root>/tools/deployment_manager/*        |
|| The Deployment Manager is a Python* command-line tool that        ||                                                           ||                                                  |
|| creates a deployment package by assembling the model, IR files,   ||                                                           ||                                                  |
|| your application, and associated dependencies into a runtime      ||                                                           ||                                                  |
|| package for your target device.                                   ||                                                           ||                                                  |
+--------------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------------+
Legal Information
+++++++++++++++++++++++++++++++++++++++++++++
You may not use or facilitate the use of this document in connection with any infringement
or other legal analysis concerning Intel products described herein.
You agree to grant Intel a non-exclusive, royalty-free license to any patent claim
thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property
rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel
representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may
cause the product to deviate from published specifications. Current characterized errata
are available on request.
Intel technologies' features and benefits depend on system configuration and may require
enabled hardware, software or service activation. Learn more at
`http://www.intel.com/ <http://www.intel.com/>`__
or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks
of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Other names and brands may be claimed as the property of others.
Copyright © 2023, Intel Corporation. All rights reserved.
For more complete information about compiler optimizations, see our Optimization Notice.
Performance varies by use, configuration and other factors. Learn more at
`www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Download
+++++++++++++++++++++++++++++++++++++++++++++
`The OpenVINO product selector tool <https://docs.openvino.ai/install>`__
provides easy access to the right packages that match your desired OS, version,
and distribution options.
@endsphinxdirective

View File

@@ -3,138 +3,164 @@
@sphinxdirective
Certain hardware requires specific drivers to work properly with OpenVINO.
These drivers might require updates to your operating system, including the Linux* kernel,
which are not part of the OpenVINO installation. Refer to your hardware's documentation
for updating instructions.
CPU
##########
.. tab-set::
.. tab-item:: Supported Hardware
* Intel Atom® processor with Intel® SSE4.2 support
* Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
* 6th - 13th generation Intel® Core™ processors
* Intel® Core™ Ultra (code name Meteor Lake)
* Intel® Xeon® Scalable Processors (code name Skylake)
* 2nd Generation Intel® Xeon® Scalable Processors (code name Cascade Lake)
* 3rd Generation Intel® Xeon® Scalable Processors (code name Cooper Lake and Ice Lake)
* 4th Generation Intel® Xeon® Scalable Processors (code name Sapphire Rapids)
* ARM* and ARM64 CPUs; Apple M1, M2 and Raspberry Pi
.. tab-item:: Supported Operating Systems
* Ubuntu 22.04 long-term support (LTS), 64-bit (Kernel 5.15+)
* Ubuntu 20.04 long-term support (LTS), 64-bit (Kernel 5.15+)
* Ubuntu 18.04 long-term support (LTS) with limitations, 64-bit (Kernel 5.4+)
* Windows* 10
* Windows* 11
* macOS* 10.15 and above, 64-bit
* macOS 11 and above, ARM64
* Red Hat Enterprise Linux* 8, 64-bit
* Debian 9 ARM64 and ARM
* CentOS 7 64-bit
GPU
##########
.. tab-set::
.. tab-item:: Supported Hardware
* Intel® HD Graphics
* Intel® UHD Graphics
* Intel® Iris® Pro Graphics
* Intel® Iris® Xe Graphics
* Intel® Iris® Xe Max Graphics
* Intel® Arc™ GPU Series
* Intel® Data Center GPU Flex Series
* Intel® Data Center GPU Max Series
.. tab-item:: Supported Operating Systems
* Ubuntu 22.04 long-term support (LTS), 64-bit
* Ubuntu 20.04 long-term support (LTS), 64-bit
* Windows 10, 64-bit
* Windows 11, 64-bit
* CentOS 7, 64-bit
* Red Hat Enterprise Linux 8, 64-bit
.. tab-item:: Additional considerations
* The use of a GPU requires drivers that are not included in the Intel®
Distribution of OpenVINO™ toolkit package.
* A chipset that supports processor graphics is required for Intel® Xeon®
processors. Processor graphics are not included in all processors. See
`Product Specifications <https://ark.intel.com/>`__
for information about your processor.
* Although this release works with Ubuntu 20.04 for discrete graphics cards,
Ubuntu 20.04 is not the plan of record (POR) for discrete graphics drivers,
so OpenVINO support is limited.
* The following minimum (i.e., used for old hardware) OpenCL™ driver versions
were used during OpenVINO internal validation: 22.43 for Ubuntu 22.04, 21.48
for Ubuntu 20.04, and 21.49 for Red Hat Enterprise Linux 8.
NPU and GNA
#############################
.. tab-set::
.. tab-item:: Operating Systems for NPU
* Ubuntu 22.04 long-term support (LTS), 64-bit
* Windows 11, 64-bit
.. tab-item:: Operating Systems for GNA
* Ubuntu 22.04 long-term support (LTS), 64-bit
* Ubuntu 20.04 long-term support (LTS), 64-bit
* Windows 10, 64-bit
* Windows 11, 64-bit
.. tab-item:: Additional considerations
* These Accelerators require drivers that are not included in the
Intel® Distribution of OpenVINO™ toolkit package.
* Users can access the NPU plugin through the OpenVINO archives on
the download page.
Operating systems and developer environment
#######################################################
.. tab-set::
.. tab-item:: Linux
* Ubuntu 22.04 with Linux kernel 5.15+
* Ubuntu 20.04 with Linux kernel 5.15+
* Red Hat Enterprise Linux 8 with Linux kernel 5.4
Build environment components:
* Python* 3.8-3.11
* Intel® HD Graphics Driver. Required for inference on GPU.
* GNU Compiler Collection and CMake are needed for building from source:
* GNU Compiler Collection (GCC) 7.5 and above
* CMake* 3.10 or higher
To support CPU, GPU, GNA, or hybrid-core CPU capabilities, higher versions of kernel
might be required for 10th Gen Intel® Core™ Processor,
11th Gen Intel® Core™ Processors, 11th Gen Intel® Core™ Processors S-Series Processors,
12th Gen Intel® Core™ Processors, 13th Gen Intel® Core™ Processors, or 4th Gen
Intel® Xeon® Scalable Processors.
A higher kernel version might be required for 10th Gen Intel® Core™ Processors,
11th Gen Intel® Core™ Processors, 11th Gen Intel® Core™ S-Series Processors,
12th Gen Intel® Core™ Processors, 13th Gen Intel® Core™ Processors, Intel® Core™ Ultra
Processors, or 4th Gen Intel® Xeon® Scalable Processors, to support CPU, GPU, GNA, or
hybrid-core CPU capabilities.
.. tab-item:: Windows
* Windows 10
* Windows 11
Build environment components:
* Microsoft Visual Studio* 2019
* CMake 3.10 or higher
* Python* 3.8-3.11
* Intel® HD Graphics Driver (Required only for GPU).
.. tab-item:: macOS
* macOS 10.15 and above
Build environment components:
* Xcode* 10.3
* Python 3.8-3.11
* CMake 3.10 or higher
.. tab-item:: DL framework versions
* TensorFlow* 1.15, 2.12
* MxNet* 1.9.0
* ONNX* 1.14.1
* PaddlePaddle* 2.4
Other DL Framework versions may be compatible with the current OpenVINO
release, but only the versions listed here are fully validated.
.. note::
@@ -148,4 +174,50 @@ Operating system and developer environment requirements
Legal Information
+++++++++++++++++++++++++++++++++++++++++++++
You may not use or facilitate the use of this document in connection with any infringement
or other legal analysis concerning Intel products described herein.
You agree to grant Intel a non-exclusive, royalty-free license to any patent claim
thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property
rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel
representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may
cause the product to deviate from published specifications. Current characterized errata
are available on request.
Intel technologies' features and benefits depend on system configuration and may require
enabled hardware, software or service activation. Learn more at
`http://www.intel.com/ <http://www.intel.com/>`__
or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks
of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Other names and brands may be claimed as the property of others.
Copyright © 2023, Intel Corporation. All rights reserved.
For more complete information about compiler optimizations, see our Optimization Notice.
Performance varies by use, configuration and other factors. Learn more at
`www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
@endsphinxdirective

View File

@@ -45,7 +45,6 @@ offering.
when all major model frameworks became supported directly. For converting model
files explicitly, it has been replaced with a more light-weight and efficient
solution, the OpenVINO Converter (launched with OpenVINO 2023.1).
| :doc:`See how to use OVC <openvino_docs_model_processing_introduction>`
| :doc:`See how to transition from the legacy solution <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
@@ -86,6 +85,7 @@ offering.
| :doc:`See how to use NNCF for model optimization <openvino_docs_model_optimization_guide>`
| `Check the NNCF GitHub project, including documentation <https://github.com/openvinotoolkit/nncf>`__
| **Old Inference API 1.0**
| *New solution:* API 2.0 launched in OpenVINO 2022.1
| *Old solution:* discontinuation planned for OpenVINO 2024.0
@@ -94,6 +94,7 @@ offering.
used but is not recommended. Its discontinuation is planned for 2024.
| :doc:`See how to transition to API 2.0 <openvino_2_0_transition_guide>`
| **Compile tool**
| *New solution:* the tool is no longer needed
| *Old solution:* deprecated in OpenVINO 2023.0
@@ -101,21 +102,21 @@ offering.
| Compile tool is now deprecated. If you need to compile a model for inference on
a specific device, use the following script:
.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/export_compiled_model.py
         :language: python
         :fragment: [export_compiled_model]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/export_compiled_model.cpp
         :language: cpp
         :fragment: [export_compiled_model]
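For reference, a minimal sketch of what such a script does: compile once, export the blob, and re-import it later (paths are hypothetical):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file
   compiled = core.compile_model(model, "CPU")

   # Export the compiled blob ...
   with open("model.blob", "wb") as f:
       f.write(compiled.export_model())

   # ... and import it later, skipping recompilation.
   with open("model.blob", "rb") as f:
       imported = core.import_model(f.read(), "CPU")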
| :doc:`See which devices support import / export <openvino_docs_OV_UG_Working_with_devices>`
| :doc:`Learn more on preprocessing steps <openvino_docs_OV_UG_Preprocessing_Overview>`