DOCS doc structure step 2 port to master (#13084)
* DOCS-doc_structure_step_2 - adjustments to the previous change based on feedback - changes focusing on ModelOptimizer section to mitigate the removal of ONNX and PdPd articles
* remove 2 files we brought back after 22.1
This commit is contained in:
parent
ca8a1c4902
commit
b6c77d21c9
@ -2,7 +2,10 @@

Once you have a model that meets both OpenVINO™'s and your own requirements, you can choose among several ways of deploying it with your application:

* [Run inference and develop your app with OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model online with the OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
* [Deploy your application under the TensorFlow framework with OpenVINO™ Integration](./openvino_ecosystem_ovtf.md).

> **NOTE**: [Running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper inference configuration.
24
docs/Documentation/inference_modes_overview.md
Normal file
@ -0,0 +1,24 @@

# Inference Modes {#openvino_docs_Runtime_Inference_Modes_Overview}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_UG_supported_plugins_AUTO
   openvino_docs_OV_UG_Running_on_multiple_devices
   openvino_docs_OV_UG_Hetero_execution
   openvino_docs_OV_UG_Automatic_Batching

@endsphinxdirective

OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
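For illustration, a minimal Python sketch of the single-device mode described above; the `model.xml` file and the `"CPU"` device name are placeholder assumptions, not values taken from this page.

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")               # hypothetical model file
compiled_model = core.compile_model(model, "CPU")  # one device runs the whole workload
```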
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are listed below, with a short usage sketch following the list:

* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
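A minimal sketch of how the automated modes above are selected through the device string passed to `compile_model`; the model path and the device combinations are illustrative assumptions.

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical model file

compiled_auto   = core.compile_model(model, "AUTO")            # let OpenVINO pick the device
compiled_multi  = core.compile_model(model, "MULTI:CPU,GPU")   # run on several devices in parallel
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # split the model across devices
# Auto-batching is typically applied implicitly, e.g. through the THROUGHPUT performance hint.
```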
@ -2,11 +2,21 @@

Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).

This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [Browse a database of models for use in your projects](../model_zoo.md).

[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows converting them to its own format, OpenVINO IR, providing a tool dedicated to this task.

[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [altering input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md), and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
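As a rough illustration of the Model Optimizer step, the sketch below simply shells out to the `mo` command-line tool; the source model name, target shape, and output directory are placeholder assumptions.

```python
import subprocess

# Convert a framework model to OpenVINO IR (model.xml + model.bin),
# altering the input shape at conversion time.
subprocess.run(
    ["mo",
     "--input_model", "model.onnx",     # hypothetical source model
     "--input_shape", "[1,3,224,224]",  # example of altering the input shape
     "--output_dir", "ir"],
    check=True,
)
```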
The approach of fully converting a model is considered the default choice, as it enables the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.

Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. This provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
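A minimal sketch of this direct import path; the file name and device are placeholder assumptions.

```python
import openvino.runtime as ov

core = ov.Core()
# ONNX (or PaddlePaddle) models can be read directly, with no prior conversion to IR.
model = core.read_model("model.onnx")
compiled_model = core.compile_model(model, "CPU")
```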
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md).
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).

To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
@ -1,4 +1,4 @@

# Converting Models with Model Optimizer {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
# Model Optimizer Usage {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}

@sphinxdirective
@ -8,19 +8,12 @@
   :maxdepth: 1
   :hidden:

   openvino_docs_model_inputs_outputs
   openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
   openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
   openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
   openvino_docs_MO_DG_Additional_Optimization_Use_Cases
   openvino_docs_MO_DG_FP16_Compression
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
   openvino_docs_MO_DG_prepare_model_convert_model_tutorials
   openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ

@endsphinxdirective
@ -41,7 +34,7 @@ where IR is a pair of files describing the model:

* <code>.bin</code> - Contains the weights and biases binary data.

The generated IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
The OpenVINO IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
that applies post-training quantization methods.

> **TIP**: You can also work with Model Optimizer in OpenVINO™ [Deep Learning Workbench (DL Workbench)](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html), which is a web-based tool with a GUI for optimizing, fine-tuning, analyzing, visualizing, and comparing performance of deep learning models.
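For context, a minimal sketch of loading the IR pair discussed above; the file name is a placeholder, and the matching `.bin` file is assumed to sit next to the `.xml` file.

```python
import openvino.runtime as ov

core = ov.Core()
# Reading "model.xml" also picks up the weights from "model.bin" in the same
# directory, so no format conversion happens at load time.
model = core.read_model("model.xml")
```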
@ -1,7 +1,7 @@

# Model Optimization Techniques {#openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques}

Optimization offers methods to accelerate inference with convolutional neural networks (CNN) that do not require model retraining.

* * *

## Linear Operations Fusing
@ -0,0 +1,37 @@

# Supported Model Formats {#Supported_Model_Formats}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
   openvino_docs_MO_DG_prepare_model_convert_model_tutorials

@endsphinxdirective
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.

**ONNX, PaddlePaddle** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX and PaddlePaddle, see how to [Integrate OpenVINO™ with Your Application](../../../OV_Runtime_UG/integrate_with_your_application.md).

**TensorFlow, PyTorch, MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to one of the formats listed above. Conversion from these formats to OpenVINO IR is performed with Model Optimizer. In some cases, other converters need to be used as intermediaries.

Refer to the following articles for details on conversion for different formats and models:

* [How to convert ONNX](./Convert_Model_From_ONNX.md)
* [How to convert PaddlePaddle](./Convert_Model_From_Paddle.md)
* [How to convert TensorFlow](./Convert_Model_From_TensorFlow.md)
* [How to convert PyTorch](./Convert_Model_From_PyTorch.md)
* [How to convert MXNet](./Convert_Model_From_MxNet.md)
* [How to convert Caffe](./Convert_Model_From_Caffe.md)
* [How to convert Kaldi](./Convert_Model_From_Kaldi.md)

* [Conversion examples for specific models](./Convert_Model_Tutorials.md)
13
docs/MO_DG/prepare_model/model_inputs_outputs.md
Normal file
@ -0,0 +1,13 @@
# Model Inputs and Outputs, Shapes and Layouts {#openvino_docs_model_inputs_outputs}

Users interact with a model by passing data to its _inputs_ before inference and retrieving data from its _outputs_ after inference. A model may have one or multiple inputs and outputs. Normally, in the OpenVINO™ toolkit, all inputs and outputs of the converted model are identified in the same way as in the original framework model.

OpenVINO uses the _names of tensors_ for identification. Depending on the framework, the names of tensors are formed differently.

A model accepts inputs and produces outputs of some _shape_. Shape defines the number of dimensions in a tensor and their order. For example, an image classification model can accept a tensor of shape [1, 3, 240, 240] and produce a tensor of shape [1, 1000].

The meaning of each dimension in the shape is specified by its _layout_. Layout is an interpretation of shape dimensions. OpenVINO toolkit conversion tools and APIs keep all dimensions and their order unchanged and aligned with the original framework model. Original models usually do not contain layout information explicitly, but various pre-processing and post-processing scenarios in the OpenVINO Runtime API sometimes require the layout to be specified explicitly. We recommend specifying layouts for inputs and outputs during model conversion.

OpenVINO also supports _partially defined shapes_, where some of the dimensions are undefined. Undefined dimensions are kept intact in the final IR file, and you can define them later, at runtime. Undefined dimensions can be used as [dynamic dimensions](../../OV_Runtime_UG/ov_dynamic_shapes.md) for certain hardware and models, which enables you to change the shape of input data dynamically in each infer request. For example, the sequence length dimension of the BERT model can be left undefined, and data of varying size along this dimension can be fed on the CPU.
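A minimal sketch of inspecting a model's input shape and making dimensions dynamic at runtime, along the lines of the paragraphs above; the model path and the chosen dimensions are placeholder assumptions.

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical converted model

# Inspect the (possibly partially defined) shape of the first input.
print(model.input().get_partial_shape())

# Mark the spatial dimensions as dynamic, e.g. for variable image sizes.
model.reshape({model.input().any_name: ov.PartialShape([1, 3, -1, -1])})
```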
To learn how a model is represented in OpenVINO™ Runtime, see [Model Representation in OpenVINO™ Runtime](../../OV_Runtime_UG/model_representation.md).
@ -1,67 +0,0 @@

# ONNX Format Support {#ONNX_Format_Support}

Since the 2020.4 release, OpenVINO™ has supported native usage of ONNX models. The `core.read_model()` method, which is the recommended approach to reading models, provides a uniform way to work with OpenVINO IR and ONNX formats alike. Example:

@sphinxdirective
.. tab:: C++

    .. code-block:: cpp

        ov::Core core;
        std::shared_ptr<ov::Model> model = core.read_model("model.xml");

.. tab:: Python

    .. code-block:: python

        import openvino.runtime as ov
        core = ov.Core()
        model = core.read_model("model.xml")

@endsphinxdirective
While ONNX models are directly supported by OpenVINO™, it can be useful to convert them to IR format to take advantage of advanced OpenVINO optimization tools and features. For information on how to convert an ONNX model to the OpenVINO IR format, see the [Converting an ONNX Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md) page.

### Reshape Feature
OpenVINO™ does not provide a mechanism to specify pre-processing for the ONNX format, such as mean value subtraction or reversing input channels. If an ONNX model contains dynamic shapes for input, see the [Changing input shapes](ShapeInference.md) documentation.

### Weights Saved in External Files
OpenVINO™ supports ONNX models that store weights in external files. This is especially useful for models larger than 2GB because of protobuf limitations. To read such models:
@sphinxdirective
.. tab:: C++

    * Use the `read_model` overload that takes `modelPath` as the input parameter (both `std::string` and `std::wstring`).
    * The `binPath` argument of `read_model` should be empty. Otherwise, a runtime exception is thrown because paths to external weights are saved directly in the ONNX model.
    * Reading models with external weights is **NOT** supported by the `read_model()` overload.

.. tab:: Python

    * Use the `model` parameter in the `openvino.runtime.Core.read_model(model : "path_to_onnx_file")` method.
    * The `weights` parameter, for the path to the binary weight file, should be empty. Otherwise, a runtime exception is thrown because paths to external weights are saved directly in the ONNX model.
    * Reading models with external weights is **NOT** supported by the `read_model(weights: "path_to_bin_file")` parameter.

@endsphinxdirective

Paths to external weight files are saved in an ONNX model. They are relative to the model's directory path, which means that for a model located at `workspace/models/model.onnx` and a weights file at `workspace/models/data/weights.bin`, the path saved in the model would be: `data/weights.bin`.
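A minimal Python sketch of reading such a model; the paths reuse the example above, and since the relative paths to external weights are stored inside the ONNX file, no `weights` argument is passed.

```python
import openvino.runtime as ov

core = ov.Core()
# External weights (e.g. data/weights.bin) are resolved relative to the
# model's directory, so only the model path is provided.
model = core.read_model("workspace/models/model.onnx")
```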
Note that a single model can use many external weights files.
Moreover, the data of many tensors can be stored in a single external weights file, accessed using offset and length values, which can also be saved in the model.

The following input parameters are NOT supported for ONNX models and should be passed as empty (none) or not at all:

* for `ReadNetwork` (C++):
    * `const std::wstring& binPath`
    * `const std::string& binPath`
    * `const Tensor& weights`
* for [openvino.runtime.Core.read_model](https://docs.openvino.ai/latest/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.read_model):
    * `weights`

You can find more details about the external data mechanism in the [ONNX documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md).
To convert a model to use the external data feature, you can use [ONNX helper functions](https://github.com/onnx/onnx/blob/master/onnx/external_data_helper.py).

Unsupported types of tensors:
* string
* complex64
* complex128
@ -1,4 +1,4 @@

# Running on Multiple Devices Simultaneously {#openvino_docs_OV_UG_Running_on_multiple_devices}
# Multi-device execution {#openvino_docs_OV_UG_Running_on_multiple_devices}

@sphinxdirective
@ -1,4 +1,4 @@

# Performing Inference with OpenVINO Runtime {#openvino_docs_OV_UG_OV_Runtime_User_Guide}
# Inference with OpenVINO Runtime {#openvino_docs_OV_UG_OV_Runtime_User_Guide}

@sphinxdirective
@ -9,17 +9,13 @@
   :hidden:

   openvino_docs_OV_UG_Integrate_OV_with_your_application
   openvino_docs_OV_UG_ShapeInference
   openvino_docs_Runtime_Inference_Modes_Overview
   openvino_docs_OV_UG_Working_with_devices
   openvino_docs_OV_UG_ShapeInference
   openvino_docs_OV_UG_Preprocessing_Overview
   openvino_docs_OV_UG_DynamicShapes
   openvino_docs_OV_UG_supported_plugins_AUTO
   openvino_docs_OV_UG_Running_on_multiple_devices
   openvino_docs_OV_UG_Hetero_execution
   openvino_docs_OV_UG_Performance_Hints
   openvino_docs_OV_UG_Automatic_Batching
   openvino_docs_OV_UG_network_state_intro
   ONNX_Format_Support

@endsphinxdirective
@ -1,4 +1,4 @@

# Working with devices {#openvino_docs_OV_UG_Working_with_devices}
# Inference Device Support {#openvino_docs_OV_UG_Working_with_devices}

@sphinxdirective
@ -15,27 +15,19 @@

@endsphinxdirective

The OpenVINO Runtime provides capabilities to infer deep learning models on the following device types with corresponding plugins:
OpenVINO™ Runtime can infer deep learning models using the following device types:

| Plugin | Device types |
|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
|[CPU](CPU.md) |Intel® Xeon®, Intel® Core™ and Intel® Atom® processors with Intel® Streaming SIMD Extensions (Intel® SSE4.2), Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® Vector Neural Network Instructions (Intel® AVX512-VNNI) and bfloat16 extension for AVX-512 (Intel® AVX-512_BF16 Extension)|
|[GPU](GPU.md) |Intel® Graphics, including Intel® HD Graphics, Intel® UHD Graphics, Intel® Iris® Graphics, Intel® Xe Graphics, Intel® Xe MAX Graphics |
|[VPUs](VPU.md) |Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |
|[GNA](GNA.md) |[Intel® Speech Enabling Developer Kit](https://www.intel.com/content/www/us/en/support/articles/000026156/boards-and-kits/smart-home.html); [Amazon Alexa Premium Far-Field Developer Kit](https://developer.amazon.com/en-US/alexa/alexa-voice-service/dev-kits/amazon-premium-voice); [Intel® Pentium® Silver Processors N5xxx, J5xxx and Intel® Celeron® Processors N4xxx, J4xxx (formerly codenamed Gemini Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/83915/gemini-lake.html): [Intel® Pentium® Silver J5005 Processor](https://ark.intel.com/content/www/us/en/ark/products/128984/intel-pentium-silver-j5005-processor-4m-cache-up-to-2-80-ghz.html), [Intel® Pentium® Silver N5000 Processor](https://ark.intel.com/content/www/us/en/ark/products/128990/intel-pentium-silver-n5000-processor-4m-cache-up-to-2-70-ghz.html), [Intel® Celeron® J4005 Processor](https://ark.intel.com/content/www/us/en/ark/products/128992/intel-celeron-j4005-processor-4m-cache-up-to-2-70-ghz.html), [Intel® Celeron® J4105 Processor](https://ark.intel.com/content/www/us/en/ark/products/128989/intel-celeron-j4105-processor-4m-cache-up-to-2-50-ghz.html), [Intel® Celeron® J4125 Processor](https://ark.intel.com/content/www/us/en/ark/products/197305/intel-celeron-processor-j4125-4m-cache-up-to-2-70-ghz.html), [Intel® Celeron® Processor N4100](https://ark.intel.com/content/www/us/en/ark/products/128983/intel-celeron-processor-n4100-4m-cache-up-to-2-40-ghz.html), [Intel® Celeron® Processor N4000](https://ark.intel.com/content/www/us/en/ark/products/128988/intel-celeron-processor-n4000-4m-cache-up-to-2-60-ghz.html); [Intel® Pentium® Processors N6xxx, J6xxx, Intel® Celeron® Processors N6xxx, J6xxx and Intel Atom® x6xxxxx (formerly codenamed Elkhart Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/128825/products-formerly-elkhart-lake.html); [Intel® Core™ Processors (formerly codenamed Cannon Lake)](https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html); [10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/74979/ice-lake.html): [Intel® Core™ i7-1065G7 Processor](https://ark.intel.com/content/www/us/en/ark/products/196597/intel-core-i71065g7-processor-8m-cache-up-to-3-90-ghz.html), [Intel® Core™ i7-1060G7 Processor](https://ark.intel.com/content/www/us/en/ark/products/197120/intel-core-i71060g7-processor-8m-cache-up-to-3-80-ghz.html), [Intel® Core™ i5-1035G4 Processor](https://ark.intel.com/content/www/us/en/ark/products/196591/intel-core-i51035g4-processor-6m-cache-up-to-3-70-ghz.html), [Intel® Core™ i5-1035G7 Processor](https://ark.intel.com/content/www/us/en/ark/products/196592/intel-core-i51035g7-processor-6m-cache-up-to-3-70-ghz.html), [Intel® Core™ i5-1035G1 Processor](https://ark.intel.com/content/www/us/en/ark/products/196603/intel-core-i51035g1-processor-6m-cache-up-to-3-60-ghz.html), [Intel® Core™ i5-1030G7 Processor](https://ark.intel.com/content/www/us/en/ark/products/197119/intel-core-i51030g7-processor-6m-cache-up-to-3-50-ghz.html), [Intel® Core™ i5-1030G4 Processor](https://ark.intel.com/content/www/us/en/ark/products/197121/intel-core-i51030g4-processor-6m-cache-up-to-3-50-ghz.html), [Intel® Core™ i3-1005G1 Processor](https://ark.intel.com/content/www/us/en/ark/products/196588/intel-core-i31005g1-processor-4m-cache-up-to-3-40-ghz.html), [Intel® Core™ i3-1000G1 
Processor](https://ark.intel.com/content/www/us/en/ark/products/197122/intel-core-i31000g1-processor-4m-cache-up-to-3-20-ghz.html), [Intel® Core™ i3-1000G4 Processor](https://ark.intel.com/content/www/us/en/ark/products/197123/intel-core-i31000g4-processor-4m-cache-up-to-3-20-ghz.html); [11th Generation Intel® Core™ Processors (formerly codenamed Tiger Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html); [12th Generation Intel® Core™ Processors (formerly codenamed Alder Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/147470/products-formerly-alder-lake.html)|
|[Arm® CPU](ARM_CPU.md) |Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |

* [CPU](CPU.md)
* [GPU](GPU.md)
* [VPUs](VPU.md)
* [GNA](GNA.md)
* [Arm® CPU](ARM_CPU.md)

OpenVINO Runtime also has several execution capabilities which work on top of other devices (a short selection sketch follows the table):

| Capability | Description |
|------------|-------------|
|[Multi-Device execution](../multi_device.md) |Multi-Device enables simultaneous inference of the same model on several devices in parallel. |
|[Auto-Device selection](../auto_device_selection.md) |Auto-Device selection enables selecting an Intel device for inference automatically. |
|[Heterogeneous execution](../hetero_execution.md) |Heterogeneous execution enables automatic inference splitting between several devices (for example, if a device doesn't [support a certain operation](#supported-layers)). |
|[Automatic Batching](../automatic_batching.md) |The Auto-Batching plugin enables batching (on top of the specified device) that is completely transparent to the application. |
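As a rough illustration of device selection across these plugins and capabilities, a short Python sketch; the devices actually reported depend on the machine, and the model path is a placeholder assumption.

```python
import openvino.runtime as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'] on a typical desktop

model = core.read_model("model.xml")                          # hypothetical model
compiled_cpu   = core.compile_model(model, "CPU")             # a single plugin
compiled_auto  = core.compile_model(model, "AUTO")            # automatic device selection
compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")   # multi-device execution
```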
For a more detailed list of hardware, see [Supported Devices](./Supported_Devices.md).

Devices similar to the ones used for benchmarking can be accessed using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/), a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. [Learn more](https://devcloud.intel.com/edge/get_started/devcloud/) or [Register here](https://inteliot.force.com/DevcloudForEdge/s/).

@anchor features_support_matrix
## Feature Support Matrix
The table below demonstrates support of key features by OpenVINO device plugins.

@ -101,3 +93,6 @@ So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYR
   :fragment: [part3]

@endsphinxdirective
@ -12,13 +12,22 @@

.. toctree::
   :maxdepth: 1
   :caption: Converting and Preparing Models
   :caption: Model preparation
   :hidden:

   openvino_docs_model_processing_introduction
   Supported_Model_Formats
   openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
   omz_tools_downloader

.. toctree::
   :maxdepth: 1
   :caption: Running Inference
   :hidden:

   openvino_docs_OV_UG_OV_Runtime_User_Guide
   openvino_inference_engine_tools_compile_tool_README

.. toctree::
   :maxdepth: 1
@ -39,9 +48,8 @@
   :hidden:

   openvino_docs_deployment_guide_introduction
   openvino_docs_OV_UG_OV_Runtime_User_Guide
   openvino_deployment_guide
   openvino_inference_engine_tools_compile_tool_README

.. toctree::
@ -57,17 +65,6 @@
   workbench_docs_Workbench_DG_Introduction

.. toctree::
   :maxdepth: 1
   :hidden:
   :caption: Media Processing and Computer Vision Libraries

   Intel® Deep Learning Streamer <openvino_docs_dlstreamer>
   openvino_docs_gapi_gapi_intro
   OpenCV* Developer Guide <https://docs.opencv.org/master/>
   OpenCL™ Developer Guide <https://software.intel.com/en-us/openclsdk-devguide>

.. toctree::
   :maxdepth: 1
   :caption: OpenVINO Extensibility
@ -86,6 +83,18 @@
   openvino_docs_security_guide_workbench
   openvino_docs_OV_UG_protecting_model_guide
   ovsa_get_started

.. toctree::
   :maxdepth: 1
   :hidden:
   :caption: Media Processing and Computer Vision Libraries

   Intel® Deep Learning Streamer <openvino_docs_dlstreamer>
   openvino_docs_gapi_gapi_intro
   OpenCV* Developer Guide <https://docs.opencv.org/master/>
   OpenCL™ Developer Guide <https://software.intel.com/en-us/openclsdk-devguide>
   OneVPL Developer Guide <https://www.intel.com/content/www/us/en/developer/articles/release-notes/oneapi-video-processing-library-release-notes.html>

@endsphinxdirective

This section provides reference documents that guide you through the OpenVINO toolkit workflow, from obtaining models and optimizing them to deploying them in your own deep learning applications.