[DOCS] Adding metadata to articles (#18331)

* adding-metadata

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Sebastian Golebiewski 2023-07-03 13:09:07 +02:00 committed by GitHub
parent cb8d34ddc1
commit 152c9b63e2
428 changed files with 2077 additions and 158 deletions

View File

@ -1,6 +1,11 @@
# Datumaro {#datumaro_documentation}
@sphinxdirective
.. meta::
:description: Start working with Datumaro, which offers functionalities for basic data
import/export, validation, correction, filtration and transformations.
Datumaro provides a suite of basic data import/export (IE) for more than 35 public vision data
formats and manipulation functionalities such as validation, correction, filtration, and some

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Explore OpenCV Graph API and other media processing frameworks
used for development of computer vision solutions.
.. toctree::
:maxdepth: 1

View File

@ -1,6 +1,11 @@
# Model Preparation {#openvino_docs_model_processing_introduction}
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn how to convert and compile models from different frameworks or read them directly.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ is an ecosystem of utilities that have advanced capabilities, which help develop deep learning solutions.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -1,6 +1,12 @@
# OpenVINO™ Training Extensions {#ote_documentation}
@sphinxdirective
.. meta::
:description: OpenVINO™ Training Extensions include advanced algorithms used
to create, train and convert deep learning models with OpenVINO
Toolkit for optimized inference.
OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
Deep Learning models and convert them using the `OpenVINO™

View File

@ -3,6 +3,11 @@
@sphinxdirective
.. meta::
:description: OpenVINO toolkit workflow usually involves preparation,
optimization, and compression of models, running inference and
deploying deep learning applications.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn the details of custom kernel support for the GPU device to
enable operations not supported by OpenVINO.
To enable operations not supported by OpenVINO™ out of the box, you may need an extension for OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extensibility API, which allows adding
support for models with custom operations and their further implementation
in applications.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extension API which enables registering
custom operations to support models with operations
not supported by OpenVINO.
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application, you need to build a separate shared library implemented in C++ first and load it in Python using the ``add_extension`` API. Please refer to :ref:`Create library with extensions <create_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.
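As a quick, hedged illustration of that flow from the Python side (the library file name and model path below are assumptions, not files shipped with OpenVINO):

.. code-block:: py

   from openvino.runtime import Core

   core = Core()
   # Load the shared library that implements the custom operations in C++
   # (the file name is hypothetical).
   core.add_extension("libcustom_extensions.so")
   # A model containing the custom operations can now be read as usual.
   model = core.read_model("model_with_custom_ops.xml")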
Operation Class

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to use frontend extension classes to facilitate the mapping
of custom operations from the framework model representation to the OpenVINO
representation.
The goal of this chapter is to explain how to use Frontend extension classes to facilitate
mapping of custom operations from framework model representation to OpenVINO representation.
Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Get to know how Graph Rewrite handles running multiple matcher passes on
ov::Model in a single graph traversal.
``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` is used to run multiple matcher passes on ``:ref:`ov::Model <doxid-classov_1_1_model>``` in a single graph traversal.
Example:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to create a pattern, implement a callback, register
the pattern and Matcher to execute MatcherPass transformation
on a model.
``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>``` is used for pattern-based transformations.
Template for MatcherPass transformation class

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to use Model Pass transformation class to take entire
ov::Model as input and process it.
``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>``` is used for transformations that take the entire ``:ref:`ov::Model <doxid-classov_1_1_model>``` as input and process it.
Template for ModelPass transformation class

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to apply additional model optimizations or transform
unsupported subgraphs and operations, using OpenVINO™ Transformations API.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Use the base ov::IAsyncInferRequest class to implement a custom asynchronous inference request in OpenVINO.
Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors depending on a device pipeline structure.
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class:

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to build a plugin using CMake and OpenVINO Developer Package.
OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.
OpenVINO Developer Package

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::CompiledModel class as the base class for a compiled
model and to create an arbitrary number of ov::InferRequest objects.
ov::CompiledModel class functionality:
* Compile an ov::Model instance to a backend specific graph representation

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::ISyncInferRequest interface as the base class to implement a synchronous inference request in OpenVINO.
``InferRequest`` class functionality:
* Allocate input and output tensors needed for a backend-dependent network inference.

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Develop and implement independent inference solutions for
different devices with the components of plugin architecture
of OpenVINO.
.. toctree::
:maxdepth: 1
:caption: Converting and Preparing Models

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO Plugin API, which includes functions and
helper classes that simplify the development of new plugins.
OpenVINO Plugin usually represents a wrapper around a backend. Backends can be:
* OpenCL-like backend (e.g. clDNN library) for GPU devices.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Use the openvino::funcSharedTests library, which includes
a predefined set of functional tests and utilities to verify a plugin.
OpenVINO tests infrastructure provides a predefined set of functional tests and utilities. They are used to verify a plugin using the OpenVINO public API.
All the tests are written in the `Google Test C++ framework <https://github.com/google/googletest>`__.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Use the ov::Property class to define access rights and
specific properties of an OpenVINO plugin.
A plugin can provide its own device-specific properties.
Property Class

View File

@ -3,6 +3,11 @@
@sphinxdirective
.. meta::
:description: Learn about the support for quantized models with different
precisions and the FakeQuantize operation used to express
quantization rules.
One of the features of OpenVINO is support for quantized models with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized models which can be expressed in IR have a unified representation by means of *FakeQuantize* operation.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::RemoteContext class as the base class for a plugin-specific remote context.
ov::RemoteContext class functionality:
* Represents device-specific inference context.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::IRemoteTensor interface as a base class for device-specific remote tensors.
ov::RemoteTensor class functionality:
* Provides an interface to work with device-specific memory.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn more about plugin development and specific features in
OpenVINO: precision transformations and support for quantized
models with different precisions.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about extra API references required for the development of
plugins in OpenVINO.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Learn about AvgPoolPrecisionPreserved attribute used only during AvgPool operation.
:ref:`ngraph::AvgPoolPrecisionPreservedAttribute <doxid-classngraph_1_1_avg_pool_precision_preserved_attribute>` class represents the ``AvgPoolPrecisionPreserved`` attribute.
A utility attribute, used only during the definition of the precision preserved property for the ``AvgPool`` operation.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about IntervalsAlignment attribute, which describes a subgraph with the same quantization intervals alignment.
:ref:`ngraph::IntervalsAlignmentAttribute <doxid-classngraph_1_1_intervals_alignment_attribute>` class represents the ``IntervalsAlignment`` attribute.
The attribute defines a subgraph with the same quantization intervals alignment. ``FakeQuantize`` operations are included. The attribute is used by quantization operations.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about PrecisionPreserved attribute, which describes a precision preserved operation.
:ref:`ngraph::PrecisionPreservedAttribute <doxid-classngraph_1_1_precision_preserved_attribute>` class represents the ``PrecisionPreserved`` attribute.
The attribute defines a precision preserved operation. If the attribute is absent, then an operation is not precision preserved.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about Precisions attribute, which describes the precision required for an input/output port or an operation.
:ref:`ngraph::PrecisionsAttribute <doxid-classngraph_1_1_precisions_attribute>` class represents the ``Precisions`` attribute.
The attribute defines the precision required for an input/output port or an operation.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about QuantizationAlignment attribute, which describes a subgraph with the same quantization alignment.
:ref:`ngraph::QuantizationAlignmentAttribute <doxid-classngraph_1_1_quantization_alignment_attribute>` class represents the ``QuantizationAlignment`` attribute.
The attribute defines a subgraph with the same quantization alignment. ``FakeQuantize`` operations are not included. The attribute is used by quantization operations.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about QuantizationGranularity attribute, which describes quantization granularity of operation inputs.
ngraph::QuantizationGranularityAttribute class represents the ``QuantizationGranularity`` attribute.
The attribute defines quantization granularity of operation inputs.

View File

@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Learn about low precision transformations used to infer a quantized model in low precision with the maximum performance on Intel CPU, GPU, and ARM platforms.
.. toctree::
:maxdepth: 1
:caption: Low Precision Transformations

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Check the lists of attributes created or used by model transformations.
.. toctree::
:maxdepth: 1
:caption: Attributes

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about optional Prerequisites transformations, that
prepare a model before applying other low precision transformations.
Prerequisites transformations are optional. The transformations prepare a model before running other low precision transformations. The transformations do not operate with dequantization operations or update precisions. Prerequisites transformations include:
* :doc:`PullReshapeThroughDequantization <openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization>`

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about markup transformations, which are used to create
attributes for input and output ports and operations during runtime.
This step defines the optimal ``FakeQuantize`` decomposition precisions for the best inference performance via operations markup with runtime attribute instances. Attributes are created for input and output ports and operations. Transformations do not change the operation output port precisions. A model markup low precision logic is decomposed and implemented into the following common markup transformations. The order of transformations is important:
1. :doc:`MarkupBias <openvino_docs_OV_UG_lpt_MarkupBias>`

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn about main transformations, which are mostly low
precision transformations that handle decomposition and
dequantization operations.
Main transformations are the majority of low precision transformations. Transformations operate with dequantization operations. Main transformations include:
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Check the list of transformations used to clean up the
resulting model to avoid unhandled dequantization operations.
* :doc:`EliminateFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_EliminateFakeQuantizeTransformation>`
* :doc:`FoldConvertTransformation <openvino_docs_OV_UG_lpt_FoldConvertTransformation>`
* :doc:`FoldFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation>`

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about legal information and policies related to the use
of Intel® Distribution of OpenVINO™ toolkit.
Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).

View File

@ -15,6 +15,11 @@
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
.. meta::
:description: Model conversion (MO) facilitates the transition between training and
deployment environments; it adjusts deep learning models for
optimal execution on target devices.
To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn the essentials of representing deep learning models in OpenVINO
IR format and the use of supported operation sets.
.. toctree::
:maxdepth: 1
:hidden:
@ -9,7 +13,7 @@
openvino_docs_ops_opset
openvino_docs_operations_specifications
openvino_docs_ops_broadcast_rules
This article provides essential information on the format used for representation of deep learning models in OpenVINO toolkit and supported operation sets.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
Caffe format to the OpenVINO Intermediate Representation.
.. warning::

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
Kaldi format to the OpenVINO Intermediate Representation.
.. warning::

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
MXNet format to the OpenVINO Intermediate Representation.
.. warning::
@ -14,7 +18,7 @@ To convert an MXNet model, run Model Optimizer with the path to the ``.params``
mo --input_model model-file-0000.params
Using MXNet-Specific Conversion Parameters
##########################################
The following list provides the MXNet-specific parameters.
@ -40,7 +44,7 @@ The following list provides the MXNet-specific parameters.
Use only if your topology is one of the SSD GluonCV topologies.
.. note::
By default, model conversion API does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest version of Apache MXNet. However, the Apache MXNet loader is required for models trained with a lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the ``--legacy_mxnet_model`` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually recompile Apache MXNet with custom layers and install it in your environment.
@ -77,4 +81,3 @@ See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_conv
* :doc:`Convert MXNet Style Transfer Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`
@endsphinxdirective

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
ONNX format to the OpenVINO Intermediate Representation.
Introduction to ONNX
####################

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
PaddlePaddle format to the OpenVINO Intermediate Representation.
This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on the PaddlePaddle model format.
.. note:: PaddlePaddle models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
@ -11,6 +16,7 @@ Converting PaddlePaddle Model Inference Format
PaddlePaddle inference model includes ``.pdmodel`` (storing model structure) and ``.pdiparams`` (storing model weight). For how to export PaddlePaddle inference model, please refer to the `Exporting PaddlePaddle Inference Model <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/beginner/model_save_load_cn.html>`__ Chinese guide.
To convert a PaddlePaddle model, use the ``mo`` script and specify the path to the input ``.pdmodel`` model file:
.. code-block:: sh

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
PyTorch format to the OpenVINO Intermediate Representation.
This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO IR format using Model Optimizer.
Model Optimizer Python API allows the conversion of PyTorch models using the ``convert_model()`` method.
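A minimal sketch of that conversion path, assuming a torchvision ResNet-50 and the 2023.x ``openvino.tools.mo`` Python API (the model choice and file names are assumptions):

.. code-block:: py

   import torch
   import torchvision
   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   # Any torch.nn.Module works here; ResNet-50 is just an example.
   pt_model = torchvision.models.resnet50(weights=None)
   # example_input gives the converter a concrete tensor to trace the model with.
   ov_model = convert_model(pt_model, example_input=torch.zeros(1, 3, 224, 224))
   # Save the resulting ov.Model as OpenVINO IR files (.xml and .bin).
   serialize(ov_model, "resnet50.xml")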

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from a
TensorFlow format to the OpenVINO Intermediate Representation.
This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
.. note:: TensorFlow models are supported via the :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
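For reference, a hedged sketch of the equivalent Python flow for a SavedModel directory (the directory and output paths are assumptions):

.. code-block:: py

   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   # Point convert_model at a TensorFlow SavedModel directory.
   ov_model = convert_model("./my_saved_model")
   serialize(ov_model, "model.xml")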

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a model from a
TensorFlow Lite format to the OpenVINO Intermediate Representation.
To convert a TensorFlow Lite model, use the ``mo`` script and specify the path to the input ``.tflite`` model file:
.. code-block:: sh

View File

@ -35,6 +35,8 @@
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet
openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model
.. meta::
:description: Get to know conversion methods for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models.
This section provides a set of tutorials that demonstrate conversion methods for specific

View File

@ -4,6 +4,10 @@ With model conversion API you can increase your model's efficiency by providing
@sphinxdirective
.. meta::
:description: Learn how to increase the efficiency of a model with MO by providing an additional shape definition with the input_shape and static_shape parameters.
.. _when_to_specify_input_shapes:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to generate a Low Precision IR - Intermediate
Representation suitable for INT8 low precision inference on CPU
and GPU devices.
Introduction
############

View File

@ -2,11 +2,15 @@
@sphinxdirective
.. meta::
:description: Learn how to convert an ASpIRE Chain TDNN
model from Kaldi to the OpenVINO Intermediate Representation.
.. warning::
Note that OpenVINO support for Kaldi is currently being deprecated and will be removed entirely in the future.
At the beginning, you should `download a pre-trained model <https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz>`__
for the ASpIRE Chain Time Delay Neural Network (TDNN) from the Kaldi project official website.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert GluonCV models
from MXNet to the OpenVINO Intermediate Representation.
.. warning::
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.

View File

@ -2,13 +2,15 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a Style Transfer
model from MXNet to the OpenVINO Intermediate Representation.
.. warning::
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.
This article provides instructions on how to generate a model for style transfer, using the public MXNet neural style transfer sample.
**Step 1**: Download or clone the repository `Zhaw's Neural Style Transfer repository <https://github.com/zhaw/neural_style>`__ with an MXNet neural style transfer sample.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a Faster R-CNN model
from ONNX to the OpenVINO Intermediate Representation.
The instructions below are applicable **only** to the Faster R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model <https://github.com/facebookresearch/maskrcnn-benchmark>`__:
1. Download the pretrained model file from `onnx/models <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn>`__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a pre-trained GPT-2
model from ONNX to the OpenVINO Intermediate Representation.
`Public pre-trained GPT-2 model <https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2>`__ is a large
transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a pre-trained Mask
R-CNN model from ONNX to the OpenVINO Intermediate Representation.
The instructions below are applicable **only** to the Mask R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model <https://github.com/facebookresearch/maskrcnn-benchmark>`__.
1. Download the pretrained model file from `onnx/models <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn>`__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a BERT-NER model
from PyTorch to the OpenVINO Intermediate Representation.
The goal of this article is to present a step-by-step guide on how to convert a PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a Cascade RCNN R-101
model from PyTorch to the OpenVINO Intermediate Representation.
The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
Downloading and Converting Model to ONNX

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a F3Net model
from PyTorch to the OpenVINO Intermediate Representation.
`F3Net <https://github.com/weijun88/F3Net>`__ : Fusion, Feedback and Focus for Salient Object Detection
Cloning the F3Net Repository

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a QuartzNet model
from PyTorch to the OpenVINO Intermediate Representation.
`NeMo project <https://github.com/NVIDIA/NeMo>`__ provides the QuartzNet model.
Downloading the Pre-trained QuartzNet Model

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a RCAN model
from PyTorch to the OpenVINO Intermediate Representation.
`RCAN <https://github.com/yulunzhang/RCAN>`__ : Image Super-Resolution Using Very Deep Residual Channel Attention Networks
Downloading and Converting the Model to ONNX

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a RNN-T model
from PyTorch to the OpenVINO Intermediate Representation.
This guide covers conversion of RNN-T model from `MLCommons <https://github.com/mlcommons>`__ repository. Follow
the instructions below to export a PyTorch model into ONNX, before converting it to IR:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a YOLACT model
from PyTorch to the OpenVINO Intermediate Representation.
You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.
The PyTorch implementation is publicly available in `this GitHub repository <https://github.com/dbolya/yolact>`__.
The YOLACT++ model is not supported, because it uses deformable convolutional layers that cannot be represented in ONNX format.

View File

@ -16,6 +16,10 @@
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
.. meta::
:description: In OpenVINO, ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite
models do not require any prior conversion, while MXNet, Caffe and Kaldi do.
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to convert the Attention OCR
model from the TensorFlow Attention OCR repository to the
OpenVINO Intermediate Representation.
This tutorial explains how to convert the Attention OCR (AOCR) model from the `TensorFlow Attention OCR repository <https://github.com/emedvedev/attention-ocr>`__ to the Intermediate Representation (IR).
Extracting a Model from ``aocr`` Library

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a BERT model
from TensorFlow to the OpenVINO Intermediate Representation.
Pretrained models for BERT (Bidirectional Encoder Representations from Transformers) are
`publicly available <https://github.com/google-research/bert>`__.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a CRNN model
from TensorFlow to the OpenVINO Intermediate Representation.
This tutorial explains how to convert a CRNN model to OpenVINO™ Intermediate Representation (IR).
There are several public versions of TensorFlow CRNN model implementation available on GitHub. This tutorial explains how to convert the model from

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a DeepSpeech model
from TensorFlow to the OpenVINO Intermediate Representation.
`DeepSpeech project <https://github.com/mozilla/DeepSpeech>`__ provides an engine to train speech-to-text models.
Downloading the Pretrained DeepSpeech Model

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert an EfficientDet model
from TensorFlow to the OpenVINO Intermediate Representation.
This tutorial explains how to convert EfficientDet public object detection models to the Intermediate Representation (IR).
.. _efficientdet-to-ir:

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a FaceNet model
from TensorFlow to the OpenVINO Intermediate Representation.
`Public pre-trained FaceNet models <https://github.com/davidsandberg/facenet#pre-trained-models>`__ contain both the training
and inference parts of the graph. Switching between these two states is managed with a placeholder value.
Intermediate Representation (IR) models are intended for inference, which means that the training part is redundant.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a GNMT model
from TensorFlow to the OpenVINO Intermediate Representation.
This tutorial explains how to convert Google Neural Machine Translation (GNMT) model to the Intermediate Representation (IR).
There are several public versions of TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository <https://github.com/tensorflow/nmt>`__ to the IR.

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a Neural Collaborative
Filtering Model from TensorFlow to the OpenVINO Intermediate
Representation.
This tutorial explains how to convert Neural Collaborative Filtering (NCF) model to the OpenVINO Intermediate Representation.
`Public TensorFlow NCF model <https://github.com/tensorflow/models/tree/master/official/recommendation>`__ does not contain pre-trained weights. To convert this model to the IR:

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to convert Object Detection
API Models from TensorFlow to the OpenVINO Intermediate
Representation.
* Starting with the 2022.1 release, model conversion API can convert the TensorFlow Object Detection API Faster and Mask RCNNs topologies differently. By default, model conversion adds operation "Proposal" to the generated IR. This operation needs an additional input to the model with name "image_info" which should be fed with several values describing the preprocessing applied to the input image (refer to the :doc:`Proposal <openvino_docs_ops_detection_Proposal_4>` operation specification for more information). However, this input is redundant for the models trained and inferred with equal size images. Model conversion API can generate IR for such models and insert operation :doc:`DetectionOutput <openvino_docs_ops_detection_DetectionOutput_1>` instead of ``Proposal``. The `DetectionOutput` operation does not require additional model input "image_info". Moreover, for some models the produced inference results are closer to the original TensorFlow model. In order to trigger new behavior, the attribute "operation_to_add" in the corresponding JSON transformation configuration file should be set to value "DetectionOutput" instead of default one "Proposal".
* Starting with the 2021.1 release, model conversion API converts the TensorFlow Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the OpenVINO Runtime using the dedicated reshape API. Refer to the :doc:`Using Shape Inference <openvino_docs_OV_UG_ShapeInference>` guide for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
* To generate IRs for TF 1 SSD topologies, model conversion API creates a number of ``PriorBoxClustered`` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using the dedicated API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations that prevent changing the topology input shape.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a RetinaNet model
from TensorFlow to the OpenVINO Intermediate Representation.
This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).
`Public RetinaNet model <https://github.com/fizyr/keras-retinanet>`__ does not contain pretrained TensorFlow weights.

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a Slim Image
Classification model from TensorFlow to the OpenVINO
Intermediate Representation.
`TensorFlow-Slim Image Classification Model Library <https://github.com/tensorflow/models/tree/master/research/slim/README.md>`__ is a library to define, train and evaluate classification models in TensorFlow. The library contains Python scripts defining the classification topologies together with checkpoint files for several pre-trained classification topologies. To convert a TensorFlow-Slim library model, complete the following steps:
1. Download the TensorFlow-Slim models `git repository <https://github.com/tensorflow/models>`__.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert Wide and Deep Family
models from TensorFlow to the OpenVINO Intermediate Representation.
The Wide and Deep models are a combination of wide and deep parts for memorization and generalization of object features, respectively.
These models can contain different types of object features, such as numerical, categorical, sparse and sequential features. These feature types are specified
through the TensorFlow tf.feature_column API. The table below presents which feature types are supported by the OpenVINO toolkit.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert an XLNet model from
TensorFlow to the OpenVINO Intermediate Representation.
Pretrained models for XLNet (a generalized autoregressive pretraining model for language understanding) are
`publicly available <https://github.com/zihangdai/xlnet>`__.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to convert YOLO models from
TensorFlow to the OpenVINO Intermediate Representation.
This document explains how to convert real-time object detection YOLOv1, YOLOv2, YOLOv3 and YOLOv4 public models to the Intermediate Representation (IR). All YOLO models are originally implemented in the DarkNet framework and consist of two files:
* The ``.cfg`` file with model configurations

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to convert a TensorFlow Language
Model on One Billion Word Benchmark to the OpenVINO Intermediate
Representation.
Downloading a Pre-trained Language Model on One Billion Word Benchmark
######################################################################

View File

@ -2,6 +2,7 @@
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to extract operator attributes in Model Optimizer to
support a custom Caffe operation written only in Python.
.. danger::
The code described here has been **deprecated!** Do not use it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn about deprecated extensions, which enable injecting logic
to the model conversion pipeline without changing the Model
Optimizer core code.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn about a deprecated generic extension in Model Optimizer,
which provides the operation extractor usable for all model
frameworks.
.. danger::
The code described here has been **deprecated!** Do not use it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn about the Op class, that contains operation attributes,
which are set to a node of the graph created during model
conversion with Model Optimizer.
.. danger::
The code described here has been **deprecated!** Do not use it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about deprecated APIs and the Port and Connection classes
in Model Optimizer used for graph traversal and transformation.
.. danger::
The code described here has been **deprecated!** Do not use it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

View File

@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about various base classes for front, middle and back phase
transformations applied during model conversion with Model Optimizer.
.. danger::
The code described here has been **deprecated!** Do not use it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: In OpenVINO Runtime, you can enable Instrumentation and Tracing Technology API (ITT API) of Intel® VTune™
Profiler to control trace data during the execution of the AUTO plugin.
Using Debug Log
###############

View File

@ -1,7 +1,13 @@
# Model Caching Overview {#openvino_docs_OV_UG_Model_caching_overview}
@sphinxdirective
.. meta::
:description: Enabling model caching to automatically export the compiled model
and reuse it can significantly reduce the duration of
model compilation on application startup.
As described in :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`,
a common application flow consists of the following steps:
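Independently of the steps listed in the article, a minimal sketch of enabling the cache is shown below; the cache directory and model path are assumptions:

.. code-block:: py

   from openvino.runtime import Core

   core = Core()
   # Point the runtime at a writable cache directory.
   core.set_property({"CACHE_DIR": "./model_cache"})
   # The first compilation populates the cache; subsequent runs load the
   # compiled blob from disk instead of recompiling the model.
   compiled_model = core.compile_model("model.xml", "CPU")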

View File

@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Explore the examples of operations supported in OpenVINO™ toolkit.
.. toctree::
:maxdepth: 1

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ Runtime Python API includes additional features to
improve user experience and provide a simple yet powerful tool
for Python users.
OpenVINO™ Runtime Python API offers additional features and helpers to enhance user experience. The main goal of the Python API is to provide a user-friendly, simple yet powerful tool for Python users.
Easier Model Compilation
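As a hedged illustration of this kind of shortcut (the model path is an assumption), reading and compiling can be collapsed into a single call:

.. code-block:: py

   import openvino.runtime as ov

   # One step instead of Core() + read_model() + compile_model();
   # the device defaults to AUTO when not specified.
   compiled_model = ov.compile_model("model.xml")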

View File

@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ Runtime Python API enables you to share memory on inputs, hide
the latency with asynchronous calls and implement "postponed return".
.. warning::
All of the methods mentioned are highly dependent on the specific hardware and software setup.
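As one concrete example of hiding latency with asynchronous calls, here is a sketch using ``AsyncInferQueue``; the model path, input layout and frame shapes are assumptions:

.. code-block:: py

   import numpy as np
   from openvino.runtime import AsyncInferQueue, Core

   core = Core()
   compiled_model = core.compile_model("model.xml", "CPU")
   # A pool of four inference requests that run in parallel.
   infer_queue = AsyncInferQueue(compiled_model, 4)

   results = {}

   def on_done(request, frame_id):
       # Copy the output so the request can be reused for the next job.
       results[frame_id] = request.get_output_tensor(0).data.copy()

   infer_queue.set_callback(on_done)
   # Eight random frames standing in for real input data.
   frames = np.random.rand(8, 1, 3, 224, 224).astype(np.float32)
   for i, frame in enumerate(frames):
       infer_queue.start_async({0: frame}, userdata=i)
   infer_queue.wait_all()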

View File

@ -4,6 +4,12 @@
.. _code samples:
.. meta::
:description: OpenVINO™ samples include a collection of simple console applications
that demonstrate how to use the capabilities and features of
the OpenVINO API in an application.
.. toctree::
:maxdepth: 1
:hidden:

View File

@ -8,6 +8,9 @@
troubleshooting_reshape_errors
.. meta::
:description: OpenVINO™ allows changing model input shape during the runtime when the provided input has a different size than the model's input shape.
OpenVINO™ enables you to change model input shape during the application runtime.
It may be useful when you want to feed the model an input that has a different size than the model input shape.
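A minimal sketch of that capability, assuming a single-input model and a static target shape:

.. code-block:: py

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")
   # Set the new input shape before compiling (the values are assumptions).
   model.reshape([1, 3, 448, 448])
   compiled_model = core.compile_model(model, "CPU")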

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: In OpenVINO™, you can use several methods to address the issues
of non-reshape-able models and shape collision, which prevent
normal shape propagation.
How To Avoid Shape Collision
############################

View File

@ -8,6 +8,11 @@
Debugging Auto-Device Plugin <openvino_docs_OV_UG_supported_plugins_AUTO_debugging>
.. meta::
:description: The Automatic Device Selection mode in OpenVINO™ Runtime
detects available devices and selects the optimal processing
unit for inference automatically.
This article introduces how Automatic Device Selection works and how to use it for inference.
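In practice, using it usually amounts to passing ``AUTO`` as the device name, as in this minimal sketch (the model path is an assumption):

.. code-block:: py

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")
   # AUTO detects the available devices and picks the most suitable one.
   compiled_model = core.compile_model(model, "AUTO")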

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: The Automatic Batching Execution mode in OpenVINO Runtime
performs automatic batching to improve device utilization
by grouping inference requests.
The Automatic Batching Execution mode (or Auto-batching for short) performs automatic batching on-the-fly to improve device utilization by grouping inference requests together, without programming effort from the user.
With Automatic Batching, gathering the input and scattering the output from the individual inference requests required for the batch happen transparently, without affecting the application code.
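Two hedged ways of turning it on from application code are sketched below; the model path and GPU availability are assumptions:

.. code-block:: py

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")

   # Explicitly target the virtual batching device stacked on top of the GPU...
   compiled_explicit = core.compile_model(model, "BATCH:GPU")
   # ...or let the THROUGHPUT performance hint enable auto-batching implicitly.
   compiled_hint = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})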

View File

@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ Deployment Manager assembles the model, OpenVINO IR
files, your application, dependencies and creates a deployment
package for a target device.
The OpenVINO™ Deployment Manager is a Python command-line tool that creates a deployment package by assembling the model, OpenVINO IR files, your application, and associated dependencies into a runtime package for your target device. This tool is delivered within the Intel® Distribution of OpenVINO™ toolkit for Linux, Windows and macOS release packages. It is available in the ``<INSTALL_DIR>/tools/deployment_manager`` directory after installation.
This article provides instructions on how to create a package with Deployment Manager and then deploy the package to your target systems.

Some files were not shown because too many files have changed in this diff.