several ports to master
Fixing issues for Ecosystem Overview, Media Processing and Extensibility. Porting: #13098 #13086 #13108
This commit is contained in:
parent
0db641fc51
commit
be2180654e
@@ -25,7 +25,7 @@ A solution for Model Developers and Independent Software Vendors to use secure p
 
 More resources:
 * [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
-* [GitHub]https://github.com/openvinotoolkit/security_addon)
+* [GitHub](https://github.com/openvinotoolkit/security_addon)
 
 
 ### OpenVINO™ integration with TensorFlow (OVTF)
@@ -40,7 +40,7 @@ More resources:
 
 A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
 
 More resources:
-* [documentation on GitHub](https://openvinotoolkit.github.io/dlstreamer_gst/)
+* [documentation on GitHub](https://dlstreamer.github.io/index.html)
 * [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
 
 ### DL Workbench
@@ -61,7 +61,7 @@ More resources:
 
 An online, interactive video and image annotation tool for computer vision purposes.
 
 More resources:
-* [documentation on GitHub](https://openvinotoolkit.github.io/cvat/docs/)
+* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
 * [web application](https://cvat.org/)
 * [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
 * [GitHub](https://github.com/openvinotoolkit/cvat)
@@ -70,7 +70,7 @@ To eliminate operation, OpenVINO™ has special method that considers all limita
 
 In case of a successful replacement, `ov::replace_output_update_name()` automatically preserves the friendly name and runtime info.
 
-## Transformations types <a name="transformations_types"></a>
+## Transformations types <a name="transformations-types"></a>
 
 OpenVINO™ Runtime has three main transformation types:
@@ -91,7 +91,7 @@ Transformation library has two internal macros to support conditional compilatio
 
 When developing a transformation, you need to follow these transformation rules:
 
-###1. Friendly Names
+### 1. Friendly Names
 
 Each `ov::Node` has a unique name and a friendly name. In transformations we care only about the friendly name because it represents the name from the model.
 To avoid losing the friendly name when replacing a node with another node or subgraph, set the original friendly name on the last node of the replacing subgraph. See the example below.
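The friendly-name rule above can be illustrated with a small self-contained sketch. Plain Python is used instead of the OpenVINO C++ API, and the `Node` class and names below are hypothetical stand-ins, not the real types:

```python
# Toy stand-in for ov::Node, only to illustrate the naming rule.
class Node:
    def __init__(self, friendly_name):
        self.friendly_name = friendly_name

def replace_with_subgraph(old_node, subgraph_nodes):
    """Replace old_node with a chain of new nodes, keeping the model-visible name.

    Per the rule above, the ORIGINAL friendly name is moved to the LAST
    node of the replacing subgraph, so consumers still see the old name.
    """
    subgraph_nodes[-1].friendly_name = old_node.friendly_name
    return subgraph_nodes

old = Node("conv1/add")
new_chain = replace_with_subgraph(old, [Node("mul"), Node("add")])
print(new_chain[-1].friendly_name)  # conv1/add
```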
@@ -100,7 +100,7 @@ To avoid losing friendly name when replacing node with other node or subgraph, s
 
 In more advanced cases, when the replaced operation has several outputs and we add additional consumers to its outputs, we decide how to set the friendly name by arrangement.
 
-###2. Runtime Info
+### 2. Runtime Info
 
 Runtime info is a map `std::map<std::string, ov::Any>` located inside the `ov::Node` class. It represents additional attributes of the `ov::Node`.
 These attributes can be set by users or by plugins, and when executing a transformation that changes `ov::Model`, we need to preserve these attributes, as they will not be automatically propagated.
@@ -111,9 +111,9 @@ Currently, there is no mechanism that automatically detects transformation types
 
 When a transformation has multiple fusions or decompositions, `ov::copy_runtime_info` must be called multiple times for each case.
 
-**Note**: copy_runtime_info removes rt_info from destination nodes. If you want to keep it, you need to specify them in source nodes like this: copy_runtime_info({a, b, c}, {a, b})
+> **NOTE**: `copy_runtime_info` removes `rt_info` from destination nodes. If you want to keep it, you need to specify them in source nodes like this: `copy_runtime_info({a, b, c}, {a, b})`
 
-###3. Constant Folding
+### 3. Constant Folding
 
 If your transformation inserts constant sub-graphs that need to be folded, do not forget to use `ov::pass::ConstantFolding()` after your transformation or call constant folding directly for the operation.
 The example below shows how a constant subgraph can be constructed.
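The keep-it-in-sources idiom from the note above can be modeled with plain dictionaries. This is only a sketch of the described replace-not-merge behavior (with an assumed last-writer-wins rule on key collisions), not the real `ov::copy_runtime_info` implementation:

```python
def copy_runtime_info(sources, targets):
    """Toy model of the note above: each target's rt_info is REPLACED by
    the merged rt_info of the source nodes, so anything a target should
    keep must also be listed among the sources."""
    merged = {}
    for node in sources:
        merged.update(node["rt_info"])
    for node in targets:
        node["rt_info"] = dict(merged)

a = {"rt_info": {"fused_names": "a"}}
b = {"rt_info": {"fused_names": "b"}}
c = {"rt_info": {"origin": "c"}}

# copy_runtime_info({a, b, c}, {a, b}): a and b are both sources and
# destinations, so their attributes survive the copy instead of being wiped.
copy_runtime_info([a, b, c], [a, b])
print(a["rt_info"])  # {'fused_names': 'b', 'origin': 'c'}
```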
@@ -140,8 +140,8 @@ In transformation development process:
 
 ## Using pass manager <a name="using_pass_manager"></a>
 
 `ov::pass::Manager` is a container class that can store a list of transformations and execute them. The main idea of this class is to provide a high-level representation for a grouped list of transformations.
-It can register and apply any [transformation pass](#transformations_types) on model.
-In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how_to_debug_transformations) section).
+It can register and apply any [transformation pass](#transformations-types) on a model.
+In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how-to-debug-transformations) section).
 
 The example below shows basic usage of `ov::pass::Manager`:
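The grouping idea behind `ov::pass::Manager` can be sketched as a toy container. This is a conceptual Python mock, not the OpenVINO API; the optional `profile` flag only mimics the spirit of the per-pass timing that the debug section of this document describes:

```python
import time

class PassManager:
    """Minimal mock of a pass container: registers passes and runs them
    in order; can optionally time each pass like a profiling mode."""
    def __init__(self, profile=False):
        self.passes = []
        self.profile = profile

    def register_pass(self, name, fn):
        self.passes.append((name, fn))

    def run_passes(self, model):
        for name, fn in self.passes:
            start = time.perf_counter()
            model = fn(model)
            if self.profile:
                print(f"{name}: {(time.perf_counter() - start) * 1e3:.3f} ms")
        return model

# A "model" here is just a list of op names, to keep the sketch runnable.
manager = PassManager()
manager.register_pass("DropNoOps", lambda m: [op for op in m if op != "noop"])
manager.register_pass("Uppercase", lambda m: [op.upper() for op in m])
print(manager.run_passes(["conv", "noop", "relu"]))  # ['CONV', 'RELU']
```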
@@ -151,7 +151,7 @@ Another example shows how multiple matcher passes can be united into single Grap
 
 @snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager2
 
-## How to debug transformations <a name="how_to_debug_transformations"></a>
+## How to debug transformations <a name="how-to-debug-transformations"></a>
 
 If you are using `ngraph::pass::Manager` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:
 
@@ -160,7 +160,7 @@ OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformati
 OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.
 ```
 
-> **Note**: Make sure that you have dot installed on your machine; otherwise, it will silently save only dot file without svg file.
+> **NOTE**: Make sure that you have `dot` installed on your machine; otherwise, it will silently save only a dot file without an svg file.
 
 ## See Also
@@ -1,4 +1,4 @@
-# Build Plugin Using CMake* {#openvino_docs_ie_plugin_dg_plugin_build}
+# Build Plugin Using CMake {#openvino_docs_ie_plugin_dg_plugin_build}
 
 OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.
 
@@ -95,6 +95,6 @@ Returns a current value for a configuration key with the name `name`. The method
 
 @snippet src/template_executable_network.cpp executable_network:get_config
 
-This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](../_inference_engine_tools_compile_tool_README.html)).
+This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](@ref openvino_inference_engine_tools_compile_tool_README)).
 
 The next step in plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class.
@@ -47,13 +47,15 @@ Inference Engine plugin dynamic library consists of several main components:
    on several task executors based on a device-specific pipeline structure.
 
 > **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin
 > development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
 > at `<openvino source dir>/src/plugins/template`.
 
 Detailed guides
 -----------------------
 
-* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake\*
+* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake
 * Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
 * [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
 * [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
@@ -81,7 +81,7 @@ The function accepts a const shared pointer to `ov::Model` object and performs t
 
 1. Deep copies a const object to a local object, which can later be modified.
 2. Applies common and plugin-specific transformations on a copied graph to make the graph more friendly to hardware operations. For details on how to write custom plugin-specific transformations, refer to the [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide. See detailed topics about network representation:
-    * [Intermediate Representation and Operation Sets](../_docs_MO_DG_IR_and_opsets.html)
+    * [Intermediate Representation and Operation Sets](@ref openvino_docs_MO_DG_IR_and_opsets)
     * [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks).
 
 @snippet template/src/template_plugin.cpp plugin:transform_network
@@ -32,7 +32,7 @@ Thus we can define:
 
 - **Scale** as `(output_high - output_low) / (levels-1)`
 - **Zero-point** as `-output_low / (output_high - output_low) * (levels-1)`
 
-**Note**: During the quantization process the values `input_low`, `input_high`, `output_low`, `output_high` are selected so that to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
+> **NOTE**: During the quantization process, the values `input_low`, `input_high`, `output_low`, and `output_high` are selected so as to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
 
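The **Scale** and **Zero-point** definitions above can be checked numerically with a few lines of plain Python. The 8-bit range below is an assumed example, not taken from this diff:

```python
def quantization_params(output_low, output_high, levels):
    """Compute scale and zero-point exactly as defined above."""
    scale = (output_high - output_low) / (levels - 1)
    zero_point = -output_low / (output_high - output_low) * (levels - 1)
    return scale, zero_point

# Assumed example: a range [-1.0, 1.55] quantized to 256 levels.
scale, zero_point = quantization_params(-1.0, 1.55, 256)

# Per the note above, float 0.0 must map exactly to the (rounded)
# zero-point and back again:
q = round(0.0 / scale + zero_point)
assert abs((q - zero_point) * scale - 0.0) < 1e-9
print(round(scale, 6), round(zero_point, 6))  # 0.01 100.0
```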
 ## Quantization specifics and restrictions
 In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
@@ -54,4 +54,4 @@ Attributes usage by transformations:
 
 | IntervalsAlignment | AlignQuantizationIntervals | FakeQuantizeDecompositionTransformation |
 | QuantizationAlignment | AlignQuantizationParameters | FakeQuantizeDecompositionTransformation |
 
-> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
+> **NOTE**: The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
@@ -22,7 +22,7 @@ The table of transformations and used attributes:
 
 | AlignQuantizationIntervals | IntervalsAlignment | PrecisionPreserved |
 | AlignQuantizationParameters | QuantizationAlignment | PrecisionPreserved, PerTensorQuantization |
 
-> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different
+> **NOTE**: The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
 
 Common markup transformations can be decomposed into simpler utility markup transformations. The order of Markup utility transformations is not important:
 * [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)
@@ -46,4 +46,4 @@ Changes in the example model after main transformation:
   - dequantization operations.
 * Dequantization operations were moved via precision preserved (`concat1` and `concat2`) and quantized (`convolution2`) operations.
 
-> **Note:** the left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1`output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.
+> **NOTE**: The left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1` output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.
@@ -59,8 +59,7 @@ edge attributes if needed. Meanwhile, most manipulations with nodes connections
 is strongly not recommended.
 
 Further details and examples related to a model representation in memory are provided in the sections below, in a context
-for a better explanation. Also, for more information on how to use ports and connections, refer to the [Graph Traversal and Modification Using `Port`s and
-`Connection`s](#graph-ports-and-conneсtions) section.
+for a better explanation. Also, for more information on how to use ports and connections, refer to the [Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) section.
 
 ## Model Conversion Pipeline <a name="model-conversion-pipeline"></a>
 A model conversion pipeline can be represented with the following diagram:
@@ -138,7 +137,7 @@ During the front phase, Model Optimizer knows shape of the model inputs and cons
 transformation. For example, the transformation `extensions/front/TopKNormalize.py` removes an attribute `k` from a
 `TopK` node and adds an input constant with the value `k`. The transformation is needed to convert a `TopK` operation
 as it comes from frameworks, where the number of output elements is defined as an attribute of the operation, to the
-OpenVINO™ [TopK](../../../ops/sort/TopK_3.md) operation semantic, which requires this value to be a separate input.
+OpenVINO [TopK](../../../ops/sort/TopK_3.md) operation semantic, which requires this value to be a separate input.
 
 It is important to mention that sometimes it seems like a transformation cannot be implemented during the front phase
 because the actual values of inputs or shapes are needed. In fact, manipulations of shapes or values can be implemented
@@ -231,7 +230,7 @@ available in the `mo/ops/reshape.py` file):
 ```
 
 Methods `in_port()` and `out_port()` of the `Node` class are used to get and set data node attributes. For more information on
-how to use them, refer to the [Graph Traversal and Modification Using Ports and Connections](#graph-ports-and-conneсtions) section.
+how to use them, refer to the [Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) section.
 
 > **NOTE**: A shape inference function should perform output shape calculation in the original model layout. For
 > example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as
@@ -246,8 +245,7 @@ how to use them, refer to the [Graph Traversal and Modification Using Ports and
 The middle phase starts after partial inference. At this phase, a graph contains data nodes, and the output shapes of all
 operations in the graph have been calculated. Any transformation implemented at this stage must update the `shape`
 attribute for all newly added operations. It is highly recommended to use the API described in the
-[Graph Traversal and Modification Using Ports and Connections](#graph-ports-and-conneсtions) because modification of
-a graph using this API causes automatic re-inference of affected nodes as well as necessary data nodes creation.
+[Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) because modification of a graph using this API causes automatic re-inference of affected nodes as well as necessary data nodes creation.
 
 More information on how to develop middle transformations and dedicated API description is provided in the
 [Middle Phase Transformations](#middle-phase-transformations).
@@ -311,7 +309,9 @@ with the `backend_attrs()` or `supported_attrs()` of the `Op` class used for a g
 information on how the operation attributes are saved to XML, refer to the function `prepare_emit_ir()` in
 the `mo/pipeline/common.py` file and [Model Optimizer Operation](#extension-operation) section.
 
-## Graph Traversal and Modification Using Ports and Connections <a name="graph-ports-and-conneсtions"></a>
+## Graph Traversal and Modification Using Ports and Connections <a name="ports-conneсtions"></a>
+
+@anchor graph-ports-and-conneсtions
 
 There are three APIs for a graph traversal and transformation used in the Model Optimizer:
 1. The API provided with the `networkx` Python library for the `networkx.MultiDiGraph` class, which is the base class for
 the `mo.graph.graph.Graph` object. For more details, refer to the [Model Representation in Memory](#model-representation-in-memory) section.
@@ -410,8 +410,7 @@ op3.out_port(0).connect(op4.in_port(1))
 
 
 
-> **NOTE**: For a full list of available methods, refer to the `Node` class implementation in the `mo/graph/graph.py` and `Port` class implementation in the
-`mo/graph/port.py` files.
+> **NOTE**: For a full list of available methods, refer to the `Node` class implementation in the `mo/graph/graph.py` file and the `Port` class implementation in the `mo/graph/port.py` file.
 
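The `op3.out_port(0).connect(op4.in_port(1))` call in the hunk above can be modeled with a toy port structure. These classes are hypothetical simplifications for illustration; the real implementations live in `mo/graph/graph.py` and `mo/graph/port.py`:

```python
class Port:
    """Toy model of a port: an output port can feed many input ports,
    while an input port has exactly one source."""
    def __init__(self, node, idx):
        self.node, self.idx = node, idx
        self.source = None      # used by input ports
        self.consumers = []     # used by output ports

    def connect(self, in_port):
        in_port.source = self
        self.consumers.append(in_port)

class Node:
    def __init__(self, name, in_ports=1, out_ports=1):
        self.name = name
        self.inputs = [Port(self, i) for i in range(in_ports)]
        self.outputs = [Port(self, i) for i in range(out_ports)]

    def in_port(self, i):
        return self.inputs[i]

    def out_port(self, i):
        return self.outputs[i]

# Mirrors the connect() call quoted in the diff above.
op3, op4 = Node("op3"), Node("op4", in_ports=2)
op3.out_port(0).connect(op4.in_port(1))
print(op4.in_port(1).source.node.name)  # op3
```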
 ### Connections <a name="intro-conneсtions"></a>
 Connection is a concept introduced to easily and reliably perform graph modifications. Connection corresponds to a
@@ -91,7 +91,7 @@
 
    Intel® Deep Learning Streamer <openvino_docs_dlstreamer>
    openvino_docs_gapi_gapi_intro
-   OpenCV* Developer Guide <https://docs.opencv.org/master/>
+   OpenCV Developer Guide <https://docs.opencv.org/master/>
    OpenCL™ Developer Guide <https://software.intel.com/en-us/openclsdk-devguide>
    OneVPL Developer Guide <https://www.intel.com/content/www/us/en/developer/articles/release-notes/oneapi-video-processing-library-release-notes.html>
 
@@ -10,7 +10,7 @@ In this tutorial you will learn:
 ## Prerequisites
 This sample requires:
 
-* PC with GNU/Linux* or Microsoft Windows* (Apple macOS* is supported but was not tested)
+* PC with GNU/Linux or Microsoft Windows (Apple macOS is supported but was not tested)
 * OpenCV 4.2 or higher built with [Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (building with [Intel® TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial) is a plus)
 * The following pre-trained models from the [Open Model Zoo](@ref omz_models_group_intel)
   * [face-detection-adas-0001](@ref omz_models_model_face_detection_adas_0001)
@@ -23,7 +23,6 @@ We will implement a simple face beautification algorithm using a combination of
 
 
 
-Briefly the algorithm is described as follows:
 
 Briefly the algorithm is described as follows:
 - Input image \f$I\f$ is passed to unsharp mask and bilateral filters
@@ -9,8 +9,8 @@ In this tutorial you will learn:
 ## Prerequisites
 This sample requires:
 
-* PC with GNU/Linux* or Microsoft Windows* (Apple macOS* is supported but was not tested)
-* OpenCV 4.2 or higher built with [Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (building with [Intel® TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial)
+* PC with GNU/Linux or Microsoft Windows (Apple macOS is supported but was not tested)
+* OpenCV 4.2 or higher built with [Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (building with [Intel® TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial) is a plus)
 * The following pre-trained models from the [Open Model Zoo](@ref omz_models_group_intel):
   * [face-detection-adas-0001](@ref omz_models_model_face_detection_adas_0001)
   * [age-gender-recognition-retail-0013](@ref omz_models_model_age_gender_recognition_retail_0013)