diff --git a/docs/HOWTO/Custom_Layers_Guide.md b/docs/HOWTO/Custom_Layers_Guide.md
index 2315acb0637..d7c63b66c30 100644
--- a/docs/HOWTO/Custom_Layers_Guide.md
+++ b/docs/HOWTO/Custom_Layers_Guide.md
@@ -1,19 +1,19 @@
# Custom Operations Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
-The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks including
-TensorFlow*, Caffe*, MXNet*, Kaldi* and ONNX* file format. The list of supported operations (layers) is different for
+The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks, including
+TensorFlow, Caffe, MXNet, Kaldi, PaddlePaddle, and ONNX. The list of supported operations (layers) is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Layers](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, that is those not included in the list, are not recognized by Model Optimizer out-of-the-box. Therefore, creating Intermediate Representation (IR) for a model using them requires additional steps. This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to plug in your own implementation for existing or completely new operations.
-> **NOTE**: *Layer* is a legacy term for *operation* which came from Caffe\* framework. Currently it is not used.
+> **NOTE**: *Layer* is a legacy term for *operation*, which came from the Caffe framework. It is no longer used.
> Refer to the [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
> for more information on the topic.
## Terms Used in This Guide
-- *Intermediate Representation (IR)* — OpenVINO's Neural Network format used by Inference Engine. It abstracts different frameworks and describs model topology, operations parameters, and weights.
+- *Intermediate Representation (IR)* — OpenVINO's neural network format used by the Inference Engine. It abstracts different frameworks and describes the model topology, operation parameters, and weights.
- *Operation* — an abstract concept of a math function selected for a specific purpose. Operations supported by
OpenVINO™ are listed in the supported operation set provided in the [Available Operations Sets](../ops/opset.md).
@@ -28,8 +28,8 @@ Custom operations, that is those not included in the list, are not recognized by
## Custom Operation Support Overview
There are three steps to support inference of a model with custom operation(s):
-1. Add support for a custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
-the Model Optimizer can generate the IR with the operation.
+1. Add support for a custom operation in [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
+that it can generate the IR with the operation.
2. Create an operation set and implement a custom nGraph operation in it as described in the
[Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md).
-3. Implement a customer operation in one of the [OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md)
+3. Implement a custom operation in one of the [OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md)
@@ -59,7 +59,7 @@ operation. Refer to the "Operation Extractor" section of
> **NOTE**: In some cases you may need to implement some transformation to support the operation. This topic is covered in the "Graph Transformation Extensions" section of [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
-## Custom Operations Extensions for the Inference Engine
+## Custom Operation Extensions for the Inference Engine
Inference Engine provides an extension mechanism to support new operations. This mechanism is described in [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md).
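+
+Once such an extension library is built, loading it in an application might look roughly like the following sketch using the Python API (the library and model file names are illustrative, not from this guide):
+
+```python
+from openvino.runtime import Core
+
+core = Core()
+# Load the compiled extension library implementing the custom operation.
+core.add_extension("libcustom_extension.so")          # illustrative path
+model = core.read_model("model_with_custom_op.xml")   # illustrative IR file
+compiled_model = core.compile_model(model, "CPU")
+```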
@@ -80,8 +80,8 @@ operation and correctly infer output tensor shape and type.
## Enabling Magnetic Resonance Image Reconstruction Model
This chapter provides step-by-step instructions on how to enable the magnetic resonance image reconstruction model implemented in the [repository](https://github.com/rmsouza01/Hybrid-CS-Model-MRI/) using a custom operation on CPU. The example is prepared for a model generated from the repository with hash `2ede2f96161ce70dcdc922371fe6b6b254aafcc8`.
-### Download and Convert the Model to a Frozen TensorFlow\* Model Format
-The original pre-trained model is provided in the hdf5 format which is not supported by OpenVINO directly and needs to be converted to TensorFlow\* frozen model format first.
+### Download and Convert the Model to a Frozen TensorFlow Model Format
+The original pre-trained model is provided in the HDF5 format, which is not supported by OpenVINO directly, so it needs to be converted to the TensorFlow frozen model format first.
1. Download repository `https://github.com/rmsouza01/Hybrid-CS-Model-MRI`:
```bash
@@ -117,11 +117,11 @@ Keras==2.2.4) which should be executed from the root of the cloned repository:
/wnet_20.pb -b 1
```
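+
+For reference, the freezing script mentioned above (executed from the root of the cloned repository) might look roughly like the following sketch. It uses TensorFlow 1.15 and Keras 2.2.4 APIs per the surrounding text; the input file name and model-loading details are illustrative and may differ from the repository's actual code:
+
+```python
+import tensorflow as tf
+from keras import backend as K
+from keras.models import load_model
+
+K.set_learning_phase(0)               # put Keras in inference mode
+model = load_model('wnet_20.hdf5')    # illustrative; may require custom_objects
+session = K.get_session()
+# Fold variables into constants so the graph can be serialized as a frozen .pb.
+frozen = tf.graph_util.convert_variables_to_constants(
+    session, session.graph_def, [out.op.name for out in model.outputs])
+tf.io.write_graph(frozen, '.', 'wnet_20.pb', as_text=False)
+```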
-> **NOTE**: This conversion guide is applicable for the 2021.3 release of OpenVINO and that starting from 2021.4
-> the OpenVINO supports this model out of the box.
+> **NOTE**: This conversion guide applies to the 2021.3 release of OpenVINO. Starting with the 2021.4 release,
+> OpenVINO supports this model out of the box.
Model Optimizer produces the following error:
```bash
@@ -172,12 +172,12 @@ additional parameter `--log_level DEBUG`. It is worth to mention the following l
This is a part of the log of the partial inference phase of the model conversion. See the "Partial Inference" section on
the [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
more information about this phase. Model Optimizer inferred output shape for the unknown operation of type "Complex"
-using a "fallback" to TensorFlow\*. However, it is not enough to generate the IR because Model Optimizer doesn't know
+using a "fallback" to TensorFlow. However, this is not enough to generate the IR, because Model Optimizer doesn't know
which attributes of the operation should be saved to IR. So it is necessary to implement Model Optimizer extensions to
support these operations.
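+
+For a first impression, such an extension is typically a pair of small Python classes: an `Op` subclass describing the operation and a `FrontExtractorOp` subclass extracting it from the framework node. The sketch below assumes the `mo.ops.op.Op` and `mo.front.extractor.FrontExtractorOp` base classes; it is not the exact code developed later in this guide:
+
+```python
+from mo.front.extractor import FrontExtractorOp
+from mo.ops.op import Op
+
+
+class Complex(Op):
+    op = 'Complex'
+
+    def __init__(self, graph, attrs):
+        super().__init__(graph, {
+            'op': self.op,
+            'type': None,          # no dedicated IE operation type yet
+            'in_ports_count': 2,   # real and imaginary parts
+            'out_ports_count': 1,
+        }, attrs)
+
+
+class ComplexExtractor(FrontExtractorOp):
+    op = 'Complex'
+    enabled = True
+
+    @classmethod
+    def extract(cls, node):
+        # Attach the operation attributes to the matched framework node.
+        Complex.update_node_stat(node, {})
+        return cls.enabled
+```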
Before going into the extension development it is necessary to understand what these unsupported operations do according
-to the TensorFlow\* framework specification.
+to the TensorFlow framework specification.
* "Complex" - returns a tensor of complex type constructed from two real input tensors specifying real and imaginary
part of a complex number.
@@ -342,8 +342,9 @@ python3 mri_reconstruction_demo.py \
## Converting Models:
-- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
-- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
-- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
-- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
-- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
+- [Convert Your Caffe Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
+- [Convert Your TensorFlow Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
+- [Convert Your MXNet Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
+- [Convert Your Kaldi Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
+- [Convert Your ONNX Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
+- [Convert Your PaddlePaddle Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md)
diff --git a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
index b43910dd12c..272062f828f 100644
--- a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
+++ b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
@@ -1,9 +1,9 @@
# Supported Framework Layers {#openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers}
-## Caffe\* Supported Layers
+## Caffe Supported Layers
-| Layer Name in Caffe\* | Limitations |
+| Layer Name in Caffe | Limitations |
|:---------- | :----------|
| Axpy | |
| BN | |
@@ -47,10 +47,10 @@
| Tile | |
-## MXNet\* Supported Symbols
+## MXNet Supported Symbols
-| Symbol Name in MXNet\*| Limitations|
+| Symbol Name in MXNet | Limitations |
| :----------| :----------|
| _Plus | |
| _contrib_arange_like | |
@@ -119,7 +119,7 @@
| Concat | |
| Convolution | |
| Crop | "center_crop" = 1 is not supported |
-| Custom | [Custom Layers in the Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md) |
+| Custom | [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md) |
| Deconvolution | |
| DeformableConvolution | |
| DeformablePSROIPooling | |
@@ -149,12 +149,12 @@
| zeros_like | |
-## TensorFlow\* Supported Operations
+## TensorFlow Supported Operations
-Some TensorFlow\* operations do not match to any Inference Engine layer, but are still supported by the Model Optimizer and can be used on constant propagation path. These layers are labeled 'Constant propagation' in the table.
+Some TensorFlow operations do not match any Inference Engine layer, but are still supported by the Model Optimizer and can be used on the constant propagation path. These layers are labeled 'Constant propagation' in the table.
-| Operation Name in TensorFlow\* | Limitations|
+| Operation Name in TensorFlow | Limitations |
| :----------| :----------|
| Abs | |
| Acosh | |
@@ -348,10 +348,10 @@ Some TensorFlow\* operations do not match to any Inference Engine layer, but are
| ZerosLike | |
-## TensorFlow 2 Keras\* Supported Operations
+## TensorFlow 2 Keras Supported Operations
-| Operation Name in TensorFlow 2 Keras\* | Limitations|
+| Operation Name in TensorFlow 2 Keras | Limitations |
| :----------| :----------|
| ActivityRegularization | |
| Add | |
@@ -431,10 +431,10 @@ Some TensorFlow\* operations do not match to any Inference Engine layer, but are
| ZeroPadding2D | |
| ZeroPadding3D | |
-## Kaldi\* Supported Layers
+## Kaldi Supported Layers
-| Symbol Name in Kaldi\*| Limitations|
+| Symbol Name in Kaldi | Limitations |
| :----------| :----------|
| addshift | |
| affinecomponent | |
@@ -478,10 +478,10 @@ Some TensorFlow\* operations do not match to any Inference Engine layer, but are
| timeheightconvolutioncomponent | |
-## ONNX\* Supported Operators
+## ONNX Supported Operators
-| Symbol Name in ONNX\*| Limitations|
+| Symbol Name in ONNX | Limitations |
| :----------| :----------|
| Abs | |
| Acos | |
@@ -621,11 +621,11 @@ Some TensorFlow\* operations do not match to any Inference Engine layer, but are
| Xor | |
-## PaddlePaddle\* Supported Operators
+## PaddlePaddle Supported Operators
paddlepaddle>=2.1
-| Operator Name in PaddlePaddle\*| Limitations|
+| Operator Name in PaddlePaddle | Limitations |
| :----------| :----------|
| adpative_pool2d | 'NHWC' data_layout is not supported |
| arg_max | 'int32' output data_type is not supported |
diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md
index c7ae7277c7f..ddf5a3313c7 100644
--- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md
@@ -1,9 +1,9 @@
-# Converting a Paddle* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
+# Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
-A summary of the steps for optimizing and deploying a model trained with Paddle\*:
+A summary of the steps for optimizing and deploying a model trained with PaddlePaddle:
-1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Paddle\*.
-2. [Convert a Paddle\* Model](#Convert_From_Paddle) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
+1. [Configure Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for PaddlePaddle.
+2. [Convert a PaddlePaddle Model](#Convert_From_Paddle) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
3. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in your application to deploy the model in the target environment.
@@ -29,11 +29,11 @@ A summary of the steps for optimizing and deploying a model trained with Paddle\
> **NOTE:** The verified models are exported from the repository of branch release/2.1.
-## Convert a Paddle* Model
+## Convert a PaddlePaddle Model
-To convert a Paddle\* model:
+To convert a PaddlePaddle model:
-1. Activate environment with installed OpenVINO if needed
+1. Activate the environment where OpenVINO™ is installed, if needed.
2. Use the `mo` script to simply convert a model, specifying the framework, the path to the input model `.pdmodel` file and the path to an output directory with write permissions:
```sh
mo --input_model .pdmodel --output_dir --framework=paddle
@@ -44,13 +44,13 @@ Parameters to convert your model:
* [Framework-agnostic parameters](Converting_Model.md): These parameters are used to convert a model trained with any supported framework.
> **NOTE:** `--scale`, `--scale_values`, `--mean_values` are not supported in the current version of mo_paddle.
-### Example of Converting a Paddle* Model
-Below is the example command to convert yolo v3 Paddle\* network to OpenVINO IR network with Model Optimizer.
+### Example of Converting a PaddlePaddle Model
+Below is an example command for converting a YOLOv3 PaddlePaddle network to an OpenVINO IR network with Model Optimizer.
```sh
mo --model_name yolov3_darknet53_270e_coco --output_dir --framework=paddle --data_type=FP32 --reverse_input_channels --input_shape=[1,3,608,608],[1,2],[1,2] --input=image,im_shape,scale_factor --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1 --input_model=yolov3.pdmodel
```
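+
+Once converted, the resulting IR can be loaded with the OpenVINO Runtime Python API to quickly verify it, for example (a sketch; the device is illustrative, and the IR file name follows from `--model_name` above):
+
+```python
+from openvino.runtime import Core
+
+core = Core()
+model = core.read_model('yolov3_darknet53_270e_coco.xml')
+compiled_model = core.compile_model(model, 'CPU')
+# Print the input names to confirm the model loaded as expected.
+print([inp.get_any_name() for inp in compiled_model.inputs])
+```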
-## Supported Paddle\* Layers
+## Supported PaddlePaddle Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.
## Frequently Asked Questions (FAQ)
diff --git a/docs/MO_DG/prepare_model/convert_model/Converting_Model.md b/docs/MO_DG/prepare_model/convert_model/Converting_Model.md
index b0fdd565f19..468688f3f4e 100644
--- a/docs/MO_DG/prepare_model/convert_model/Converting_Model.md
+++ b/docs/MO_DG/prepare_model/convert_model/Converting_Model.md
@@ -37,6 +37,7 @@ Framework-specific parameters for:
* [TensorFlow](Convert_Model_From_TensorFlow.md)
* [MXNet](Convert_Model_From_MxNet.md)
* [ONNX](Convert_Model_From_ONNX.md)
+* [PaddlePaddle](Convert_Model_From_Paddle.md)
* [Kaldi](Convert_Model_From_Kaldi.md)
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
index d3252704549..dd98faefda7 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
@@ -257,11 +257,13 @@ More information on how to develop middle transformations and dedicated API desc
[Middle Phase Transformations](#middle-phase-transformations).
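+
+As a rough sketch, a middle transformation is typically a small Python class like the following (assuming the `mo.middle.replacement.MiddleReplacementPattern` base class; the class and operation names are illustrative):
+
+```python
+from mo.graph.graph import Graph
+from mo.middle.replacement import MiddleReplacementPattern
+
+
+class MyMiddleTransformation(MiddleReplacementPattern):
+    enabled = True  # registered and executed by default
+
+    def find_and_replace_pattern(self, graph: Graph):
+        # Find nodes of interest and rewrite the graph in place.
+        for node in graph.get_op_nodes(op='SomeOp'):  # 'SomeOp' is illustrative
+            pass
+```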
### NHWC to NCHW Layout Change
-There are several middle transformations responsible for changing model layout from NHWC to NCHW. These transformations
-are triggered by default for TensorFlow\* models only because it is the only framework with Convolution operations in
-NHWC layout. This layout change is disabled if the model does not have operations that OpenVINO&trade needs to execute in
-NCHW layout, for example, Convolutions in NHWC layout. It is still possible to force Model Optimizer to do layout change
-using `--disable_nhwc_to_nchw` command-line parameter.
+
+There are several middle transformations responsible for changing the model layout from NHWC to NCHW. These transformations are triggered by default for TensorFlow models only, because TensorFlow is the only supported framework with Convolution operations in the NHWC layout.
+
+This layout change is disabled automatically if the model does not have operations that OpenVINO&trade; needs to execute in the NCHW layout, for example, Convolutions in the NHWC layout.
+
+It is still possible to disable this layout change using the `--disable_nhwc_to_nchw` command-line parameter, although doing so is not advised.
+
The layout change is a complex problem and detailed explanation of it is out of this document scope. A very brief
explanation of this process is provided below:
@@ -741,8 +743,7 @@ sub-graph of the original graph isomorphic to the specified pattern.
2. [Specific Operation Front Phase Transformations](#specific-operation-front-phase-transformations) triggered for the
node with a specific `op` attribute value.
3. [Generic Front Phase Transformations](#generic-front-phase-transformations).
-4. Manually enabled transformation defined with a JSON configuration file (for TensorFlow\*, ONNX\* and MXNet\* models
-only) specified using the `--transformations_config` command line parameter:
+4. Manually enabled transformations defined with a JSON configuration file (for TensorFlow, ONNX, MXNet, and PaddlePaddle models), specified using the `--transformations_config` command-line parameter:
1. [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformation).
2. [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations).
3. [Generic Front Phase Transformations Enabled with Transformations Configuration File](#generic-transformations-config-front-phase-transformations).
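+
+For illustration, a generic front phase transformation (item 3 above) is usually a small Python class as well. The sketch below assumes the `mo.front.common.replacement.FrontReplacementOp` base class; the operation name is illustrative and the body is intentionally left as a skeleton:
+
+```python
+from mo.front.common.replacement import FrontReplacementOp
+from mo.graph.graph import Graph, Node
+
+
+class MyCustomOpFrontReplacer(FrontReplacementOp):
+    op = 'MyCustomOp'  # framework operation type to match (illustrative)
+    enabled = True
+
+    def replace_op(self, graph: Graph, node: Node):
+        # Build a replacement sub-graph from existing operations and return
+        # the ids of the nodes producing the outputs of the replaced one,
+        # e.g. `return [new_node.id]`. Left unimplemented in this sketch.
+        raise NotImplementedError
+```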
diff --git a/docs/OV_Runtime_UG/network_state_intro.md b/docs/OV_Runtime_UG/network_state_intro.md
index 2a04dd05dc9..5d39b56d32d 100644
--- a/docs/OV_Runtime_UG/network_state_intro.md
+++ b/docs/OV_Runtime_UG/network_state_intro.md
@@ -15,7 +15,7 @@ The section additionally provides small examples of stateful network and code to
between data portions should be addressed. For that, networks save some data between inferences - state. When one dependent sequence is over,
state should be reset to initial value and new sequence can be started.
- Several frameworks have special API for states in networks. For example, Keras have special option for RNNs `stateful` that turns on saving state
+ Several frameworks have a special API for states in networks. For example, Keras has a special option for RNNs, `stateful`, that turns on saving the state
between inferences. Kaldi contains special specifier `Offset` to define time offset in a network.
OpenVINO also contains special API to simplify work with networks with states. State is automatically saved between inferences,
@@ -196,9 +196,7 @@ sink from `ngraph::Function` after deleting the node from graph with the `delete
Let's take an IR from the previous section example. The example below demonstrates inference of two independent sequences of data. State should be reset between these sequences.
-One infer request and one thread
-will be used in this example. Using several threads is possible if you have several independent sequences. Then each sequence can be processed in its own infer
-request. Inference of one sequence in several infer requests is not recommended. In one infer request state will be saved automatically between inferences, but
+One infer request and one thread will be used in this example. Using several threads is possible if you have several independent sequences; then each sequence can be processed in its own infer request. Inference of one sequence in several infer requests is not recommended. In one infer request, state will be saved automatically between inferences, but
if the first step is done in one infer request and the second in another, state should be set in new infer request manually (using `IVariableState::SetState` method).
@snippet openvino/docs/snippets/InferenceEngine_network_with_state_infer.cpp part1
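+
+For reference, a similar flow in the Python API might look roughly like the following sketch (the model file, device, and input data are illustrative; `query_state` and `reset` are per the infer request API):
+
+```python
+import numpy as np
+from openvino.runtime import Core
+
+core = Core()
+model = core.read_model('model_with_state.xml')      # illustrative IR file
+compiled_model = core.compile_model(model, 'CPU')
+infer_request = compiled_model.create_infer_request()
+
+# Two independent sequences of illustrative input data.
+sequences = [np.random.rand(5, 1, 16).astype(np.float32) for _ in range(2)]
+
+for sequence in sequences:
+    for data in sequence:                            # state is kept between infers
+        results = infer_request.infer({0: data})
+    for state in infer_request.query_state():        # reset state between sequences
+        state.reset()
+```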
@@ -213,7 +211,7 @@ Decsriptions can be found in [Samples Overview](./Samples_Overview.md)
If the original framework does not have a special API for working with states, after importing the model, OpenVINO representation will not contain Assign/ReadValue layers. For example, if the original ONNX model contains RNN operations, IR will contain TensorIterator operations and the values will be obtained only after execution of the whole TensorIterator primitive. Intermediate values from each iteration will not be available. To enable you to work with these intermediate values of each iteration and receive them with a low latency after each infer request, special LowLatency and LowLatency2 transformations were introduced.
-### How to get TensorIterator/Loop operaions from different frameworks via ModelOptimizer.
+### How to Get TensorIterator/Loop Operations from Different Frameworks via Model Optimizer
**ONNX and frameworks supported via ONNX format:** *LSTM, RNN, GRU* original layers are converted to the TensorIterator operation. TensorIterator body contains LSTM/RNN/GRU Cell. Peepholes, InputForget modifications are not supported, sequence_lengths optional input is supported.
*ONNX Loop* layer is converted to the OpenVINO Loop operation.
diff --git a/docs/OV_Runtime_UG/openvino_intro.md b/docs/OV_Runtime_UG/openvino_intro.md
index e6ce0f9c6c3..fd8e88f8c15 100644
--- a/docs/OV_Runtime_UG/openvino_intro.md
+++ b/docs/OV_Runtime_UG/openvino_intro.md
@@ -26,10 +26,10 @@
@endsphinxdirective
## Introduction
-OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read the Intermediate Representation (IR), ONNX, PDPD file formats and execute the model on devices.
-
-OpenVINO runtime uses a plugin architecture. Inference plugin is a software component that contains complete implementation for inference on a certain Intel® hardware device: CPU, GPU, VPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs to configure device or interoperability API between OpenVINO Runtime and underlaying plugin backend.
+OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), ONNX, or PaddlePaddle model and execute it on your preferred devices.
+OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain a complete implementation of inference on a particular Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or for interoperability between OpenVINO Runtime and the underlying plugin backend.
+
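+A minimal usage sketch of this API in Python (the model file name is illustrative):
+
+```python
+from openvino.runtime import Core
+
+core = Core()
+print(core.available_devices)             # plugins discovered on this machine
+model = core.read_model('model.xml')      # IR; .onnx and .pdmodel files also work
+compiled_model = core.compile_model(model, 'CPU')
+```
+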
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
diff --git a/docs/_static/images/ov_chart.png b/docs/_static/images/ov_chart.png
index f91923443c3..fa25daf3601 100644
--- a/docs/_static/images/ov_chart.png
+++ b/docs/_static/images/ov_chart.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:f9a0d138f7f6d2546f0e48d9240d6a90aec18dc6d8092e1082b2fc3125f1ce3d
-size 108434
+oid sha256:83f0013e02ea792b553b5bd0a5630fb456a6fefc8dd701cd4430fc83d75cbff7
+size 78205