DOCS shift to rst - Opset R (#17159)

* ops to rst

* sphinx transition

* try html tag

* try comment

* try code directive

* try code directive

* try highlight

* try console directive

* try line directive

* add highlight for code

* another directive

* introduce console directive

* add code format
Tatiana Savina 2023-04-24 11:02:09 +02:00 committed by GitHub
parent 656d7fe380
commit b3ea6ceefa
28 changed files with 2279 additions and 2033 deletions

View File

@ -1,22 +1,25 @@
# ReLU {#openvino_docs_ops_activation_ReLU_1}
@sphinxdirective
**Versioned name**: *ReLU-1*
**Category**: *Activation function*
**Short description**: ReLU element-wise activation function. ([Reference](http://caffe.berkeleyvision.org/tutorial/layers/relu.html))
**Short description**: ReLU element-wise activation function. (`Reference <http://caffe.berkeleyvision.org/tutorial/layers/relu.html>`__).
**Detailed description**: [Reference](https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#rectified-linear-units)
**Detailed description**: `Reference <https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#rectified-linear-units>`__.
**Attributes**: *ReLU* operation has no attributes.
**Mathematical Formulation**
For each element from the input tensor, the operation calculates the corresponding
element in the output tensor with the following formula:
\f[
For each element from the input tensor, the operation calculates the corresponding element in the output tensor with the following formula:
.. math::
Y_{i}^{( l )} = max(0,\ Y_{i}^{( l - 1 )})
\f]
**Inputs**:
@ -28,8 +31,9 @@ For each element from the input tensor calculates corresponding
**Example**
```xml
<layer ... type="ReLU">
.. code-block:: cpp
<layer ... type="ReLU">
<input>
<port id="0">
<dim>256</dim>
@ -42,6 +46,6 @@ For each element from the input tensor calculates corresponding
<dim>56</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
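As a quick numeric cross-check of the formula above, a minimal NumPy sketch of the element-wise ReLU (illustrative only, not part of the IR specification):

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    # Y_i = max(0, X_i), applied element-wise
    return np.maximum(0.0, x)

print(relu(np.array([-1.5, 0.0, 2.3])))  # [0.  0.  2.3]
```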

View File

@ -1,12 +1,16 @@
# Round {#openvino_docs_ops_arithmetic_Round_5}
@sphinxdirective
**Versioned name**: *Round-5*
**Category**: *Arithmetic unary*
**Short description**: *Round* performs element-wise round operation with given tensor.
**Detailed description**: Operation takes one input tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value. Halves are rounded to the nearest even integer if the `mode` attribute is `half_to_even`, or away from zero if the `mode` attribute is `half_away_from_zero`.
**Detailed description**: Operation takes one input tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value. Halves are rounded to the nearest even integer if the ``mode`` attribute is ``half_to_even``, or away from zero if the ``mode`` attribute is ``half_away_from_zero``.
.. code-block:: cpp
Input = [-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5]
@ -18,10 +22,10 @@
* *mode*
* **Description**: If set to `half_to_even`, halves are rounded to the nearest even integer; if set to `half_away_from_zero`, halves are rounded away from zero.
* **Range of values**: `half_to_even` or `half_away_from_zero`
* **Description**: If set to ``half_to_even``, halves are rounded to the nearest even integer; if set to ``half_away_from_zero``, halves are rounded away from zero.
* **Range of values**: ``half_to_even`` or ``half_away_from_zero``
* **Type**: string
* **Default value**: `half_to_even`
* **Default value**: ``half_to_even``
* **Required**: *no*
**Inputs**
@ -38,8 +42,9 @@
**Example**
```xml
<layer ... type="Round">
.. code-block:: cpp
<layer ... type="Round">
<data mode="half_to_even"/>
<input>
<port id="0">
@ -53,5 +58,6 @@
<dim>56</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
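A small NumPy sketch contrasting the two ``mode`` values on the sample input from the spec (illustrative only; the helper expressions are ours, not part of the operation set):

```python
import numpy as np

x = np.array([-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5])

# mode="half_to_even": banker's rounding (NumPy's default np.round behaviour)
print(np.round(x))                             # [-4. -2. -2.  0.  1.  2.  2.  2.]

# mode="half_away_from_zero": halves are rounded away from zero
print(np.sign(x) * np.floor(np.abs(x) + 0.5))  # [-5. -2. -2.  1.  1.  2.  2.  3.]
```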

View File

@ -1,14 +1,17 @@
# ROIAlign {#openvino_docs_ops_detection_ROIAlign_3}
@sphinxdirective
**Versioned name**: *ROIAlign-3*
**Category**: *Object detection*
**Short description**: *ROIAlign* is a *pooling layer* used over feature maps of non-uniform input sizes and outputs a feature map of a fixed size.
**Detailed description**: [Reference](https://arxiv.org/abs/1703.06870).
**Detailed description**: `Reference <https://arxiv.org/abs/1703.06870>`__.
*ROIAlign* performs the following for each Region of Interest (ROI) for each input feature map:
1. Multiply box coordinates with *spatial_scale* to produce box coordinates relative to the input feature map size.
2. Divide the box into bins according to the *sampling_ratio* attribute.
3. Apply bilinear interpolation with 4 points in each bin and apply maximum or average pooling based on *mode* attribute to produce output feature map element.
@ -19,35 +22,35 @@
* **Description**: *pooled_h* is the height of the ROI output feature map.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *pooled_w*
* **Description**: *pooled_w* is the width of the ROI output feature map.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *sampling_ratio*
* **Description**: *sampling_ratio* is the number of bins over height and width to use to calculate each output feature map element. If the value
is equal to 0 then use adaptive number of elements over height and width: `ceil(roi_height / pooled_h)` and `ceil(roi_width / pooled_w)` respectively.
* **Description**: *sampling_ratio* is the number of bins over height and width to use to calculate each output feature map element. If the value is equal to 0, an adaptive number of elements over height and width is used: ``ceil(roi_height / pooled_h)`` and ``ceil(roi_width / pooled_w)``, respectively.
* **Range of values**: a non-negative integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *spatial_scale*
* **Description**: *spatial_scale* is a multiplicative spatial scale factor to translate ROI coordinates from their input spatial scale to the scale used when pooling.
* **Range of values**: a positive floating-point number
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *mode*
* **Description**: *mode* specifies a method to perform pooling to produce output feature map elements.
* **Range of values**:
* *max* - maximum pooling
* *avg* - average pooling
* **Type**: string
@ -55,17 +58,15 @@
**Inputs**:
* **1**: 4D input tensor of shape `[N, C, H, W]` with feature maps of type *T*. **Required.**
* **1**: 4D input tensor of shape ``[N, C, H, W]`` with feature maps of type *T*. **Required.**
* **2**: 2D input tensor of shape `[NUM_ROIS, 4]` describing box consisting of 4 element tuples: `[x_1, y_1, x_2, y_2]` in relative coordinates of type *T*.
The box height and width are calculated the following way: `roi_width = max(spatial_scale * (x_2 - x_1), 1.0)`,
`roi_height = max(spatial_scale * (y_2 - y_1), 1.0)`, so the malformed boxes are expressed as a box of size `1 x 1`. **Required.**
* **2**: 2D input tensor of shape ``[NUM_ROIS, 4]`` describing box consisting of 4 element tuples: ``[x_1, y_1, x_2, y_2]`` in relative coordinates of type *T*. The box height and width are calculated the following way: ``roi_width = max(spatial_scale * (x_2 - x_1), 1.0)``, ``roi_height = max(spatial_scale * (y_2 - y_1), 1.0)``, so the malformed boxes are expressed as a box of size ``1 x 1``. **Required.**
* **3**: 1D input tensor of shape `[NUM_ROIS]` with batch indices of type *IND_T*. **Required.**
* **3**: 1D input tensor of shape ``[NUM_ROIS]`` with batch indices of type *IND_T*. **Required.**
**Outputs**:
* **1**: 4D output tensor of shape `[NUM_ROIS, C, pooled_h, pooled_w]` with feature maps of type *T*.
* **1**: 4D output tensor of shape ``[NUM_ROIS, C, pooled_h, pooled_w]`` with feature maps of type *T*.
**Types**
@ -76,8 +77,9 @@ The box height and width are calculated the following way: `roi_width = max(spat
**Example**
```xml
<layer ... type="ROIAlign" ... >
.. code-block:: cpp
<layer ... type="ROIAlign" ... >
<data pooled_h="6" pooled_w="6" spatial_scale="16.0" sampling_ratio="2" mode="avg"/>
<input>
<port id="0">
@ -102,5 +104,7 @@ The box height and width are calculated the following way: `roi_width = max(spat
<dim>6</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
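The box-size and adaptive-sampling rules above can be summarized in a short Python sketch (hypothetical input values; the bilinear pooling kernel itself is omitted):

```python
import math

spatial_scale = 16.0
pooled_h, pooled_w = 6, 6
x_1, y_1, x_2, y_2 = 0.0, 0.0, 0.05, 0.0   # a degenerate (zero-height) box

# Malformed boxes are clamped to a 1 x 1 box.
roi_width  = max(spatial_scale * (x_2 - x_1), 1.0)   # 16 * 0.05 = 0.8 -> 1.0
roi_height = max(spatial_scale * (y_2 - y_1), 1.0)   # 16 * 0.0  = 0.0 -> 1.0

# sampling_ratio == 0 selects an adaptive number of bins per output element.
sampling_ratio = 0
bins_h = sampling_ratio if sampling_ratio else math.ceil(roi_height / pooled_h)
bins_w = sampling_ratio if sampling_ratio else math.ceil(roi_width / pooled_w)
print(roi_width, roi_height, bins_h, bins_w)         # 1.0 1.0 1 1
```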

View File

@ -1,14 +1,17 @@
# ROIAlign {#openvino_docs_ops_detection_ROIAlign_9}
@sphinxdirective
**Versioned name**: *ROIAlign-9*
**Category**: *Object detection*
**Short description**: *ROIAlign* is a *pooling layer* used over feature maps of non-uniform input sizes and outputs a feature map of a fixed size.
**Detailed description**: [Reference](https://arxiv.org/abs/1703.06870).
**Detailed description**: `Reference <https://arxiv.org/abs/1703.06870>`__.
*ROIAlign* performs the following for each Region of Interest (ROI) for each input feature map:
1. Multiply box coordinates with *spatial_scale* to produce box coordinates relative to the input feature map size based on *aligned_mode* attribute.
2. Divide the box into bins according to the *sampling_ratio* attribute.
3. Apply bilinear interpolation with 4 points in each bin and apply maximum or average pooling based on *mode* attribute to produce output feature map element.
@ -19,35 +22,35 @@
* **Description**: *pooled_h* is the height of the ROI output feature map.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *pooled_w*
* **Description**: *pooled_w* is the width of the ROI output feature map.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *sampling_ratio*
* **Description**: *sampling_ratio* is the number of bins over height and width to use to calculate each output feature map element. If the value
is equal to 0 then use adaptive number of elements over height and width: `ceil(roi_height / pooled_h)` and `ceil(roi_width / pooled_w)` respectively.
* **Description**: *sampling_ratio* is the number of bins over height and width to use to calculate each output feature map element. If the value is equal to 0, an adaptive number of elements over height and width is used: ``ceil(roi_height / pooled_h)`` and ``ceil(roi_width / pooled_w)``, respectively.
* **Range of values**: a non-negative integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *spatial_scale*
* **Description**: *spatial_scale* is a multiplicative spatial scale factor to translate ROI coordinates from their input spatial scale to the scale used when pooling.
* **Range of values**: a positive floating-point number
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *mode*
* **Description**: *mode* specifies a method to perform pooling to produce output feature map elements.
* **Range of values**:
* *max* - maximum pooling
* *avg* - average pooling
* **Type**: string
@ -57,6 +60,7 @@
* **Description**: *aligned_mode* specifies how to transform the coordinate in original tensor to the resized tensor.
* **Range of values**: name of the transformation mode in string format (here spatial_scale is resized_shape[x] / original_shape[x], resized_shape[x] is the shape of resized tensor in axis x, original_shape[x] is the shape of original tensor in axis x and x_original is a coordinate in axis x, for any axis x from the input axes):
* *asymmetric* - the coordinate in the resized tensor axis x is calculated according to the formula x_original * spatial_scale
* *half_pixel_for_nn* - the coordinate in the resized tensor axis x is x_original * spatial_scale - 0.5
* *half_pixel* - the coordinate in the resized tensor axis x is calculated as ((x_original + 0.5) * spatial_scale) - 0.5
@ -66,19 +70,19 @@
**Inputs**:
* **1**: 4D input tensor of shape `[N, C, H, W]` with feature maps of type *T*. **Required.**
* **1**: 4D input tensor of shape ``[N, C, H, W]`` with feature maps of type *T*. **Required.**
* **2**: 2D input tensor of shape `[NUM_ROIS, 4]` describing box consisting of 4 element tuples: `[x_1, y_1, x_2, y_2]` in relative coordinates of type *T*.
The box height and width are calculated the following way:
* If *aligned_mode* equals *asymmetric*: `roi_width = max(spatial_scale * (x_2 - x_1), 1.0)`, `roi_height = max(spatial_scale * (y_2 - y_1), 1.0)`, so the malformed boxes are expressed as a box of size `1 x 1`.
* else: `roi_width = spatial_scale * (x_2 - x_1)`, `roi_height = spatial_scale * (y_2 - y_1)`.
* **2**: 2D input tensor of shape ``[NUM_ROIS, 4]`` describing box consisting of 4 element tuples: ``[x_1, y_1, x_2, y_2]`` in relative coordinates of type *T*. The box height and width are calculated the following way:
* If *aligned_mode* equals *asymmetric*: ``roi_width = max(spatial_scale * (x_2 - x_1), 1.0)``, ``roi_height = max(spatial_scale * (y_2 - y_1), 1.0)``, so the malformed boxes are expressed as a box of size ``1 x 1``.
* else: ``roi_width = spatial_scale * (x_2 - x_1)``, ``roi_height = spatial_scale * (y_2 - y_1)``.
* **Required.**
* **3**: 1D input tensor of shape `[NUM_ROIS]` with batch indices of type *IND_T*. **Required.**
* **3**: 1D input tensor of shape ``[NUM_ROIS]`` with batch indices of type *IND_T*. **Required.**
**Outputs**:
* **1**: 4D output tensor of shape `[NUM_ROIS, C, pooled_h, pooled_w]` with feature maps of type *T*.
* **1**: 4D output tensor of shape ``[NUM_ROIS, C, pooled_h, pooled_w]`` with feature maps of type *T*.
**Types**
@ -89,8 +93,9 @@ The box height and width are calculated the following way:
**Example**
```xml
<layer ... type="ROIAlign" ... >
.. code-block:: cpp
<layer ... type="ROIAlign" ... >
<data pooled_h="6" pooled_w="6" spatial_scale="16.0" sampling_ratio="2" mode="avg" aligned_mode="half_pixel"/>
<input>
<port id="0">
@ -115,5 +120,6 @@ The box height and width are calculated the following way:
<dim>6</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
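A small sketch of the *aligned_mode* coordinate transforms listed above (the helper name and inputs are ours, for illustration only):

```python
def transform_coordinate(x_original: float, spatial_scale: float,
                         aligned_mode: str) -> float:
    if aligned_mode == "asymmetric":
        return x_original * spatial_scale
    if aligned_mode == "half_pixel_for_nn":
        return x_original * spatial_scale - 0.5
    if aligned_mode == "half_pixel":
        return (x_original + 0.5) * spatial_scale - 0.5
    raise ValueError(f"unknown aligned_mode: {aligned_mode}")

# With spatial_scale = 0.0625 (1/16), the three modes diverge slightly:
for mode in ("asymmetric", "half_pixel_for_nn", "half_pixel"):
    print(mode, transform_coordinate(10.0, 0.0625, mode))
```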

View File

@ -1,5 +1,7 @@
# ROIPooling {#openvino_docs_ops_detection_ROIPooling_1}
@sphinxdirective
**Versioned name**: *ROIPooling-1*
**Category**: *Object detection*
@ -9,15 +11,16 @@
**Detailed description**:
*ROIPooling* performs the following operations for each Region of Interest (ROI) over the input feature maps:
1. Produce box coordinates relative to the input feature map size, based on *method* attribute.
2. Calculate box height and width.
3. Divide the box into bins according to the pooled size attributes, `[pooled_h, pooled_w]`.
3. Divide the box into bins according to the pooled size attributes, ``[pooled_h, pooled_w]``.
4. Apply maximum or bilinear interpolation pooling, for each bin, based on *method* attribute to produce output feature map element.
The box height and width have different representation based on **method** attribute:
* *max*: Expressed in relative coordinates. The box height and width are calculated the following way: `roi_width = max(spatial_scale * (x_2 - x_1), 1.0)`,
`roi_height = max(spatial_scale * (y_2 - y_1), 1.0)`, so the malformed boxes are expressed as a box of size `1 x 1`.
* *bilinear*: Expressed in absolute coordinates and normalized to the `[0, 1]` interval. The box height and width are calculated the following way: `roi_width = (W - 1) * (x_2 - x_1)`, `roi_height = (H - 1) * (y_2 - y_1)`.
* *max*: Expressed in relative coordinates. The box height and width are calculated the following way: ``roi_width = max(spatial_scale * (x_2 - x_1), 1.0)``, ``roi_height = max(spatial_scale * (y_2 - y_1), 1.0)``, so the malformed boxes are expressed as a box of size ``1 x 1``.
* *bilinear*: Expressed in absolute coordinates and normalized to the ``[0, 1]`` interval. The box height and width are calculated the following way: ``roi_width = (W - 1) * (x_2 - x_1)``, ``roi_height = (H - 1) * (y_2 - y_1)``.
**Attributes**
@ -25,26 +28,26 @@ The box height and width have different representation based on **method** attri
* **Description**: *pooled_h* is the height of the ROI output feature map. For example, *pooled_h* equal to 6 means that the height of the output of *ROIPooling* is 6.
* **Range of values**: a non-negative integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *pooled_w*
* **Description**: *pooled_w* is the width of the ROI output feature map. For example, *pooled_w* equal to 6 means that the width of the output of *ROIPooling* is 6.
* **Range of values**: a non-negative integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *spatial_scale*
* **Description**: *spatial_scale* is the ratio of the input feature map over the input image size.
* **Range of values**: a positive floating-point number
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *method*
* **Description**: *method* specifies a method to perform pooling. If the method is *bilinear*, the input box coordinates are normalized to the `[0, 1]` interval.
* **Description**: *method* specifies a method to perform pooling. If the method is *bilinear*, the input box coordinates are normalized to the ``[0, 1]`` interval.
* **Range of values**: *max* or *bilinear*
* **Type**: string
* **Default value**: *max*
@ -52,15 +55,15 @@ The box height and width have different representation based on **method** attri
**Inputs**:
* **1**: 4D input tensor of shape `[N, C, H, W]` with feature maps of type *T*. **Required.**
* **1**: 4D input tensor of shape ``[N, C, H, W]`` with feature maps of type *T*. **Required.**
* **2**: 2D input tensor of shape `[NUM_ROIS, 5]` describing region of interest box consisting of 5 element tuples of type *T*: `[batch_id, x_1, y_1, x_2, y_2]`. **Required.**
Batch indices must be in the range of `[0, N-1]`.
* **2**: 2D input tensor of shape ``[NUM_ROIS, 5]`` describing region of interest box consisting of 5 element tuples of type *T*: ``[batch_id, x_1, y_1, x_2, y_2]``. **Required.**
Batch indices must be in the range of ``[0, N-1]``.
**Outputs**:
* **1**: 4D output tensor of shape `[NUM_ROIS, C, pooled_h, pooled_w]` with feature maps of type *T*.
* **1**: 4D output tensor of shape ``[NUM_ROIS, C, pooled_h, pooled_w]`` with feature maps of type *T*.
**Types**
@ -68,10 +71,12 @@ Batch indices must be in the range of `[0, N-1]`.
**Example**
```xml
<layer ... type="ROIPooling" ... >
.. code-block:: cpp
<layer ... type="ROIPooling" ... >
<data pooled_h="6" pooled_w="6" spatial_scale="0.062500"/>
<input> ... </input>
<output> ... </output>
</layer>
```
@endsphinxdirective
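The two box-size conventions above differ as follows (hypothetical values; only the size computation is shown, not the pooling itself):

```python
H, W = 224, 224
spatial_scale = 0.0625
x_1, y_1, x_2, y_2 = 0.2, 0.2, 0.6, 0.5

# method="max": relative coordinates; malformed boxes clamp to 1 x 1
roi_width_max  = max(spatial_scale * (x_2 - x_1), 1.0)   # 0.025 -> 1.0
roi_height_max = max(spatial_scale * (y_2 - y_1), 1.0)   # 0.01875 -> 1.0

# method="bilinear": coordinates are normalized to [0, 1]
roi_width_bilinear  = (W - 1) * (x_2 - x_1)              # 223 * 0.4 = 89.2
roi_height_bilinear = (H - 1) * (y_2 - y_1)              # 223 * 0.3 = 66.9
```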

View File

@ -1,65 +1,68 @@
# RegionYolo {#openvino_docs_ops_detection_RegionYolo_1}
@sphinxdirective
**Versioned name**: *RegionYolo-1*
**Category**: *Object detection*
**Short description**: *RegionYolo* computes the coordinates of regions with probability for each class.
**Detailed description**: This operation is directly mapped to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper.
**Detailed description**: This operation is directly mapped to the `YOLO9000: Better, Faster, Stronger <https://arxiv.org/pdf/1612.08242.pdf>`__ paper.
**Attributes**:
* *anchors*
* **Description**: *anchors* codes a flattened list of pairs `[width, height]` that codes prior box sizes. This attribute is not used in output computation, but it is required for post-processing to restore real box coordinates.
* **Description**: *anchors* codes a flattened list of pairs ``[width, height]`` that codes prior box sizes. This attribute is not used in output computation, but it is required for post-processing to restore real box coordinates.
* **Range of values**: a list of positive floating-point numbers of any length
* **Type**: `float[]`
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *no*
* *axis*
* **Description**: starting axis index in the input tensor `data` shape that will be flattened in the output; the end of flattened range is defined by `end_axis` attribute.
* **Range of values**: `-rank(data) .. rank(data)-1`
* **Type**: `int`
* **Description**: starting axis index in the input tensor ``data`` shape that will be flattened in the output; the end of flattened range is defined by ``end_axis`` attribute.
* **Range of values**: ``-rank(data) .. rank(data)-1``
* **Type**: ``int``
* **Required**: *yes*
* *coords*
* **Description**: *coords* is the number of coordinates for each region.
* **Range of values**: an integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *classes*
* **Description**: *classes* is the number of classes for each region.
* **Range of values**: an integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *end_axis*
* **Description**: ending axis index in the input tensor `data` shape that will be flattened in the output; the beginning of the flattened range is defined by `axis` attribute.
* **Range of values**: `-rank(data)..rank(data)-1`
* **Type**: `int`
* **Description**: ending axis index in the input tensor ``data`` shape that will be flattened in the output; the beginning of the flattened range is defined by ``axis`` attribute.
* **Range of values**: ``-rank(data)..rank(data)-1``
* **Type**: ``int``
* **Required**: *yes*
* *num*
* **Description**: *num* is the number of regions.
* **Range of values**: an integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *do_softmax*
* **Description**: *do_softmax* is a flag that specifies the inference method and affects how the number of regions is determined. It also affects output shape. If it is 0, then output shape is 4D, and 2D otherwise.
* **Range of values**:
* *false* - do not perform softmax
* *true* - perform softmax
* **Type**: `boolean`
* **Type**: ``boolean``
* **Default value**: true
* **Required**: *no*
@ -67,29 +70,31 @@
* **Description**: *mask* specifies the number of regions. Use this attribute instead of *num* when *do_softmax* is equal to 0.
* **Range of values**: a list of integers
* **Type**: `int[]`
* **Default value**: `[]`
* **Type**: ``int[]``
* **Default value**: ``[]``
* **Required**: *no*
**Inputs**:
* **1**: `data` - 4D tensor of type *T* and shape `[N, C, H, W]`. **Required.**
* **1**: ``data`` - 4D tensor of type *T* and shape ``[N, C, H, W]``. **Required.**
**Outputs**:
* **1**: tensor of type *T* and rank 4 or less that codes detected regions. Refer to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper to decode the output as boxes. `anchors` should be used to decode real box coordinates. If `do_softmax` is set to `0`, then the output shape is `[N, (classes + coords + 1) * len(mask), H, W]`. If `do_softmax` is set to `1`, then output shape is partially flattened and defined in the following way:
* **1**: tensor of type *T* and rank 4 or less that codes detected regions. Refer to the `YOLO9000: Better, Faster, Stronger <https://arxiv.org/pdf/1612.08242.pdf>`__ paper to decode the output as boxes. ``anchors`` should be used to decode real box coordinates. If ``do_softmax`` is set to ``0``, then the output shape is ``[N, (classes + coords + 1) * len(mask), H, W]``. If ``do_softmax`` is set to ``1``, then output shape is partially flattened and defined in the following way:
`flat_dim = data.shape[axis] * data.shape[axis+1] * ... * data.shape[end_axis]`
`output.shape = [data.shape[0], ..., data.shape[axis-1], flat_dim, data.shape[end_axis + 1], ...]`
``flat_dim = data.shape[axis] * data.shape[axis+1] * ... * data.shape[end_axis]``
``output.shape = [data.shape[0], ..., data.shape[axis-1], flat_dim, data.shape[end_axis + 1], ...]``
**Types**
* *T*: any supported floating-point type.
**Example**
```xml
<!-- YOLO V3 example -->
<layer type="RegionYolo" ... >
.. code-block:: cpp
< !-- YOLO V3 example -->
<layer type="RegionYolo" ... >
<data anchors="10,14,23,27,37,58,81,82,135,169,344,319" axis="1" classes="80" coords="4" do_softmax="0" end_axis="3" mask="0,1,2" num="6"/>
<input>
<port id="0">
@ -107,10 +112,10 @@
<dim>26</dim>
</port>
</output>
</layer>
</layer>
<!-- YOLO V2 Example -->
<layer type="RegionYolo" ... >
< !-- YOLO V2 Example -->
<layer type="RegionYolo" ... >
<data anchors="1.08,1.19,3.42,4.41,6.63,11.38,9.42,5.11,16.62,10.52" axis="1" classes="20" coords="4" do_softmax="1" end_axis="3" num="5"/>
<input>
<port id="0">
@ -126,6 +131,7 @@
<dim>21125</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
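The output-shape rules above can be checked with a short sketch (the function name is ours; the shapes are taken from the two IR examples):

```python
def region_yolo_output_shape(data_shape, classes, coords, do_softmax,
                             axis, end_axis, mask=(), num=0):
    if not do_softmax:
        # [N, (classes + coords + 1) * len(mask), H, W]
        return [data_shape[0], (classes + coords + 1) * len(mask),
                data_shape[2], data_shape[3]]
    # flatten data.shape[axis] .. data.shape[end_axis] into one dimension
    flat_dim = 1
    for d in data_shape[axis:end_axis + 1]:
        flat_dim *= d
    return data_shape[:axis] + [flat_dim] + data_shape[end_axis + 1:]

# YOLO V3 example: do_softmax=0, mask=[0,1,2] -> (80 + 4 + 1) * 3 = 255 channels
print(region_yolo_output_shape([1, 255, 26, 26], 80, 4, 0, 1, 3, mask=(0, 1, 2)))
# YOLO V2 example: do_softmax=1 -> [1, 125 * 13 * 13] = [1, 21125]
print(region_yolo_output_shape([1, 125, 13, 13], 20, 4, 1, 1, 3))
```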

View File

@ -1,14 +1,14 @@
# ReorgYolo Layer {#openvino_docs_ops_detection_ReorgYolo_1}
@sphinxdirective
**Versioned name**: *ReorgYolo-1*
**Category**: *Object detection*
**Short description**: *ReorgYolo* reorganizes the input tensor, taking the stride into account.
**Detailed description**:
[Reference](https://arxiv.org/pdf/1612.08242.pdf)
**Detailed description**: `Reference <https://arxiv.org/pdf/1612.08242.pdf>`__
**Attributes**
@ -16,21 +16,22 @@
* **Description**: *stride* is the spatial step of the reorganization: each block of ``stride x stride`` spatial elements is moved into the channel dimension of the output blob.
* **Range of values**: positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
**Inputs**:
* **1**: 4D input tensor of any type and shape `[N, C, H, W]`. `H` and `W` should be divisible by `stride` and `C >= (stride*stride)`. **Required.**
* **1**: 4D input tensor of any type and shape ``[N, C, H, W]``. ``H`` and ``W`` should be divisible by ``stride`` and ``C >= (stride*stride)``. **Required.**
**Outputs**:
* **1**: 4D output tensor of the same type as input tensor and shape `[N, C*stride*stride, H/stride, W/stride]`.
* **1**: 4D output tensor of the same type as input tensor and shape ``[N, C*stride*stride, H/stride, W/stride]``.
**Example**
```xml
<layer id="89" name="reorg" type="ReorgYolo">
.. code-block:: cpp
<layer id="89" name="reorg" type="ReorgYolo">
<data stride="2"/>
<input>
<port id="0">
@ -48,5 +49,6 @@
<dim>13</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
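A one-function sketch of the shape rule above (the input dimensions are hypothetical, chosen to match the example's stride of 2):

```python
def reorg_yolo_output_shape(n, c, h, w, stride):
    # H and W must be divisible by stride, and C >= stride * stride
    assert h % stride == 0 and w % stride == 0 and c >= stride * stride
    return [n, c * stride * stride, h // stride, w // stride]

print(reorg_yolo_output_shape(1, 64, 26, 26, stride=2))  # [1, 256, 13, 13]
```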

View File

@ -1,4 +1,6 @@
## RandomUniform {#openvino_docs_ops_generation_RandomUniform_8}
# RandomUniform {#openvino_docs_ops_generation_RandomUniform_8}
@sphinxdirective
**Versioned name**: *RandomUniform-8*
@ -8,7 +10,7 @@
**Detailed description**:
*RandomUniform* operation generates random numbers from a uniform distribution in the range `[minval, maxval)`.
*RandomUniform* operation generates random numbers from a uniform distribution in the range ``[minval, maxval)``.
The generation algorithm is based on an underlying random integer generator that uses the Philox algorithm. The Philox algorithm
is a counter-based pseudo-random generator, which produces uint32 values. A single invocation of the Philox algorithm returns
four random result values, depending on the given *key* and *counter* values. *Key* and *counter* are initialized
@ -16,12 +18,13 @@ with *global_seed* and *op_seed* attributes respectively.
If both seed values are equal to zero, RandomUniform generates a non-deterministic sequence.
\f[
key = global_seed\\
counter = op_seed
\f]
Link to the original paper [Parallel Random Numbers: As Easy as 1, 2, 3](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
.. math::
key = global_seed\\
counter = op_seed
Link to the original paper `Parallel Random Numbers: As Easy as 1, 2, 3 <https://www.thesalmons.org/john/random123/papers/random123sc11.pdf>`__.
The result of Philox is calculated by applying a fixed number of so-called "rounds", each updating the *key* and *counter* values.
This implementation uses the 4x32_10 version of the Philox algorithm, where the number of rounds = 10.
@ -29,155 +32,180 @@ This implementation uses 4x32_10 version of Philox algorithm, where number of ro
Suppose we have *n*, which determines the *n*-th group of 4 elements of the random sequence.
In each round, *key*, *counter* and *n* are split into pairs of uint32 values:
\f[
R = cast\_to\_uint32(value)\\
L = cast\_to\_uint32(value >> 32),
\f]
.. math::
R = cast\_to\_uint32(value)\\
L = cast\_to\_uint32(value >> 32),
where *cast\_to\_uint32* - static cast to uint32, *value* - uint64 input value, *L*, *R* - uint32
result values, >> - bitwise right shift.
Then *n* and *counter* are updated with the following formula:
\f[
L'= mullo(R, M)\\
R' = mulhi(R, M) {\oplus} k {\oplus} L \\
mulhi(a, b) = floor((a {\times} b) / 2^{32}) \\
mullo(a, b) = (a {\times} b) \mod 2^{32}
\f]
where \f${\oplus}\f$ - bitwise xor, *k* = \f$R_{key}\f$ for updating counter, *k* = \f$L_{key}\f$ for updating *n*,
*M* = `0xD2511F53` for updating *n*, *M* = `0xCD9E8D57` for updating *counter*.
.. math::
L'= mullo(R, M)\\
R' = mulhi(R, M) {\oplus} k {\oplus} L \\
mulhi(a, b) = floor((a {\times} b) / 2^{32}) \\
mullo(a, b) = (a {\times} b) \mod 2^{32}
where :math:`{\oplus}` - bitwise xor, *k* = :math:`R_{key}` for updating counter, *k* = :math:`L_{key}` for updating *n*, *M* = ``0xD2511F53`` for updating *n*, *M* = ``0xCD9E8D57`` for updating *counter*.
After each round, the *key* is updated by summing it with another pair of constant values:
\f[
L += 0x9E3779B9 \\
R += 0xBB67AE85
\f]
Values \f$L'_{n}, R'_{n}, L'_{counter}, R'_{counter}\f$ are the resulting four random numbers.
.. math::
L += 0x9E3779B9 \\
R += 0xBB67AE85
Values :math:`L'_{n}, R'_{n}, L'_{counter}, R'_{counter}` are the resulting four random numbers.
Float values between [0..1) are obtained from 32-bit integers by the following rules.
Float16 is formatted as follows: *sign*(1 bit) *exponent*(5 bits) *mantissa*(10 bits). The value is interpreted
Float16 is formatted as follows: *sign* (1 bit) *exponent* (5 bits) *mantissa* (10 bits). The value is interpreted
using the following formula:
\f[
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 15}
\f]
.. math::
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 15}
so to obtain float16 values *sign*, *exponent* and *mantissa* are set as follows:
```
sign = 0
exponent = 15 - representation of a zero exponent.
mantissa = 10 right bits from generated uint32 random value.
```
.. code-block:: cpp
sign = 0
exponent = 15 - representation of a zero exponent.
mantissa = 10 right bits from generated uint32 random value.
So the resulting float16 value is:
```
x_uint16 = x // Truncate the upper 16 bits.
val = ((exponent << 10) | x_uint16 & 0x3ffu) - 1.0,
```
.. code-block:: cpp
x_uint16 = x // Truncate the upper 16 bits.
val = ((exponent << 10) | x_uint16 & 0x3ffu) - 1.0,
where x is the generated uint32 random value.
Float32 is formatted as follows: *sign*(1 bit) *exponent*(8 bits) *mantissa*(23 bits). The value is interpreted
using the following formula:
\f[
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 127}
\f]
Float32 is formatted as follows: *sign* (1 bit) *exponent* (8 bits) *mantissa* (23 bits). The value is interpreted using the following formula:
.. math::
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 127}
so to obtain float values *sign*, *exponent* and *mantissa* are set as follows:
```
sign = 0
exponent = 127 - representation of a zero exponent.
mantissa = 23 right bits from generated uint32 random value.
```
.. code-block:: cpp
sign = 0
exponent = 127 - representation of a zero exponent.
mantissa = 23 right bits from generated uint32 random value.
So the resulting float value is:
```
val = ((exponent << 23) | x & 0x7fffffu) - 1.0,
```
.. code-block:: cpp
val = ((exponent << 23) | x & 0x7fffffu) - 1.0,
where x is the generated uint32 random value.
Double is formatted as follows: *sign*(1 bit) *exponent*(11 bits) *mantissa*(52 bits). The value is interpreted
using the following formula:
\f[
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 1023}
\f]
Double is formatted as follows: *sign* (1 bit) *exponent* (11 bits) *mantissa* (52 bits). The value is interpreted using the following formula:
.. math::
(-1)^{sign} * 1, mantissa * 2 ^{exponent - 1023}
so to obtain double values *sign*, *exponent* and *mantissa* are set as follows:
```
sign = 0
exponent = 1023 - representation of a zero exponent.
mantissa = 52 right bits from two concatenated uint32 values from random integer generator.
```
.. code-block:: cpp
sign = 0
exponent = 1023 - representation of a zero exponent.
mantissa = 52 right bits from two concatenated uint32 values from random integer generator.
So the resulting double is obtained as follows:
```
mantissa_h = x0 & 0xfffffu; // upper 20 bits of mantissa
mantissa_l = x1; // lower 32 bits of mantissa
mantissa = (mantissa_h << 32) | mantissa_l;
val = ((exponent << 52) | mantissa) - 1.0,
```
.. code-block:: cpp
mantissa_h = x0 & 0xfffffu; // upper 20 bits of mantissa
mantissa_l = x1; // lower 32 bits of mantissa
mantissa = (mantissa_h << 32) | mantissa_l;
val = ((exponent << 52) | mantissa) - 1.0,
where x0, x1 are the generated uint32 random values.
To obtain a value in a specified range, each value is processed with the following formulas:
For float values:
\f[
result = x * (maxval - minval) + minval,
\f]
.. math::
result = x * (maxval - minval) + minval,
where *x* is a random float or double value between [0..1).
For integer values:
\f[
result = x \mod (maxval - minval) + minval,
\f]
.. math::
result = x \mod (maxval - minval) + minval,
where *x* is a uint32 random value.
Example 1. *RandomUniform* output with `global_seed` = 150, `op_seed` = 10, `output_type` = f32:
Example 1. *RandomUniform* output with ``global_seed`` = 150, ``op_seed`` = 10, ``output_type`` = f32:
```
input_shape = [ 3, 3 ]
output = [[0.7011236 0.30539632 0.93931055]
.. code-block:: cpp
input_shape = [ 3, 3 ]
output = [[0.7011236 0.30539632 0.93931055]
[0.9456035 0.11694777 0.50770056]
[0.5197197 0.22727466 0.991374 ]]
```
Example 2. *RandomUniform* output with `global_seed` = 80, `op_seed` = 100, `output_type` = double:
Example 2. *RandomUniform* output with ``global_seed`` = 80, ``op_seed`` = 100, ``output_type`` = double:
```
input_shape = [ 2, 2 ]
minval = 2
maxval = 10
output = [[5.65927959 4.23122376]
.. code-block:: cpp
input_shape = [ 2, 2 ]
minval = 2
maxval = 10
output = [[5.65927959 4.23122376]
[2.67008206 2.36423758]]
```
Example 3. *RandomUniform* output with `global_seed` = 80, `op_seed` = 100, `output_type` = i32:
Example 3. *RandomUniform* output with ``global_seed`` = 80, ``op_seed`` = 100, ``output_type`` = i32:
```
input_shape = [ 2, 3 ]
minval = 50
maxval = 100
output = [[65 70 56]
.. code-block:: cpp
input_shape = [ 2, 3 ]
minval = 50
maxval = 100
output = [[65 70 56]
[59 82 92]]
```
**Attributes**:
* *output_type*
* ``output_type``
* **Description**: the type of the output. Determines generation algorithm and affects resulting values.
Output numbers generated for different values of *output_type* may not be equal.
* **Description**: the type of the output. Determines generation algorithm and affects resulting values. Output numbers generated for different values of *output_type* may not be equal.
* **Range of values**: "i32", "i64", "f16", "bf16", "f32", "f64".
* **Type**: string
* **Required**: *Yes*
* *global_seed*
* ``global_seed``
* **Description**: global seed value.
* **Range of values**: positive integers
@ -185,7 +213,7 @@ output = [[65 70 56]
* **Default value**: 0
* **Required**: *Yes*
* *op_seed*
* ``op_seed``
* **Description**: operational seed value.
* **Range of values**: positive integers
@ -195,34 +223,33 @@ output = [[65 70 56]
**Inputs**:
* **1**: `shape` - 1D tensor of type *T_SHAPE* describing output shape. **Required.**
* **1**: ``shape`` - 1D tensor of type *T_SHAPE* describing output shape. **Required.**
* **2**: `minval` - scalar or 1D tensor with 1 element with type specified by the attribute *output_type*,
defines the lower bound on the range of random values to generate (inclusive). **Required.**
* **2**: ``minval`` - scalar or 1D tensor with 1 element with type specified by the attribute *output_type*, defines the lower bound on the range of random values to generate (inclusive). **Required.**
* **3**: `maxval` - scalar or 1D tensor with 1 element with type specified by the attribute *output_type*,
defines the upper bound on the range of random values to generate (exclusive). **Required.**
* **3**: ``maxval`` - scalar or 1D tensor with 1 element with type specified by the attribute *output_type*, defines the upper bound on the range of random values to generate (exclusive). **Required.**
**Outputs**:
* **1**: A tensor with type specified by the attribute *output_type* and shape defined by `shape` input tensor.
* **1**: A tensor with type specified by the attribute *output_type* and shape defined by ``shape`` input tensor.
**Types**
* *T_SHAPE*: `int32` or `int64`.
* *T_SHAPE*: ``int32`` or ``int64``.
*Example 1: IR example.*
```xml
<layer ... name="RandomUniform" type="RandomUniform">
.. code-block:: cpp
<layer ... name="RandomUniform" type="RandomUniform">
<data output_type="f32" global_seed="234" op_seed="148"/>
<input>
<port id="0" precision="I32"> <!-- shape value: [2, 3, 10] -->
<port id="0" precision="I32"> < !-- shape value: [2, 3, 10] -->
<dim>3</dim>
</port>
<port id="1" precision="FP32"/> <!-- min value -->
<port id="2" precision="FP32"/> <!-- max value -->
<port id="1" precision="FP32"/> < !-- min value -->
<port id="2" precision="FP32"/> < !-- max value -->
</input>
<output>
<port id="3" precision="FP32" names="RandomUniform:0">
@ -231,5 +258,8 @@ output = [[65 70 56]
<dim>10</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
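The uint32-to-float conversion described above can be reproduced in a few lines of Python (a sketch of the float32 path only; the Philox generator itself is omitted):

```python
import struct

def uint32_to_unit_float(x: int) -> float:
    # sign = 0, exponent = 127 (biased zero exponent), mantissa = low 23 bits.
    # The bit pattern encodes a float in [1.0, 2.0); subtracting 1.0 gives [0, 1).
    bits = (127 << 23) | (x & 0x7FFFFF)
    return struct.unpack("<f", struct.pack("<I", bits))[0] - 1.0

def to_range(x01: float, minval: float, maxval: float) -> float:
    # result = x * (maxval - minval) + minval, per the float formula above
    return x01 * (maxval - minval) + minval

print(uint32_to_unit_float(0))                              # 0.0
print(to_range(uint32_to_unit_float(0x7FFFFF), 2.0, 10.0))  # just below 10.0
```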

View File

@ -1,5 +1,7 @@
# Range {#openvino_docs_ops_generation_Range_1}
@sphinxdirective
**Versioned name**: *Range-1*
**Category**: *Generation*
@ -28,62 +30,68 @@ No attributes available.
*Range* operation generates a sequence of numbers starting from the value in the first input (start) up to but not including the value in the second input (stop) with a step equal to the value in the third input, according to the following formula:
For a positive `step`:
For a positive ``step``:
\f[
start<=val[i]<stop,
\f]
.. math::
start<=val[i]<stop,
for a negative `step`:
for a negative ``step``:
\f[
start>=val[i]>stop,
\f]
.. math::
start>=val[i]>stop,
where
\f[
val[i]=start+i*step
\f]
.. math::
val[i]=start+i*step
**Examples**
*Example 1: positive step*
```xml
<layer ... type="Range">
.. code-block:: cpp
<layer ... type="Range">
<input>
<port id="0"> <!-- start value: 2 -->
<port id="0"> < !-- start value: 2 -->
</port>
<port id="1"> <!-- stop value: 23 -->
<port id="1"> < !-- stop value: 23 -->
</port>
<port id="2"> <!-- step value: 3 -->
<port id="2"> < !-- step value: 3 -->
</port>
</input>
<output>
<port id="3">
<dim>7</dim> <!-- [ 2, 5, 8, 11, 14, 17, 20] -->
<dim>7</dim> < !-- [ 2, 5, 8, 11, 14, 17, 20] -->
</port>
</output>
</layer>
```
</layer>
*Example 2: negative step*
```xml
<layer ... type="Range">
.. code-block:: cpp
<layer ... type="Range">
<input>
<port id="0"> <!-- start value: 23 -->
<port id="0"> < !-- start value: 23 -->
</port>
<port id="1"> <!-- stop value: 2 -->
<port id="1"> < !-- stop value: 2 -->
</port>
<port id="2"> <!-- step value: -3 -->
<port id="2"> < !-- step value: -3 -->
</port>
</input>
<output>
<port id="3">
<dim>7</dim> <!-- [23, 20, 17, 14, 11, 8, 5] -->
<dim>7</dim> < !-- [23, 20, 17, 14, 11, 8, 5] -->
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
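For intuition, the two IR examples above correspond directly to numpy.arange (a cross-check, not the implementation):

```python
import numpy as np

print(np.arange(2, 23, 3))    # [ 2  5  8 11 14 17 20] - positive step
print(np.arange(23, 2, -3))   # [23 20 17 14 11  8  5] - negative step
```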

View File

@ -1,5 +1,7 @@
# Range {#openvino_docs_ops_generation_Range_4}
@sphinxdirective
**Versioned name**: *Range-4*
**Category**: *Generation*
@ -19,7 +21,7 @@
* **1**: "start" - A scalar of type *T1*. **Required.**
* **2**: "stop" - A scalar of type *T2*. **Required.**
* **3**: "step" - A scalar of type *T3*. If `step` is equal to zero after casting to `output_type`, behavior is undefined. **Required.**
* **3**: "step" - A scalar of type *T3*. If ``step`` is equal to zero after casting to ``output_type``, behavior is undefined. **Required.**
**Outputs**:
@ -31,95 +33,103 @@
**Detailed description**:
*Range* operation generates a sequence of numbers starting from the value in the first input (`start`) up to but not including the value in the second input (`stop`) with a `step` equal to the value in the third input, according to the following formula:
*Range* operation generates a sequence of numbers starting from the value in the first input (``start``) up to but not including the value in the second input (``stop``) with a ``step`` equal to the value in the third input, according to the following formula:
For a positive `step`:
For a positive ``step``:
\f[
start<=val[i]<stop,
\f]
.. math::
start<=val[i]<stop,
for a negative `step`:
for a negative ``step``:
\f[
start>=val[i]>stop,
\f]
.. math::
start>=val[i]>stop,
the i-th element is calculated by the following formula:
\f[
val[i+1]=val[i]+step.
\f]
.. math::
val[i+1]=val[i]+step.
The calculations are done after casting all values to `accumulate_type(output_type)`. `accumulate_type` is a type that has better or equal accuracy for accumulation than `output_type` on current hardware, e.g. `fp64` for `fp16`. The number of elements is calculated in the floating-point type according to the following formula:
\f[
max(ceil((end - start) / step), 0)
\f]
This is aligned with PyTorch's operation `torch.arange`; to align with the TensorFlow operation `tf.range`, all inputs must be cast to `output_type` before calling *Range*. The rounding for casting values is done towards zero.
The calculations are done after casting all values to ``accumulate_type(output_type)``. ``accumulate_type`` is a type that has better or equal accuracy for accumulation than ``output_type`` on current hardware, e.g. ``fp64`` for ``fp16``. The number of elements is calculated in the floating-point type according to the following formula:
.. math::
max(ceil((end - start) / step), 0)
This is aligned with PyTorch's operation ``torch.arange``; to align with the TensorFlow operation ``tf.range``, all inputs must be cast to ``output_type`` before calling *Range*. The rounding for casting values is done towards zero.
**Examples**
*Example 1: positive step*
```xml
<layer ... type="Range">
.. code-block:: cpp
<layer ... type="Range">
<data output_type="i32">
<input>
<port id="0"> <!-- start value: 2 -->
<port id="0"> < !-- start value: 2 -->
</port>
<port id="1"> <!-- stop value: 23 -->
<port id="1"> < !-- stop value: 23 -->
</port>
<port id="2"> <!-- step value: 3 -->
<port id="2"> < !-- step value: 3 -->
</port>
</input>
<output>
<port id="3">
<dim>7</dim> <!-- [ 2, 5, 8, 11, 14, 17, 20] -->
<dim>7</dim> < !-- [ 2, 5, 8, 11, 14, 17, 20] -->
</port>
</output>
</layer>
```
</layer>
*Example 2: negative step*
```xml
<layer ... type="Range">
.. code-block:: cpp
<layer ... type="Range">
<data output_type="i32">
<input>
<port id="0"> <!-- start value: 23 -->
<port id="0"> < !-- start value: 23 -->
</port>
<port id="1"> <!-- stop value: 2 -->
<port id="1"> < !-- stop value: 2 -->
</port>
<port id="2"> <!-- step value: -3 -->
<port id="2"> < !-- step value: -3 -->
</port>
</input>
<output>
<port id="3">
<dim>7</dim> <!-- [23, 20, 17, 14, 11, 8, 5] -->
<dim>7</dim> < !-- [23, 20, 17, 14, 11, 8, 5] -->
</port>
</output>
</layer>
```
</layer>
*Example 3: floating-point*
```xml
<layer ... type="Range">
.. code-block:: cpp
<layer ... type="Range">
<data output_type="f32">
<input>
<port id="0"> <!-- start value: 1 -->
<port id="0"> < !-- start value: 1 -->
</port>
<port id="1"> <!-- stop value: 2.5 -->
<port id="1"> < !-- stop value: 2.5 -->
</port>
<port id="2"> <!-- step value: 0.5 -->
<port id="2"> < !-- step value: 0.5 -->
</port>
</input>
<output>
<port id="3">
<dim>3</dim> <!-- [ 1.0, 1.5, 2.0] -->
<dim>3</dim> < !-- [ 1.0, 1.5, 2.0] -->
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
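A short sketch of the element-count formula above, checked against the three IR examples (the function name is ours):

```python
import math

def range_size(start: float, stop: float, step: float) -> int:
    # max(ceil((stop - start) / step), 0)
    return max(math.ceil((stop - start) / step), 0)

print(range_size(2, 23, 3))      # 7 -> [2, 5, 8, 11, 14, 17, 20]
print(range_size(23, 2, -3))     # 7 -> [23, 20, 17, 14, 11, 8, 5]
print(range_size(1, 2.5, 0.5))   # 3 -> [1.0, 1.5, 2.0]
```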

View File

@ -1,17 +1,19 @@
# ReadValue {#openvino_docs_ops_infrastructure_ReadValue_3}
@sphinxdirective
**Versioned name**: *ReadValue-3*
**Category**: *Infrastructure*
**Short description**: *ReadValue* returns value of the `variable_id` variable.
**Short description**: *ReadValue* returns value of the ``variable_id`` variable.
**Detailed description**:
*ReadValue* returns value from the corresponding `variable_id` variable if the variable was set already by *Assign* operation and was not reset.
*ReadValue* returns value from the corresponding ``variable_id`` variable if the variable was set already by *Assign* operation and was not reset.
The operation checks that the type and shape of the output are the same as
declared in `variable_id` and returns an error otherwise. If the corresponding variable was not set or was reset,
the operation returns the value from the first input, and initializes the `variable_id` shape and type
declared in ``variable_id`` and returns an error otherwise. If the corresponding variable was not set or was reset,
the operation returns the value from the first input, and initializes the ``variable_id`` shape and type
with the shape and type from the first input.
**Attributes**:
@ -25,16 +27,17 @@ with the shape and type from the 1 input.
**Inputs**
* **1**: `init_value` - input tensor with constant values of any supported type. **Required.**
* **1**: ``init_value`` - input tensor with constant values of any supported type. **Required.**
**Outputs**
* **1**: tensor with the same shape and type as `init_value`.
* **1**: tensor with the same shape and type as ``init_value``.
**Example**
```xml
<layer ... type="ReadValue" ...>
.. code-block:: cpp
<layer ... type="ReadValue" ...>
<data variable_id="lstm_state_1"/>
<input>
<port id="0">
@ -52,5 +55,7 @@ with the shape and type from the 1 input.
<dim>224</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
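A toy sketch of the variable semantics described above (plain Python, not the OpenVINO API): the first read falls back to the init input, and an *Assign* makes the state visible to later reads.

```python
variables = {}  # variable_id -> stored value

def read_value(variable_id, init_value):
    # Returns the stored value if the variable was set and not reset;
    # otherwise initializes the variable from init_value and returns it.
    if variable_id not in variables:
        variables[variable_id] = init_value
    return variables[variable_id]

def assign(variable_id, value):
    variables[variable_id] = value

print(read_value("lstm_state_1", 0.0))  # 0.0  (falls back to init_value)
assign("lstm_state_1", 42.0)
print(read_value("lstm_state_1", 0.0))  # 42.0 (previously assigned state)
```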

View File

@ -1,5 +1,7 @@
# Result {#openvino_docs_ops_infrastructure_Result_1}
@sphinxdirective
**Versioned name**: *Result-1*
**Category**: *Infrastructure*
@ -8,7 +10,7 @@
**Attributes**:
No attributes available.
No attributes available.
**Inputs**
@ -20,8 +22,9 @@
**Example**
```xml
<layer ... type="Result" ...>
.. code-block:: cpp
<layer ... type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
@ -30,5 +33,8 @@
<dim>224</dim>
</port>
</input>
</layer>
```
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# ReverseSequence {#openvino_docs_ops_movement_ReverseSequence_1}
@sphinxdirective
**Versioned name**: *ReverseSequence-1*
**Category**: *Data movement*
@ -8,34 +10,34 @@
**Detailed description**
*ReverseSequence* slices a given input tensor `data` along the dimension specified in the *batch_axis* attribute. For each slice `i`, it reverses the first `seq_lengths[i]` elements along the dimension specified in the *seq_axis* attribute.
*ReverseSequence* slices a given input tensor ``data`` along the dimension specified in the *batch_axis* attribute. For each slice ``i``, it reverses the first ``seq_lengths[i]`` elements along the dimension specified in the *seq_axis* attribute.
**Attributes**
* *batch_axis*
* **Description**: *batch_axis* is the index of the batch dimension along which `data` input tensor is sliced.
* **Range of values**: an integer within the range `[-rank(data), rank(data) - 1]`
* **Type**: `int`
* **Default value**: `0`
* **Description**: *batch_axis* is the index of the batch dimension along which ``data`` input tensor is sliced.
* **Range of values**: an integer within the range ``[-rank(data), rank(data) - 1]``
* **Type**: ``int``
* **Default value**: ``0``
* **Required**: *no*
* *seq_axis*
* **Description**: *seq_axis* is the index of the sequence dimension along which elements of `data` input tensor are reversed.
* **Range of values**: an integer within the range `[-rank(data), rank(data) - 1]`
* **Type**: `int`
* **Default value**: `1`
* **Description**: *seq_axis* is the index of the sequence dimension along which elements of ``data`` input tensor are reversed.
* **Range of values**: an integer within the range ``[-rank(data), rank(data) - 1]``
* **Type**: ``int``
* **Default value**: ``1``
* **Required**: *no*
**Inputs**
* **1**: `data` - Input data to reverse. A tensor of type *T1* and rank greater or equal to 2. **Required.**
* **2**: `seq_lengths` - Sequence lengths to reverse in the input tensor `data`. A 1D tensor comprising `data_shape[batch_axis]` elements of type *T2*. All element values must be integer values within the range `[1, data_shape[seq_axis]]`. Value `1` means that no elements are reversed. **Required.**
* **1**: ``data`` - Input data to reverse. A tensor of type *T1* and rank greater or equal to 2. **Required.**
* **2**: ``seq_lengths`` - Sequence lengths to reverse in the input tensor ``data``. A 1D tensor comprising ``data_shape[batch_axis]`` elements of type *T2*. All element values must be integer values within the range ``[1, data_shape[seq_axis]]``. Value ``1`` means that no elements are reversed. **Required.**
**Outputs**
* **1**: The result of slicing and reversing the `data` input tensor. A tensor of type *T1* and the same shape as the `data` input tensor.
* **1**: The result of slicing and reversing the ``data`` input tensor. A tensor of type *T1* and the same shape as the ``data`` input tensor.
**Types**
@ -44,18 +46,19 @@
**Example**
```xml
<layer ... type="ReverseSequence">
.. code-block:: cpp
<layer ... type="ReverseSequence">
<data batch_axis="0" seq_axis="1"/>
<input>
<port id="0"> <!-- data -->
<dim>4</dim> <!-- batch_axis -->
<dim>10</dim> <!-- seq_axis -->
<port id="0"> < !-- data -->
<dim>4</dim> < !-- batch_axis -->
<dim>10</dim> < !-- seq_axis -->
<dim>100</dim>
<dim>200</dim>
</port>
<port id="1">
<dim>4</dim> <!-- seq_lengths value: [2, 4, 8, 10] -->
<dim>4</dim> < !-- seq_lengths value: [2, 4, 8, 10] -->
</port>
</input>
<output>
@ -66,5 +69,8 @@
<dim>200</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
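A NumPy sketch of the slicing-and-reversing rule above, for batch_axis=0 and seq_axis=1 (illustrative only):

```python
import numpy as np

def reverse_sequence(data: np.ndarray, seq_lengths) -> np.ndarray:
    out = data.copy()
    for i, n in enumerate(seq_lengths):
        out[i, :n] = data[i, :n][::-1]   # reverse the first n elements of slice i
    return out

data = np.arange(12).reshape(3, 4)
print(reverse_sequence(data, [2, 4, 1]))
# [[ 1  0  2  3]    <- first 2 elements reversed
#  [ 7  6  5  4]    <- all 4 elements reversed
#  [ 8  9 10 11]]   <- seq_length 1: nothing moves
```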

View File

@ -1,5 +1,7 @@
# Reverse {#openvino_docs_ops_movement_Reverse_1}
@sphinxdirective
**Versioned name**: *Reverse-1*
**Category**: *Data movement*
@ -8,30 +10,30 @@
**Detailed description**: *Reverse* produces a tensor with the same shape as the first input tensor and with elements reversed along dimensions specified in the second input tensor. The axes can be represented either by dimension indices or as a mask. The interpretation of the second input is determined by *mode* attribute.
If `index` mode is used, the second tensor should contain indices of axes that should be reversed. The length of the second tensor should be in a range from 0 to rank of the 1st input tensor.
If ``index`` mode is used, the second tensor should contain indices of axes that should be reversed. The length of the second tensor should be in a range from 0 to rank of the 1st input tensor.
If `mask` mode is used, the second input tensor length should be equal to the rank of the 1st input, and each value is a boolean `true` or `false`. `true` means the corresponding axis should be reversed, `false` means it should be left untouched.
If ``mask`` mode is used, the second input tensor length should be equal to the rank of the 1st input, and each value is a boolean ``true`` or ``false``. ``true`` means the corresponding axis should be reversed, ``false`` means it should be left untouched.
If no axis is specified, that is, the second input is empty in `index` mode or contains only `false` elements in `mask` mode, *Reverse* just passes the source tensor through to the output without any data movement.
If no axis is specified, that is, the second input is empty in ``index`` mode or contains only ``false`` elements in ``mask`` mode, *Reverse* just passes the source tensor through to the output without any data movement.
**Attributes**
* *mode*
* **Description**: specifies how the second input tensor should be interpreted: as a set of indices or a mask
* **Range of values**: `index`, `mask`
* **Type**: `string`
* **Range of values**: ``index``, ``mask``
* **Type**: ``string``
* **Required**: *yes*
**Inputs**:
* **1**: `data` the tensor of type *T1* with input data to reverse. **Required.**
* **1**: ``data`` the tensor of type *T1* with input data to reverse. **Required.**
* **2**: `axis` 1D tensor of type *T2* populated with indices of reversed axes if *mode* attribute is set to `index`, otherwise 1D tensor of type *T3* and with a length equal to the rank of `data` input that specifies a mask for reversed axes.
* **2**: ``axis`` 1D tensor of type *T2* populated with indices of reversed axes if *mode* attribute is set to ``index``, otherwise 1D tensor of type *T3* and with a length equal to the rank of ``data`` input that specifies a mask for reversed axes.
**Outputs**:
* **1**: output reversed tensor with shape and type equal to `data` tensor.
* **1**: output reversed tensor with shape and type equal to ``data`` tensor.
**Types**
@ -41,8 +43,9 @@ If no axis specified, that means either the second input is empty if `index` mod
**Example**
```xml
<layer ... type="Reverse">
.. code-block:: cpp
<layer ... type="Reverse">
<data mode="index"/>
<input>
<port id="0">
@ -52,7 +55,7 @@ If no axis specified, that means either the second input is empty if `index` mod
<dim>200</dim>
</port>
<port id="1">
<dim>1</dim> <!-- reverting along single axis -->
<dim>1</dim> < !-- reverting along single axis -->
</port>
</input>
<output>
@ -63,5 +66,8 @@ If no axis specified, that means either the second input is empty if `index` mod
<dim>200</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
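The two axis interpretations above map naturally onto numpy.flip (a sketch, not the implementation):

```python
import numpy as np

data = np.arange(6).reshape(2, 3)

# mode="index": the second input lists the axes to reverse
print(np.flip(data, axis=(1,)))

# mode="mask": one boolean per axis; True marks an axis to reverse
mask = np.array([False, True])
print(np.flip(data, axis=tuple(np.nonzero(mask)[0])))
```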

View File

@ -1,5 +1,7 @@
# Roll {#openvino_docs_ops_movement_Roll_7}
@sphinxdirective
**Versioned name**: *Roll-7*
**Category**: *Data movement*
@ -8,44 +10,47 @@
**Detailed description**: *Roll* produces a tensor with the same shape as the first input tensor and with elements shifted along dimensions specified in the *axes* tensor. The shift size is specified in the *shift* input tensor. Elements that are shifted beyond the last position will be added in the same order starting from the first position.
Example 1. *Roll* output with `shift` = 1, `axes` = 0:
Example 1. *Roll* output with ``shift`` = 1, ``axes`` = 0:
```
data = [[ 1, 2, 3],
.. code-block:: cpp
data = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9],
[10, 11, 12]]
output = [[10, 11, 12],
output = [[10, 11, 12],
[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9]]
```
Example 2. *Roll* output with `shift` = [-1, 2], `axes` = [0, 1]:
```
data = [[ 1, 2, 3],
Example 2. *Roll* output with ``shift`` = [-1, 2], ``axes`` = [0, 1]:
.. code-block:: cpp
data = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9],
[10, 11, 12]]
output = [[ 5, 6, 4],
output = [[ 5, 6, 4],
[ 8, 9, 7],
[11, 12, 10],
[ 2, 3, 1]]
```
Example 3. *Roll* output with `shift` = [1, 2, 1], `axes` = [0, 1, 0]:
```
data = [[ 1, 2, 3],
Example 3. *Roll* output with ``shift`` = [1, 2, 1], ``axes`` = [0, 1, 0]:
.. code-block:: cpp
data = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9],
[10, 11, 12]]
output = [[ 8, 9, 7],
output = [[ 8, 9, 7],
[11, 12, 10],
[ 2, 3, 1],
[ 5, 6, 4]]
```
**Attributes**
@ -53,29 +58,30 @@ No attributes available.
**Inputs**:
* **1**: `data` a tensor of type *T*. **Required.**
* **1**: ``data`` a tensor of type *T*. **Required.**
* **2**: a `shift` scalar or 1D tensor of type *T_IND_1*. Specifies the number of places by which the elements of the `data` tensor are shifted. If `shift` is a scalar, each dimension specified in the `axes` tensor is rolled by the same `shift` value. If `shift` is a 1D tensor, `axes` must be a 1D tensor of the same size, and each dimension from the `axes` tensor is rolled by the corresponding value from the `shift` tensor. If the value of `shift` is positive, elements are shifted positively (towards larger indices). Otherwise, elements are shifted negatively (towards smaller indices). **Required.**
* **2**: a ``shift`` scalar or 1D tensor of type *T_IND_1*. Specifies the number of places by which the elements of the ``data`` tensor are shifted. If ``shift`` is a scalar, each dimension specified in the ``axes`` tensor is rolled by the same ``shift`` value. If ``shift`` is a 1D tensor, ``axes`` must be a 1D tensor of the same size, and each dimension from the ``axes`` tensor is rolled by the corresponding value from the ``shift`` tensor. If the value of ``shift`` is positive, elements are shifted positively (towards larger indices). Otherwise, elements are shifted negatively (towards smaller indices). **Required.**
* **3**: `axes` a scalar or 1D tensor of type *T_IND_2*. Specifies axes along which elements are shifted. If the same axis is referenced more than once, the total shift for that axis is the sum of all the shifts that belong to that axis. If an `axes` value is negative, the axis index is calculated using the formula: `N_dims + axis`, where `N_dims` is the total number of dimensions in the `data` tensor and `axis` is the negative axis index from the `axes` tensor. **Required.**
* **3**: ``axes`` a scalar or 1D tensor of type *T_IND_2*. Specifies axes along which elements are shifted. If the same axis is referenced more than once, the total shift for that axis is the sum of all the shifts that belong to that axis. If an ``axes`` value is negative, the axis index is calculated using the formula: ``N_dims + axis``, where ``N_dims`` is the total number of dimensions in the ``data`` tensor and ``axis`` is the negative axis index from the ``axes`` tensor. **Required.**
**Outputs**:
* **1**: output tensor with shape and type equal to the `data` tensor.
* **1**: output tensor with shape and type equal to the ``data`` tensor.
**Types**
* *T*: any supported type.
* *T_IND_1*: `int32` or `int64`.
* *T_IND_2*: `int32` or `int64`.
* *T_IND_1*: ``int32`` or ``int64``.
* *T_IND_2*: ``int32`` or ``int64``.
**Example**
*Example 1: "shift" and "axes" are 1D tensors.*
```xml
<layer ... type="Roll">
.. code-block:: cpp
<layer ... type="Roll">
<input>
<port id="0">
<dim>3</dim>
@ -87,7 +93,7 @@ No attributes available.
<dim>2</dim>
</port>
<port id="2">
<dim>2</dim> <!-- shifting along specified axes with the corresponding shift values -->
</port>
</input>
<output>
@ -98,13 +104,14 @@ No attributes available.
<dim>200</dim>
</port>
</output>
</layer>
```
</layer>
*Example 2: "shift" value is a scalar and multiple axes are specified.*
```xml
<layer ... type="Roll">
.. code-block:: cpp
<layer ... type="Roll">
<input>
<port id="0">
<dim>3</dim>
@ -116,7 +123,7 @@ No attributes available.
<dim>1</dim>
</port>
<port id="2">
<dim>2</dim> <!-- shifting along specified axes with the same shift value -->
</port>
</input>
<output>
@ -127,5 +134,7 @@ No attributes available.
<dim>200</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,45 @@
# ReduceL1 {#openvino_docs_ops_reduction_ReduceL1_4}
@sphinxdirective
**Versioned name**: *ReduceL1-4*
**Category**: *Reduction*
**Short description**: *ReduceL1* operation performs the reduction with finding the L1 norm (sum of absolute values) on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceL1* operation performs the reduction with finding the L1 norm (sum of absolute values) on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceL1* operation performs the reduction with finding the L1 norm (sum of absolute values) on a given input `data` along dimensions specified by `axes` input.
Each element in the output is calculated as follows:
*ReduceL1* operation performs the reduction with finding the L1 norm (sum of absolute values) on a given input ``data`` along dimensions specified by ``axes`` input. Each element in the output is calculated as follows:
`output[i0, i1, ..., iN] = L1[j0, ..., jN](x[j0, ..., jN]))`
``output[i0, i1, ..., iN] = L1[j0, ..., jN](x[j0, ..., jN])``
where indices i0, ..., iN run through all valid indices for input `data`, and finding the L1 norm `L1[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and finding the L1 norm ``L1[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceL1* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceL1* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
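As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
# L1 norm over axes [1, 2]; keep_dims=true keeps the reduced axes with size 1
out = np.sum(np.abs(x), axis=(1, 2), keepdims=True)  # shape: (2, 1, 1)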
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceL1* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceL1* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +48,10 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceL1" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL1" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +61,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +72,13 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL1" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL1" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +88,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +97,13 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL1" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL1" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,13 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL1" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL1" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +139,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +149,6 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,46 @@
# ReduceL2 {#openvino_docs_ops_reduction_ReduceL2_4}
@sphinxdirective
**Versioned name**: *ReduceL2-4*
**Category**: *Reduction*
**Short description**: *ReduceL2* operation performs the reduction with finding the L2 norm (square root of sum of squares) on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceL2* operation performs the reduction with finding the L2 norm (square root of sum of squares) on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceL2* operation performs the reduction with finding the L2 norm (square root of sum of squares) on a given input `data` along dimensions specified by `axes` input.
*ReduceL2* operation performs the reduction with finding the L2 norm (square root of sum of squares) on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
`output[i0, i1, ..., iN] = L2[j0, ..., jN](x[j0, ..., jN]))`
``output[i0, i1, ..., iN] = L2[j0, ..., jN](x[j0, ..., jN])``
where indices i0, ..., iN run through all valid indices for input `data`, and finding the L2 norm `L2[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and finding the L2 norm ``L2[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceL2* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceL2* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
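As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
# L2 norm over axes [1, 2]; keep_dims=false removes the reduced axes
out = np.sqrt(np.sum(x ** 2, axis=(1, 2), keepdims=False))  # shape: (2,)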
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceL2* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceL2* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +49,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceL2" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL2" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +61,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +72,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL2" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL2" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +87,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +96,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL2" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL2" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +111,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +121,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceL2" ...>
.. code-block:: cpp
<layer id="1" type="ReduceL2" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +136,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +146,6 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,54 +1,59 @@
# ReduceLogicalAnd {#openvino_docs_ops_reduction_ReduceLogicalAnd_1}
@sphinxdirective
**Versioned name**: *ReduceLogicalAnd-1*
**Category**: *Reduction*
**Short description**: *ReduceLogicalAnd* operation performs the reduction with *logical and* operation on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceLogicalAnd* operation performs the reduction with *logical and* operation on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceLogicalAnd* operation performs the reduction with *logical and* operation on a given input `data` along dimensions specified by `axes` input.
*ReduceLogicalAnd* operation performs the reduction with *logical and* operation on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = and[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and *logical and* operation `and[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and *logical and* operation ``and[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceLogicalAnd* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceLogicalAnd* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
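As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.array([[True, True], [True, False]])
# logical and over axis 1; keep_dims=true keeps the reduced axis with size 1
out = np.all(x, axis=(1,), keepdims=True)  # [[True], [False]]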
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T_BOOL* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T_BOOL* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceLogicalAnd* function applied to `data` input tensor. A tensor of type *T_BOOL* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceLogicalAnd* function applied to ``data`` input tensor. A tensor of type *T_BOOL* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
* *T_BOOL*: `boolean`.
* *T_BOOL*: ``boolean``.
* *T_IND*: any supported integer type.
**Examples**
```xml
<layer id="1" type="ReduceLogicalAnd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalAnd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalAnd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalAnd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,6 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,54 +1,59 @@
# ReduceLogicalOr {#openvino_docs_ops_reduction_ReduceLogicalOr_1}
@sphinxdirective
**Versioned name**: *ReduceLogicalOr-1*
**Category**: *Reduction*
**Short description**: *ReduceLogicalOr* operation performs the reduction with *logical or* operation on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceLogicalOr* operation performs the reduction with *logical or* operation on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceLogicalOr* operation performs the reduction with *logical or* operation on a given input `data` along dimensions specified by `axes` input.
*ReduceLogicalOr* operation performs the reduction with *logical or* operation on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = or[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and *logical or* operation `or[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and *logical or* operation ``or[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceLogicalOr* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceLogicalOr* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
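As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.array([[False, False], [True, False]])
# logical or over axis 1; keep_dims=true keeps the reduced axis with size 1
out = np.any(x, axis=(1,), keepdims=True)  # [[False], [True]]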
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T_BOOL* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T_BOOL* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceLogicalOr* function applied to `data` input tensor. A tensor of type *T_BOOL* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceLogicalOr* function applied to ``data`` input tensor. A tensor of type *T_BOOL* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
* *T_BOOL*: `boolean`.
* *T_BOOL*: ``boolean``.
* *T_IND*: any supported integer type.
**Examples**
```xml
<layer id="1" type="ReduceLogicalOr" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalOr" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,11 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalOr" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +112,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +122,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceLogicalOr" ...>
.. code-block:: cpp
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +137,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +147,6 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,48 @@
# ReduceMax {#openvino_docs_ops_reduction_ReduceMax_1}
@sphinxdirective
**Versioned name**: *ReduceMax-1*
**Category**: *Reduction*
**Short description**: *ReduceMax* operation performs the reduction with finding the maximum value on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceMax* operation performs the reduction with finding the maximum value on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceMax* operation performs the reduction with finding the maximum value on a given input `data` along dimensions specified by `axes` input.
*ReduceMax* operation performs the reduction with finding the maximum value on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = max[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and finding the maximum value `max[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and finding the maximum value ``max[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceMax* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceMax* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
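As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
# maximum over the last axis, given as the negative index -1 (= rank - 1)
out = np.max(x, axis=(-1,), keepdims=False)  # shape: (2, 3)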
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceMax* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceMax* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +51,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceMax" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMax" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMax" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMax" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMax" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMax" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMax" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMax" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,7 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,48 @@
# ReduceMean {#openvino_docs_ops_reduction_ReduceMean_1}
@sphinxdirective
**Versioned name**: *ReduceMean-1*
**Category**: *Reduction*
**Short description**: *ReduceMean* operation performs the reduction with finding the arithmetic mean on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceMean* operation performs the reduction with finding the arithmetic mean on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceMean* operation performs the reduction with finding the arithmetic mean on a given input `data` along dimensions specified by `axes` input.
*ReduceMean* operation performs the reduction with finding the arithmetic mean on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = mean[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and finding the arithmetic mean `mean[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and finding the arithmetic mean ``mean[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceMean* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceMean* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
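As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
# arithmetic mean over axes [0, 2]; keep_dims=true keeps the reduced axes
out = np.mean(x, axis=(0, 2), keepdims=True)  # shape: (1, 3, 1)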
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceMean* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceMean* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +51,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceMean" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMean" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMean" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMean" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,13 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMean" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMean" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +114,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +124,11 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMean" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMean" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,8 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,48 @@
# ReduceMin {#openvino_docs_ops_reduction_ReduceMin_1}
@sphinxdirective
**Versioned name**: *ReduceMin-1*
**Category**: *Reduction*
**Short description**: *ReduceMin* operation performs the reduction with finding the minimum value on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceMin* operation performs the reduction with finding the minimum value on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceMin* operation performs the reduction with finding the minimum value on a given input `data` along dimensions specified by `axes` input.
*ReduceMin* operation performs the reduction with finding the minimum value on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = min[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and finding the minimum value `min[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and finding the minimum value ``min[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceMin* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceMin* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
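As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
# minimum over axis 1; keep_dims=false removes the reduced axis
out = np.min(x, axis=(1,), keepdims=False)  # shape: (2, 4)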
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceMin* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceMin* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +51,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceMin" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMin" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMin" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMin" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMin" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMin" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceMin" ...>
.. code-block:: cpp
<layer id="1" type="ReduceMin" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,8 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,48 @@
# ReduceProd {#openvino_docs_ops_reduction_ReduceProd_1}
@sphinxdirective
**Versioned name**: *ReduceProd-1*
**Category**: *Reduction*
**Short description**: *ReduceProd* operation performs the reduction with multiplication on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceProd* operation performs the reduction with multiplication on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceProd* operation performs the reduction with multiplication on a given input `data` along dimensions specified by `axes` input.
*ReduceProd* operation performs the reduction with multiplication on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = prod[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and multiplication `prod[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and multiplication ``prod[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceProd* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceProd* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
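As an illustration, the equivalent computation in NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(1, 25, dtype=np.float64).reshape(2, 3, 4)
# product over axis 2; keep_dims=true keeps the reduced axis with size 1
out = np.prod(x, axis=(2,), keepdims=True)  # shape: (2, 3, 1)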
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceProd* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceProd* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +51,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceProd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceProd" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceProd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceProd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceProd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceProd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceProd" ...>
.. code-block:: cpp
<layer id="1" type="ReduceProd" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,8 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,44 +1,48 @@
# ReduceSum {#openvino_docs_ops_reduction_ReduceSum_1}
@sphinxdirective
**Versioned name**: *ReduceSum-1*
**Category**: *Reduction*
**Short description**: *ReduceSum* operation performs the reduction with addition on a given input `data` along dimensions specified by `axes` input.
**Short description**: *ReduceSum* operation performs the reduction with addition on a given input ``data`` along dimensions specified by ``axes`` input.
**Detailed Description**
*ReduceSum* operation performs the reduction with addition on a given input `data` along dimensions specified by `axes` input.
*ReduceSum* operation performs the reduction with addition on a given input ``data`` along dimensions specified by ``axes`` input.
Each element in the output is calculated as follows:
.. code-block:: cpp
output[i0, i1, ..., iN] = sum[j0, ..., jN](x[j0, ..., jN])
where indices i0, ..., iN run through all valid indices for input `data`, and summation `sum[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by `axes` input.
where indices i0, ..., iN run through all valid indices for input ``data``, and summation ``sum[j0, ..., jN]`` has ``jk = ik`` for those dimensions ``k`` that are not in the set of indices specified by ``axes`` input.
Particular cases:
1. If `axes` is an empty list, *ReduceSum* corresponds to the identity operation.
2. If `axes` contains all dimensions of input `data`, a single reduction value is calculated for the entire input tensor.
1. If ``axes`` is an empty list, *ReduceSum* corresponds to the identity operation.
2. If ``axes`` contains all dimensions of input ``data``, a single reduction value is calculated for the entire input tensor.
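Both particular cases are mirrored by NumPy (a sketch only; NumPy is not part of this specification):
.. code-block:: python
import numpy as np
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
print(np.sum(x, axis=()))         # empty axes list: identity (case 1)
print(np.sum(x, axis=(0, 1, 2)))  # all axes: a single reduction value (case 2)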
**Attributes**
* *keep_dims*
* **Description**: If set to `true`, it holds axes that are used for the reduction. For each such axis, the output dimension is equal to 1.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: If set to ``true``, the axes used for the reduction are kept in the output shape. For each such axis, the output dimension is equal to 1.
* **Range of values**: ``true`` or ``false``
* **Type**: ``boolean``
* **Default value**: ``false``
* **Required**: *no*
**Inputs**
* **1**: `data` - A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - Axis indices of `data` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is `[-r, r-1]`, where `r` is the rank of `data` input tensor. **Required.**
* **2**: ``axes`` - Axis indices of ``data`` input tensor, along which the reduction is performed. A scalar or 1D tensor of unique elements and type *T_IND*. The range of elements is ``[-r, r-1]``, where ``r`` is the rank of ``data`` input tensor. **Required.**
**Outputs**
* **1**: The result of *ReduceSum* function applied to `data` input tensor. A tensor of type *T* and `shape[i] = shapeOf(data)[i]` for all `i` dimensions not in `axes` input tensor. For dimensions in `axes`, `shape[i] == 1` if `keep_dims == true`; otherwise, the `i`-th dimension is removed from the output.
* **1**: The result of *ReduceSum* function applied to ``data`` input tensor. A tensor of type *T* and ``shape[i] = shapeOf(data)[i]`` for all ``i`` dimensions not in ``axes`` input tensor. For dimensions in ``axes``, ``shape[i] == 1`` if ``keep_dims == true``; otherwise, the ``i``-th dimension is removed from the output.
**Types**
@ -47,8 +51,9 @@ Particular cases:
**Examples**
```xml
<layer id="1" type="ReduceSum" ...>
.. code-block:: cpp
<layer id="1" type="ReduceSum" ...>
<data keep_dims="true" />
<input>
<port id="0">
@ -58,7 +63,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -69,11 +74,12 @@ Particular cases:
<dim>1</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceSum" ...>
.. code-block:: cpp
<layer id="1" type="ReduceSum" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -83,7 +89,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
</port>
</input>
<output>
@ -92,11 +98,12 @@ Particular cases:
<dim>12</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceSum" ...>
.. code-block:: cpp
<layer id="1" type="ReduceSum" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -106,7 +113,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
</port>
</input>
<output>
@ -116,11 +123,12 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
```xml
<layer id="1" type="ReduceSum" ...>
.. code-block:: cpp
<layer id="1" type="ReduceSum" ...>
<data keep_dims="false" />
<input>
<port id="0">
@ -130,7 +138,7 @@ Particular cases:
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
</port>
</input>
<output>
@ -140,5 +148,7 @@ Particular cases:
<dim>24</dim>
</port>
</output>
</layer>
```
</layer>
@endsphinxdirective
@ -1,22 +1,25 @@
# RNNCell {#openvino_docs_ops_sequence_RNNCell_3}
@sphinxdirective
**Versioned name**: *RNNCell-3*
**Category**: *Sequence processing*
**Short description**: *RNNCell* represents a single RNN cell that computes the output using the formula described in the [article](https://hackernoon.com/understanding-architecture-of-lstm-cell-from-scratch-with-code-8da40f0b71f4).
**Short description**: *RNNCell* represents a single RNN cell that computes the output using the formula described in the `article <https://hackernoon.com/understanding-architecture-of-lstm-cell-from-scratch-with-code-8da40f0b71f4>`__.
**Detailed description**:
*RNNCell* represents a single RNN cell and is part of [RNNSequence](RNNSequence_5.md) operation.
*RNNCell* represents a single RNN cell and is part of :doc:`RNNSequence <openvino_docs_ops_sequence_RNNSequence_5>` operation.
```
Formula:
.. code-block:: cpp
Formula:
* - matrix multiplication
^T - matrix transpose
f - activation function
Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)
```
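A minimal NumPy sketch of this formula (illustrative only; the function name is hypothetical and ``np.tanh`` stands in for the configured activation ``f``):
.. code-block:: python
import numpy as np
def rnn_cell_step(X, H, W, R, B):
    # X: [batch_size, input_size], H: [batch_size, hidden_size]
    # W: [hidden_size, input_size], R: [hidden_size, hidden_size], B: [hidden_size]
    return np.tanh(X @ W.T + H @ R.T + B)  # Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)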
**Attributes**
@ -24,7 +27,7 @@ Formula:
* **Description**: *hidden_size* specifies hidden state size.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *activations*
@ -39,7 +42,7 @@ Formula:
* **Description**: *activations_alpha, activations_beta* functions attributes
* **Range of values**: a list of floating-point numbers
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *no*
@ -47,33 +50,35 @@ Formula:
* **Description**: *clip* specifies value for tensor clipping to be in *[-C, C]* before activations
* **Range of values**: a positive floating-point number
* **Type**: ``float``
* **Default value**: *infinity*, which means that clipping is not applied
* **Required**: *no*
**Inputs**
* **1**: ``X`` - 2D tensor of type *T* ``[batch_size, input_size]``, input data. **Required.**
* **2**: ``H`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, initial hidden state. **Required.**
* **3**: ``W`` - 2D tensor of type *T* ``[hidden_size, input_size]``, the weights for matrix multiplication. **Required.**
* **4**: ``R`` - 2D tensor of type *T* ``[hidden_size, hidden_size]``, the recurrence weights for matrix multiplication. **Required.**
* **5**: ``B`` - 1D tensor of type *T* ``[hidden_size]``, the sum of biases (weights and recurrence weights). **Required.**
**Outputs**
* **1**: ``Ho`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, the last output value of the hidden state.
**Types**
* *T*: any supported floating-point type.
**Example**
.. code-block:: cpp
<layer ... type="RNNCell" ...>
<data hidden_size="128"/>
<input>
<port id="0">
@ -102,5 +107,6 @@ Formula:
<dim>128</dim>
</port>
</output>
</layer>
@endsphinxdirective
View File
@ -1,14 +1,16 @@
# RNNSequence {#openvino_docs_ops_sequence_RNNSequence_5}
@sphinxdirective
**Versioned name**: *RNNSequence-5*
**Category**: *Sequence processing*
**Short description**: *RNNSequence* operation represents a series of RNN cells. Each cell is implemented as an `RNNCell <#RNNCell>`__ operation.
**Detailed description**
A single cell in the sequence is implemented in the same way as in the `RNNCell <#RNNCell>`__ operation. *RNNSequence* represents a sequence of RNN cells. The sequence can be connected differently depending on the ``direction`` attribute, which specifies the direction of traversing the input data along the sequence dimension, or specifies whether the sequence should be bidirectional. Most of the attributes are in sync with the specification of the ONNX RNN operator defined `here <https://github.com/onnx/onnx/blob/master/docs/Operators.md#rnn>`__.
**Attributes**
@ -17,7 +19,7 @@ A single cell in the sequence is implemented in the same way as in <a href="#RNN
* **Description**: *hidden_size* specifies hidden state size.
* **Range of values**: a positive integer
* **Type**: ``int``
* **Required**: *yes*
* *activations*
@ -32,7 +34,7 @@ A single cell in the sequence is implemented in the same way as in <a href="#RNN
* **Description**: *activations_alpha, activations_beta* attributes of functions; applicability and meaning of these attributes depends on chosen activation functions
* **Range of values**: a list of floating-point numbers
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *no*
@ -40,36 +42,36 @@ A single cell in the sequence is implemented in the same way as in <a href="#RNN
* **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. Clipping is performed before activations.
* **Range of values**: a positive floating-point number
* **Type**: ``float``
* **Default value**: *infinity*, which means that clipping is not applied
* **Required**: *no*
* *direction*
* **Description**: Specify if the RNN is forward, reverse, or bidirectional. If it is one of *forward* or *reverse*, then ``num_directions = 1``. If it is *bidirectional*, then ``num_directions = 2``. This ``num_directions`` value specifies input/output shape requirements. When the operation is bidirectional, the input goes through forward and reverse ways. The outputs are concatenated.
* **Range of values**: *forward*, *reverse*, *bidirectional*
* **Type**: ``string``
* **Required**: *yes*
**Inputs**
* **1**: ``X`` - 3D tensor of type *T1* ``[batch_size, seq_length, input_size]``, input data. It differs from RNNCell 1st input only by additional axis with size ``seq_length``. **Required.**
* **2**: ``H`` - 3D tensor of type *T1* ``[batch_size, num_directions, hidden_size]``, input hidden state data. **Required.**
* **3**: ``sequence_lengths`` - 1D tensor of type *T2* ``[batch_size]``, specifies real sequence lengths for each batch element. In case of negative values in this input, the operation behavior is undefined. **Required.**
* **4**: ``W`` - 3D tensor of type *T1* ``[num_directions, hidden_size, input_size]``, the weights for matrix multiplication. **Required.**
* **5**: ``R`` - 3D tensor of type *T1* ``[num_directions, hidden_size, hidden_size]``, the recurrence weights for matrix multiplication. **Required.**
* **6**: ``B`` - 2D tensor of type *T1* ``[num_directions, hidden_size]``, the sum of biases (weights and recurrence weights). **Required.**
**Outputs**
* **1**: ``Y`` - 4D tensor of type *T1* ``[batch_size, num_directions, seq_len, hidden_size]``, concatenation of all the intermediate output values of the hidden state.
* **2**: ``Ho`` - 3D tensor of type *T1* ``[batch_size, num_directions, hidden_size]``, the last output value of the hidden state.
**Types**
@ -77,8 +79,10 @@ A single cell in the sequence is implemented in the same way as in <a href="#RNN
* *T2*: any supported integer type.
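Before the XML example, a small orientation sketch (hypothetical sizes) of the shape contract for the *bidirectional* case:

.. code-block:: python

   batch_size, seq_length, input_size, hidden_size = 4, 10, 16, 128
   num_directions = 2  # direction="bidirectional"; "forward" or "reverse" would give 1

   X_shape  = (batch_size, seq_length, input_size)        # input 1
   H_shape  = (batch_size, num_directions, hidden_size)   # input 2
   W_shape  = (num_directions, hidden_size, input_size)   # input 4
   R_shape  = (num_directions, hidden_size, hidden_size)  # input 5
   B_shape  = (num_directions, hidden_size)               # input 6

   Y_shape  = (batch_size, num_directions, seq_length, hidden_size)  # output 1
   Ho_shape = (batch_size, num_directions, hidden_size)              # output 2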
**Example**
.. code-block:: cpp
<layer ... type="RNNSequence" ...>
<data hidden_size="128"/>
<input>
<port id="0">
@ -122,5 +126,7 @@ A single cell in the sequence is implemented in the same way as in <a href="#RNN
<dim>128</dim>
</port>
</output>
</layer>
@endsphinxdirective
View File
@ -1,5 +1,7 @@
# Reshape {#openvino_docs_ops_shape_Reshape_1}
@sphinxdirective
**Versioned name**: *Reshape-1*
**Category**: *Shape manipulation*
@ -8,30 +10,31 @@
**Detailed description**:
*Reshape* takes two input tensors: ``data`` to be resized and ``shape`` of the new output. The values in ``shape`` can be ``-1``, ``0``, or any positive integer. The two special values ``-1`` and ``0`` mean the following:
* ``0`` means "copy the respective dimension *(left aligned)* of the input tensor" if ``special_zero`` is set to ``true``; otherwise it is a normal dimension and is applicable to empty tensors.
* ``-1`` means that this dimension is calculated to keep the overall elements count the same as in the input tensor. Not more than one ``-1`` can be used in a reshape operation.
If ``special_zero`` is set to ``true``, the index of a ``0`` value cannot be larger than the rank of the input tensor.
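The shape inference rule can be sketched in plain Python (the helper name ``infer_output_shape`` is hypothetical, not an OpenVINO API):

.. code-block:: python

   import math

   def infer_output_shape(data_shape, shape, special_zero):
       out = list(shape)
       for i, d in enumerate(out):
           if d == 0 and special_zero:
               out[i] = data_shape[i]  # copy dimension i of the input (left aligned)
       if -1 in out:
           known = math.prod(d for d in out if d != -1)
           out[out.index(-1)] = math.prod(data_shape) // known  # keep element count
       return tuple(out)

   # e.g. a (2, 5, 5, 24) tensor reshaped with [0, -1, 4]: 2*5*5*24 = 1200 elements,
   # so the -1 dimension becomes 1200 / (2*4) = 150 and the result is (2, 150, 4)
   assert infer_output_shape((2, 5, 5, 24), (0, -1, 4), special_zero=True) == (2, 150, 4)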
**Attributes**:
* *special_zero*
* **Description**: *special_zero* controls how zero values in ``shape`` are interpreted. If *special_zero* is ``false``, then ``0`` is interpreted as-is, which means that the output shape will contain a zero dimension at the specified location. Input and output tensors are empty in this case. If *special_zero* is ``true``, then each zero in ``shape`` implies copying the corresponding dimension from ``data.shape`` into the output shape *(left aligned)*.
* **Range of values**: ``false`` or ``true``
* **Type**: ``boolean``
* **Required**: *yes*
**Inputs**:
* **1**: ``data`` a tensor of type *T* and arbitrary shape. **Required.**
* **2**: ``shape`` 1D tensor of type *T_SHAPE* describing output shape. **Required.**
**Outputs**:
* **1**: Output tensor of type *T* with the same content as ``data`` input tensor but with shape defined by ``shape`` input tensor.
**Types**
@ -42,8 +45,10 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
**Examples**
*Example 1: reshape empty tensor*
.. code-block:: cpp
<layer ... type="Reshape" ...>
<data special_zero="false"/>
<input>
<port id="0">
@ -53,7 +58,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>0</dim>
</port>
<port id="1">
<dim>2</dim> <!-- The tensor contains 2 elements: 0, 4 -->
</port>
</input>
<output>
@ -62,12 +67,14 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>4</dim>
</port>
</output>
</layer>
*Example 2: reshape tensor - preserve first dim, calculate second and fix value for third dim*
.. code-block:: cpp
<layer ... type="Reshape" ...>
<data special_zero="true"/>
<input>
<port id="0">
@ -77,7 +84,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>24</dim>
</port>
<port id="1">
<dim>3</dim> <!-- The tensor contains 3 elements: 0, -1, 4 -->
</port>
</input>
<output>
@ -87,12 +94,14 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>4</dim>
</port>
</output>
</layer>
*Example 3: reshape tensor - preserve first two dims, fix value for third dim and calculate fourth*
.. code-block:: cpp
<layer ... type="Reshape" ...>
<data special_zero="true"/>
<input>
<port id="0">
@ -101,7 +110,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>3</dim>
</port>
<port id="1">
<dim>4</dim> <!-- The tensor contains 4 elements: 0, 0, 1, -1 -->
</port>
</input>
<output>
@ -112,12 +121,14 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>3</dim>
</port>
</output>
</layer>
*Example 4: reshape tensor - calculate first dim and preserve second dim*
.. code-block:: cpp
<layer ... type="Reshape" ...>
<data special_zero="true"/>
<input>
<port id="0">
@ -126,7 +137,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>1</dim>
</port>
<port id="1">
<dim>2</dim> <!-- The tensor contains 2 elements: -1, 0 -->
</port>
</input>
<output>
@ -135,12 +146,14 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>1</dim>
</port>
</output>
</layer>
*Example 5: reshape tensor - preserve first dim and calculate second dim*
.. code-block:: cpp
<layer ... type="Reshape" ...>
<data special_zero="true"/>
<input>
<port id="0">
@ -149,7 +162,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>1</dim>
</port>
<port id="1">
<dim>2</dim> <!-- The tensor contains 2 elements: 0, -1 -->
</port>
</input>
<output>
@ -158,5 +171,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
<dim>1</dim>
</port>
</output>
</layer>
@endsphinxdirective
View File
@ -1,5 +1,7 @@
# Discrete Fourier Transformation for real-valued input (RDFT) {#openvino_docs_ops_signals_RDFT_9}
@sphinxdirective
**Versioned name**: *RDFT-9*
**Category**: *Signal processing*
@ -8,56 +10,67 @@
**Attributes**:
No attributes available.
**Inputs**
* **1**: ``data`` - Input tensor of type *T* with data for the RDFT transformation. **Required.**
* **2**: ``axes`` - 1D tensor of type *T_IND* specifying dimension indices where RDFT is applied, and ``axes`` is any unordered list of indices of different dimensions of input tensor, for example, ``[0, 4]``, ``[4, 0]``, ``[4, 2, 1]``, ``[1, 2, 3]``, ``[-3, 0, -2]``. These indices should be integers from ``-r`` to ``r - 1`` inclusively, where ``r = rank(data)``. A negative axis ``a`` is interpreted as an axis ``r + a``. Other dimensions do not change. The order of elements in ``axes`` attribute matters, and is mapped directly to elements in the third input ``signal_size``. **Required.**
* **3**: ``signal_size`` - 1D tensor of type *T_SIZE* describing signal size with respect to axes from the input ``axes``. If ``signal_size[i] == -1``, then RDFT is calculated for the full size of the axis ``axes[i]``. If ``signal_size[i] > data_shape[axes[i]]``, then input data is zero-padded with respect to the axis ``axes[i]`` at the end. Finally, if ``signal_size[i] < data_shape[axes[i]]``, then input data is trimmed with respect to the axis ``axes[i]``; more precisely, the slice ``0: signal_size[i]`` of the axis ``axes[i]`` is considered. This input is optional, with default value ``[data_shape[a] for a in axes]``.
* **NOTE**: If the input ``signal_size`` is specified, the size of ``signal_size`` must be the same as the size of ``axes``.
**Outputs**
* **1**: Resulting tensor with elements of the same type as input ``data`` tensor and with rank ``r + 1``, where ``r = rank(data)``. The shape of the output has the form ``[S_0, S_1, ..., S_{r-1}, 2]``, where all ``S_a`` are calculated as follows:
1. Calculate ``normalized_axes``, where each ``normalized_axes[i] = axes[i]``, if ``axes[i] >= 0``, and ``normalized_axes[i] = axes[i] + r`` otherwise.
2. If ``a not in normalized_axes``, then ``S_a = data_shape[a]``.
3. If ``a in normalized_axes``, then ``a = normalized_axes[i]`` for some ``i``.
+ When ``i != len(normalized_axes) - 1``, ``S_a = data_shape[a]`` if the ``signal_size`` input is not specified or ``signal_size[i] == -1``; otherwise ``S_a = signal_size[i]``.
+ When ``i == len(normalized_axes) - 1``, ``S_a = data_shape[a] // 2 + 1`` if the ``signal_size`` input is not specified or ``signal_size[i] == -1``; otherwise ``S_a = signal_size[i] // 2 + 1``.
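These three steps, as a Python sketch (the helper name ``rdft_output_shape`` is illustrative only, not part of any API):

.. code-block:: python

   def rdft_output_shape(data_shape, axes, signal_size=None):
       r = len(data_shape)
       normalized_axes = [a + r if a < 0 else a for a in axes]  # step 1
       if signal_size is None:
           signal_size = [-1] * len(axes)
       out = list(data_shape)                                   # step 2: untouched axes
       for i, a in enumerate(normalized_axes):                  # step 3
           s = data_shape[a] if signal_size[i] == -1 else signal_size[i]
           out[a] = s // 2 + 1 if i == len(normalized_axes) - 1 else s
       return out + [2]  # trailing dimension holds real and imaginary parts

   # e.g. a [1, 320, 320] input transformed over axes [1, 2]
   assert rdft_output_shape([1, 320, 320], [1, 2]) == [1, 320, 161, 2]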
**Types**
* *T*: any supported floating-point type.
* *T_IND*: ``int64`` or ``int32``.
* *T_SIZE*: ``int64`` or ``int32``.
**Detailed description**: *RDFT* performs the discrete Fourier transformation of real-valued input tensor with respect to specified axes. Calculations are performed according to the following rules.
For simplicity, assume that an input tensor ``A`` has the shape ``[B_0, ..., B_{k-1}, M_0, ..., M_{q-1}]``, ``axes=[k,...,k+q-1]``, and ``signal_size=[S_0,...,S_{q-1}]``.
Let ``D`` be an input tensor ``A``, taking into account the ``signal_size``, and, hence, ``D`` has the shape ``[B_0, ..., B_{k-1}, S_0, ..., S_{q-1}]``.
Next, let
.. math::
X=X[j_0,\dots,j_{k-1},j_k,\dots,j_{k+q-1}]
for all indices ``j_0,...,j_{k+q-1}``, be a real-valued input tensor.
Then the transformation RDFT of the tensor ``X`` is the tensor ``Y`` of the shape ``[B_0, ..., B_{k-1}, S_0, ..., S_{q-2}, S_{q-1} // 2 + 1]``, such that
.. math::
Y[n_0,\dots,n_{k-1},m_0,\dots,m_{q-1}]=\sum\limits_{j_0=0}^{S_0-1}\cdots\sum\limits_{j_{q-1}=0}^{S_{q-1}-1}X[n_0,\dots,n_{k-1},j_0,\dots,j_{q-1}]\exp\left(-2\pi i\sum\limits_{b=0}^{q-1}\frac{m_bj_b}{S_b}\right)
for all indices ``n_0,...,n_{k-1}``, ``m_0,...,m_{q-1}``.
Calculations for the generic case of axes and signal sizes are similar.
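For intuition only, the result corresponds to ``numpy.fft.rfftn`` up to layout (NumPy returns a complex array, while *RDFT* appends a trailing dimension of size 2 holding the real and imaginary parts); a sketch:

.. code-block:: python

   import numpy as np

   x = np.random.rand(1, 320, 320).astype(np.float32)
   y = np.fft.rfftn(x, axes=(1, 2))                     # complex, shape (1, 320, 161)
   y_rdft_layout = np.stack([y.real, y.imag], axis=-1)  # shape (1, 320, 161, 2)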
**Example**:
There is no ``signal_size`` input (3D input tensor):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>1</dim>
@ -65,7 +78,7 @@ There is no `signal_size` input (3D input tensor):
<dim>320</dim>
</port>
<port id="1">
<dim>2</dim> <!-- axes input contains [1, 2] -->
</port>
</input>
<output>
<port id="2">
@ -75,19 +88,21 @@ There is no `signal_size` input (3D input tensor):
<dim>2</dim>
</port>
</output>
</layer>
There is no ``signal_size`` input (2D input tensor):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>320</dim>
<dim>320</dim>
</port>
<port id="1">
<dim>2</dim> <!-- axes input contains [0, 1] -->
</port>
</input>
<output>
<port id="2">
@ -96,13 +111,15 @@ There is no `signal_size` input (2D input tensor):
<dim>2</dim>
</port>
</output>
</layer>
There is a ``signal_size`` input (3D input tensor):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>1</dim>
@ -110,10 +127,10 @@ There is `signal_size` input (3D input tensor):
<dim>320</dim>
</port>
<port id="1">
<dim>2</dim> <!-- axes input contains [1, 2] -->
</port>
<port id="2">
<dim>2</dim> <!-- signal_size input contains [512, 100] -->
</port>
</input>
<output>
<port id="3">
@ -123,23 +140,23 @@ There is `signal_size` input (3D input tensor):
<dim>2</dim>
</port>
</output>
</layer>
There is a ``signal_size`` input (2D input tensor):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>320</dim>
<dim>320</dim>
</port>
<port id="1">
<dim>2</dim> <!-- axes input contains [0, 1] -->
</port>
<port id="2">
<dim>2</dim> <!-- signal_size input contains [512, 100] -->
</port>
</input>
<output>
<port id="3">
@ -148,13 +165,14 @@ There is `signal_size` input (2D input tensor):
<dim>2</dim>
</port>
</output>
</layer>
There is a ``signal_size`` input (4D input tensor, ``-1`` in ``signal_size``, unsorted axes):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>16</dim>
@ -163,10 +181,10 @@ There is `signal_size` input (4D input tensor, `-1` in `signal_size`, unsorted a
<dim>320</dim>
</port>
<port id="1">
<dim>3</dim> <!-- axes input contains [3, 1, 2] -->
</port>
<port id="2">
<dim>3</dim> <!-- signal_size input contains [170, -1, 1024] -->
</port>
</input>
<output>
<port id="3">
@ -177,13 +195,13 @@ There is `signal_size` input (4D input tensor, `-1` in `signal_size`, unsorted a
<dim>2</dim>
</port>
</output>
</layer>
There is a ``signal_size`` input (4D input tensor, ``-1`` in ``signal_size``, unsorted axes, second example):
.. code-block:: cpp
<layer ... type="RDFT" ... >
<input>
<port id="0">
<dim>16</dim>
@ -192,10 +210,10 @@ There is `signal_size` input (4D input tensor, `-1` in `signal_size`, unsorted a
<dim>320</dim>
</port>
<port id="1">
<dim>3</dim> <!-- axes input contains [3, 0, 2] -->
</port>
<port id="2">
<dim>3</dim> <!-- signal_size input contains [258, -1, 2056] -->
</port>
</input>
<output>
<port id="3">
@ -206,5 +224,6 @@ There is `signal_size` input (4D input tensor, `-1` in `signal_size`, unsorted a
<dim>2</dim>
</port>
</output>
</layer>
@endsphinxdirective