DOCS shift to rst - Opset S (#17158)

* ops to rst

* fix errors

* formula fix

* change code

* console directive

* vsplit try highlight

* fix code snippets

* comment fixes

* fix list
Tatiana Savina 2023-04-24 11:02:30 +02:00 committed by GitHub
parent b3ea6ceefa
commit aa5b6ecac2
37 changed files with 2460 additions and 2175 deletions

View File

@ -1,5 +1,7 @@
# Selu {#openvino_docs_ops_activation_Selu_1}
@sphinxdirective
**Versioned name**: *Selu-1*
**Category**: *Activation function*
@ -8,38 +10,35 @@
**Detailed Description**
*Selu* operation is introduced in this [article](https://arxiv.org/abs/1706.02515), as activation function for self-normalizing neural networks (SNNs).
*Selu* operation is introduced in this `article <https://arxiv.org/abs/1706.02515>`__ as an activation function for self-normalizing neural networks (SNNs).
*Selu* performs element-wise activation function on a given input tensor `data`, based on the following mathematical formula:
*Selu* performs an element-wise activation function on a given input tensor ``data``, based on the following mathematical formula:
\f[
Selu(x) = \lambda \left\{\begin{array}{r}
x \quad \mbox{if } x > 0 \\
\alpha(e^{x} - 1) \quad \mbox{if } x \le 0
\end{array}\right.
\f]
.. math::
where α and λ correspond to inputs `alpha` and `lambda` respectively.
Selu(x) = \lambda \left\{\begin{array}{r} x \quad \mbox{if } x > 0 \\ \alpha(e^{x} - 1) \quad \mbox{if } x \le 0 \end{array}\right.
where α and λ correspond to inputs ``alpha`` and ``lambda`` respectively.
Another mathematical representation that may be found in other references:
\f[
Selu(x) = \lambda\cdot\big(\max(0, x) + \min(0, \alpha(e^{x}-1))\big)
\f]
.. math::
Selu(x) = \lambda\cdot\big(\max(0, x) + \min(0, \alpha(e^{x}-1))\big)
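A minimal NumPy sketch of the formulas above, for reference only; the function name and the example ``alpha``/``lambda`` values (the constants from the SNN paper) are illustrative, since the operation takes them as inputs:

.. code-block:: python

   import numpy as np

   def selu(x, alpha=1.6733, lam=1.0507):
       # lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise
       return lam * np.where(x > 0.0, x, alpha * (np.exp(x) - 1.0))

   selu(np.array([-1.0, 0.0, 2.0]))  # -> [-1.1113, 0., 2.1014]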
**Attributes**: *Selu* operation has no attributes.
**Inputs**
* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data``. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `alpha`. 1D tensor with one element of type *T*. **Required.**
* **2**: ``alpha``. 1D tensor with one element of type *T*. **Required.**
* **3**: `lambda`. 1D tensor with one element of type *T*. **Required.**
* **3**: ``lambda``. 1D tensor with one element of type *T*. **Required.**
**Outputs**
* **1**: The result of element-wise *Selu* function applied to `data` input tensor. A tensor of type *T* and the same shape as `data` input tensor.
* **1**: The result of element-wise *Selu* function applied to ``data`` input tensor. A tensor of type *T* and the same shape as ``data`` input tensor.
**Types**
@ -47,25 +46,27 @@ Selu(x) = \lambda\cdot\big(\max(0, x) + \min(0, \alpha(e^{x}-1))\big)
**Example**
```xml
<layer ... type="Selu">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>1</dim>
</port>
<port id="2">
<dim>1</dim>
</port>
</input>
<output>
<port id="3">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Selu">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>1</dim>
</port>
<port id="2">
<dim>1</dim>
</port>
</input>
<output>
<port id="3">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,22 +1,25 @@
# Sigmoid {#openvino_docs_ops_activation_Sigmoid_1}
@sphinxdirective
**Versioned name**: *Sigmoid-1*
**Category**: *Activation function*
**Short description**: Sigmoid element-wise activation function.
**Detailed description**: [Reference](https://deepai.org/machine-learning-glossary-and-terms/sigmoid-function)
**Detailed description**: `Reference <https://deepai.org/machine-learning-glossary-and-terms/sigmoid-function>`__
**Attributes**: *Sigmoid* operation has no attributes.
**Mathematical Formulation**
For each element from the input tensor calculates corresponding
element in the output tensor with the following formula:
\f[
sigmoid( x ) = \frac{1}{1+e^{-x}}
\f]
For each element from the input tensor, the corresponding element in the output tensor is calculated with the following formula:
.. math::
sigmoid( x ) = \frac{1}{1+e^{-x}}
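A minimal NumPy sketch of this formula (illustrative only):

.. code-block:: python

   import numpy as np

   def sigmoid(x):
       return 1.0 / (1.0 + np.exp(-x))

   sigmoid(np.array([-1.0, 0.0, 1.0]))  # -> [0.2689, 0.5, 0.7311]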
**Inputs**:
@ -28,20 +31,22 @@ sigmoid( x ) = \frac{1}{1+e^{-x}}
**Example**
```xml
<layer ... type="Sigmoid">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
.. code-block:: cpp
```
<layer ... type="Sigmoid">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,12 +1,14 @@
# SoftMax {#openvino_docs_ops_activation_SoftMax_1}
@sphinxdirective
**Versioned name**: *SoftMax-1*
**Category**: *Activation function*
**Short description**: [Reference](https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#softmax)
**Short description**: `Reference <https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#softmax>`__
**Detailed description**: [Reference](http://cs231n.github.io/linear-classify/#softmax)
**Detailed description**: `Reference <http://cs231n.github.io/linear-classify/#softmax>`__
**Attributes**
@ -20,10 +22,11 @@
**Mathematical Formulation**
\f[
y_{c} = \frac{e^{Z_{c}}}{\sum_{d=1}^{C}e^{Z_{d}}}
\f]
where \f$C\f$ is a size of tensor along *axis* dimension.
.. math::
y_{c} = \frac{e^{Z_{c}}}{\sum_{d=1}^{C}e^{Z_{d}}}
where :math:`C` is the size of the tensor along the *axis* dimension.
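A minimal NumPy sketch of this formula; subtracting the per-axis maximum is a standard numerical-stability trick that does not change the result:

.. code-block:: python

   import numpy as np

   def softmax(z, axis=1):
       e = np.exp(z - z.max(axis=axis, keepdims=True))  # stabilized exponent
       return e / e.sum(axis=axis, keepdims=True)       # sums to 1 along `axis`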
**Inputs**:
@ -35,10 +38,12 @@ where \f$C\f$ is a size of tensor along *axis* dimension.
**Example**
```xml
<layer ... type="SoftMax" ... >
<data axis="1" />
<input> ... </input>
<output> ... </output>
</layer>
```
.. code-block:: cpp
<layer ... type="SoftMax" ... >
<data axis="1" />
<input> ... </input>
<output> ... </output>
</layer>
@endsphinxdirective

View File

@ -1,30 +1,32 @@
# SoftMax {#openvino_docs_ops_activation_SoftMax_8}
@sphinxdirective
**Versioned name**: *SoftMax-8*
**Category**: *Activation function*
**Short description**: [Reference](https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#softmax)
**Short description**: `Reference <https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#softmax>`__
**Detailed description**: [Reference](http://cs231n.github.io/linear-classify/#softmax)
**Detailed description**: `Reference <http://cs231n.github.io/linear-classify/#softmax>`__
**Attributes**
* *axis*
* **Description**: *axis* represents the axis of which the *SoftMax* is calculated. Negative value means counting
dimensions from the back. *axis* equal 1 is a default value.
* **Range of values**: `[-rank, rank - 1]`
* **Description**: *axis* represents the axis along which the *SoftMax* is calculated. A negative value means counting dimensions from the back. The default value of *axis* is 1.
* **Range of values**: ``[-rank, rank - 1]``
* **Type**: int
* **Default value**: 1
* **Required**: *no*
**Mathematical Formulation**
\f[
y_{c} = \frac{e^{Z_{c}}}{\sum_{d=1}^{C}e^{Z_{d}}}
\f]
where \f$C\f$ is a size of tensor along *axis* dimension.
.. math::
y_{c} = \frac{e^{Z_{c}}}{\sum_{d=1}^{C}e^{Z_{d}}}
where :math:`C` is the size of the tensor along the *axis* dimension.
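Assuming the ``softmax`` sketch from the *SoftMax-1* page above, a negative *axis* simply counts dimensions from the back:

.. code-block:: python

   import numpy as np

   z = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
   # for a rank-3 tensor, axis=-1 is equivalent to axis=2
   np.allclose(softmax(z, axis=-1), softmax(z, axis=2))  # -> True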
**Inputs**:
@ -36,10 +38,12 @@ where \f$C\f$ is a size of tensor along *axis* dimension.
**Example**
```xml
<layer ... type="SoftMax" ... >
<data axis="1" />
<input> ... </input>
<output> ... </output>
</layer>
```
.. code-block:: cpp
<layer ... type="SoftMax" ... >
<data axis="1" />
<input> ... </input>
<output> ... </output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# SoftPlus {#openvino_docs_ops_activation_SoftPlus_4}
@sphinxdirective
**Versioned name**: *SoftPlus-4*
**Category**: *Activation function*
@ -10,23 +12,25 @@
*SoftPlus* performs an element-wise activation function on a given input tensor, based on the following mathematical formula:
\f[
SoftPlus(x) = \left\{\begin{array}{r}
x \qquad \mbox{if } x \geq threshold \\
log(e^{x} + 1.0) \qquad \mbox{if } x < threshold
\end{array}\right.
\f]
.. math::
\begin{equation*}
\mathrm{SoftPlus}(x) = \begin{cases}
x & \text{if } x \geq \mathrm{threshold} \\
\log(e^{x} + 1.0) & \text{if } x < \mathrm{threshold}
\end{cases}
\end{equation*}
**Note**: For numerical stability the operation reverts to the linear function when `x > threshold` where `threshold` depends on *T* and
is chosen in such a way that the difference between the linear function and exact calculation is no more than `1e-6`.
The `threshold` can be calculated with the following formula where `alpha` is the number of digits after the decimal point,
`beta` is maximum value of *T* data type:
**Note**: For numerical stability, the operation reverts to the linear function when ``x > threshold``, where ``threshold`` depends on *T* and
is chosen in such a way that the difference between the linear function and the exact calculation is no more than ``1e-6``.
The ``threshold`` can be calculated with the following formula, where ``alpha`` is the number of digits after the decimal point and
``beta`` is the maximum value of the *T* data type:
\f[
-log(e^{10^{-\alpha}} - 1.0) < threshold < log(\beta)
\f]
.. math::
For example, if *T* is `fp32`, `threshold` should be `20` or if *T* is `fp16`, `threshold` should be `11`.
-log(e^{10^{-\alpha}} - 1.0) < threshold < log(\beta)
For example, if *T* is ``fp32``, ``threshold`` should be ``20``; if *T* is ``fp16``, ``threshold`` should be ``11``.
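A minimal NumPy sketch of the numerically stable evaluation described above, using the ``fp32`` threshold; the clamp inside ``exp`` only prevents overflow in the branch that is discarded anyway:

.. code-block:: python

   import numpy as np

   def softplus(x, threshold=20.0):
       safe = np.minimum(x, threshold)  # avoid overflow in exp for large x
       return np.where(x < threshold, np.log1p(np.exp(safe)), x)

   softplus(np.array([-1.0, 0.0, 100.0]))  # -> [0.3133, 0.6931, 100.]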
**Attributes**: *SoftPlus* operation has no attributes.
@ -45,19 +49,22 @@ For example, if *T* is `fp32`, `threshold` should be `20` or if *T* is `fp16`, `
**Example**
```xml
<layer ... type="SoftPlus">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="SoftPlus">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# SoftSign {#openvino_docs_ops_activation_SoftSign_9}
@sphinxdirective
**Versioned name**: *SoftSign-9*
**Category**: *Activation function*
@ -8,13 +10,14 @@
**Detailed description**:
*SoftSign* operation is introduced in this [article](https://arxiv.org/abs/2010.09458).
*SoftSign* operation is introduced in this `article <https://arxiv.org/abs/2010.09458>`__.
*SoftSign Activation Function* is a neuron activation function based on the mathematical function:
\f[
SoftSign(x) = \frac{x}{1+|x|}
\f]
.. math::
SoftSign(x) = \frac{x}{1+|x|}
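A minimal NumPy sketch of this formula (illustrative only):

.. code-block:: python

   import numpy as np

   def softsign(x):
       return x / (1.0 + np.abs(x))

   softsign(np.array([-4.0, 0.0, 1.0]))  # -> [-0.8, 0., 0.5]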
**Inputs**:
@ -30,19 +33,21 @@ SoftSign(x) = \frac{x}{1+|x|}
**Example**
```xml
<layer ... type="SoftSign">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="SoftSign">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Swish {#openvino_docs_ops_activation_Swish_4}
@sphinxdirective
**Versioned name**: *Swish-4*
**Category**: *Activation function*
@ -8,27 +10,27 @@
**Detailed description**
*Swish* operation is introduced in this [article](https://arxiv.org/abs/1710.05941).
*Swish* operation is introduced in this `article <https://arxiv.org/abs/1710.05941>`__.
*Swish* is a smooth, non-monotonic function. The non-monotonicity property of *Swish* distinguishes it from most common activation functions. It performs an element-wise activation function on a given input tensor, based on the following mathematical formula:
\f[
Swish(x) = x\cdot \sigma(\beta x) = x \left(1 + e^{-(\beta x)}\right)^{-1}
\f]
.. math::
where β corresponds to `beta` scalar input.
Swish(x) = x\cdot \sigma(\beta x) = x \left(1 + e^{-(\beta x)}\right)^{-1}
where β corresponds to ``beta`` scalar input.
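A minimal NumPy sketch of this formula; ``beta=1.0`` mirrors the default used when the second input is absent:

.. code-block:: python

   import numpy as np

   def swish(x, beta=1.0):
       return x / (1.0 + np.exp(-beta * x))  # x * sigmoid(beta * x)

   swish(np.array([-1.0, 0.0, 1.0]))  # -> [-0.2689, 0., 0.7311]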
**Attributes**: *Swish* operation has no attributes.
**Inputs**:
* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data``. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `beta`. A non-negative scalar value of type *T*. Multiplication parameter for the sigmoid. Default value 1.0 is used. **Optional.**
* **2**: ``beta``. A non-negative scalar value of type *T*. Multiplication parameter for the sigmoid. If not provided, the default value 1.0 is used. **Optional.**
**Outputs**:
* **1**: The result of element-wise *Swish* function applied to the input tensor `data`. A tensor of type *T* and the same shape as `data` input tensor.
* **1**: The result of element-wise *Swish* function applied to the input tensor ``data``. A tensor of type *T* and the same shape as ``data`` input tensor.
**Types**
@ -36,38 +38,43 @@ where β corresponds to `beta` scalar input.
**Examples**
*Example: Second input `beta` provided*
```xml
<layer ... type="Swish">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1"> <!-- beta value: 2.0 -->
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
Example: Second input ``beta`` provided
*Example: Second input `beta` not provided*
```xml
<layer ... type="Swish">
<input>
<port id="0">
<dim>128</dim>
</port>
</input>
<output>
<port id="1">
<dim>128</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Swish">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1"> < !-- beta value: 2.0 -->
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
Example: Second input ``beta`` not provided
.. code-block:: cpp
<layer ... type="Swish">
<input>
<port id="0">
<dim>128</dim>
</port>
</input>
<output>
<port id="1">
<dim>128</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Sign {#openvino_docs_ops_arithmetic_Sign_1}
@sphinxdirective
**Versioned name**: *Sign-1*
**Category**: *Arithmetic unary*
@ -8,9 +10,9 @@
**Detailed description**: *Sign* performs element-wise sign operation on a given input tensor, based on the following mathematical formula:
\f[
a_{i} = sign(a_{i})
\f]
.. math::
a_{i} = sign(a_{i})
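NumPy's built-in ``sign`` matches this element-wise behavior, with zero mapping to zero:

.. code-block:: python

   import numpy as np

   np.sign(np.array([-4.2, 0.0, 7.0]))  # -> [-1., 0., 1.]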
**Attributes**: *Sign* operation has no attributes.
@ -29,19 +31,22 @@ a_{i} = sign(a_{i})
**Example**
```xml
<layer ... type="Sign">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sign">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Sin {#openvino_docs_ops_arithmetic_Sin_1}
@sphinxdirective
**Versioned name**: *Sin-1*
**Category**: *Arithmetic unary*
@ -7,15 +9,17 @@
**Short description**: *Sin* performs an element-wise sine operation on a given tensor.
**Detailed description**: *sin* does the following with the input tensor *a*:
\f[
a_{i} = sin(a_{i})
\f]
.. math::
a_{i} = sin(a_{i})
*a* - a value representing the angle in radians.
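A quick NumPy check, with angles given in radians as noted above:

.. code-block:: python

   import numpy as np

   a = np.array([0.0, np.pi / 6, np.pi / 2])
   np.sin(a)  # -> [0., 0.5, 1.]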
**Attributes**:
No attributes available.
No attributes available.
**Inputs**
@ -34,19 +38,21 @@ a - value representing angle in radians.
*Example 1*
```xml
<layer ... type="Sin">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sin">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Sinh {#openvino_docs_ops_arithmetic_Sinh_1}
@sphinxdirective
**Versioned name**: *Sinh-1*
**Category**: *Arithmetic unary*
@ -8,9 +10,9 @@
**Detailed description**: *Sinh* performs element-wise hyperbolic sine (sinh) operation on a given input tensor, based on the following mathematical formula:
\f[
a_{i} = sinh(a_{i})
\f]
.. math::
a_{i} = sinh(a_{i})
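A quick NumPy check of the hyperbolic sine against its exponential definition:

.. code-block:: python

   import numpy as np

   x = np.array([-1.0, 0.0, 1.0])
   np.allclose(np.sinh(x), (np.exp(x) - np.exp(-x)) / 2.0)  # -> True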
**Attributes**: *Sinh* operation has no attributes.
@ -28,19 +30,22 @@ a_{i} = sinh(a_{i})
**Example**
```xml
<layer ... type="Sinh">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sinh">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,16 +1,18 @@
# Sqrt {#openvino_docs_ops_arithmetic_Sqrt_1}
@sphinxdirective
**Versioned name**: *Sqrt-1*
**Category**: *Arithmetic unary*
**Short description**: Square root element-wise operation.
**Detailed description**: *Sqrt* performs element-wise square root operation on a given input tensor `a`, as in the following mathematical formula, where `o` is the output tensor:
**Detailed description**: *Sqrt* performs element-wise square root operation on a given input tensor ``a``, as in the following mathematical formula, where ``o`` is the output tensor:
\f[
o_{i} = \sqrt{a_{i}}
\f]
.. math::
o_{i} = \sqrt{a_{i}}
* If the input value is negative, then the result is undefined.
* For integer element type the result is rounded (half up) to the nearest integer value.
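A minimal NumPy sketch of the half-up integer rounding, reproducing *Example 2* below (the helper name is illustrative):

.. code-block:: python

   import numpy as np

   def sqrt_int(a):
       # half up: sqrt(7) ~ 2.6458 -> 3, sqrt(10) ~ 3.1623 -> 3
       return np.floor(np.sqrt(a) + 0.5).astype(a.dtype)

   sqrt_int(np.array([4, 7, 9, 10]))  # -> [2, 3, 3, 3]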
@ -34,53 +36,57 @@ o_{i} = \sqrt{a_{i}}
*Example 1*
```xml
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>4</dim> <!-- float input values: [4.0, 7.0, 9.0, 10.0] -->
</port>
</input>
<output>
<port id="1">
<dim>4</dim> <!-- float output values: [2.0, 2.6457512, 3.0, 3.1622777] -->
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>4</dim> <!-- float input values: [4.0, 7.0, 9.0, 10.0] -->
</port>
</input>
<output>
<port id="1">
<dim>4</dim> <!-- float output values: [2.0, 2.6457512, 3.0, 3.1622777] -->
</port>
</output>
</layer>
*Example 2*
```xml
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>4</dim> <!-- int input values: [4, 7, 9, 10] -->
</port>
</input>
<output>
<port id="1">
<dim>4</dim> <!-- int output values: [2, 3, 3, 3] -->
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>4</dim> <!-- int input values: [4, 7, 9, 10] -->
</port>
</input>
<output>
<port id="1">
<dim>4</dim> <!-- int output values: [2, 3, 3, 3] -->
</port>
</output>
</layer>
*Example 3*
```xml
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Sqrt">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# SquaredDifference {#openvino_docs_ops_arithmetic_SquaredDifference_1}
@sphinxdirective
**Versioned name**: *SquaredDifference-1*
**Category**: *Arithmetic binary*
@ -9,9 +11,10 @@
**Detailed description**
As a first step, input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to the `auto_broadcast` attribute specification. As a second step, the result is computed element-wise by applying *Subtract* and then *Square* to the input tensors *a* and *b*, according to the formula below:
\f[
o_{i} = (a_{i} - b_{i})^2
\f]
.. math::
o_{i} = (a_{i} - b_{i})^2
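A minimal NumPy sketch; with ``auto_broadcast="numpy"`` the shapes from *Example 2* below broadcast exactly as in NumPy:

.. code-block:: python

   import numpy as np

   a = np.ones((8, 1, 6, 1))
   b = np.zeros((7, 1, 5))
   out = (a - b) ** 2  # subtract, then square, element-wise
   out.shape           # -> (8, 7, 6, 5)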
**Attributes**:
@ -19,8 +22,9 @@ o_{i} = (a_{i} - b_{i})^2
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@ -42,51 +46,55 @@ o_{i} = (a_{i} - b_{i})^2
*Example 1 - no broadcasting*
```xml
<layer ... type="SquaredDifference">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="SquaredDifference">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
*Example 2: numpy broadcasting*
```xml
<layer ... type="SquaredDifference">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="SquaredDifference">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Subtract {#openvino_docs_ops_arithmetic_Subtract_1}
@sphinxdirective
**Versioned name**: *Subtract-1*
**Category**: *Arithmetic binary*
@ -7,12 +9,13 @@
**Short description**: *Subtract* performs an element-wise subtraction operation on two given tensors, applying the broadcasting rule specified in the *auto_broadcast* attribute.
**Detailed description**
Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and `auto_broadcast` attribute is not `none`. Broadcasting is performed according to `auto_broadcast` value.
Before performing the arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting, *Subtract* performs the subtraction operation on the input tensors *a* and *b* using the formula below:
\f[
o_{i} = a_{i} - b_{i}
\f]
.. math::
o_{i} = a_{i} - b_{i}
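A minimal NumPy sketch of the broadcasting behavior; NumPy natively implements the *numpy* rules, while *none* simply requires equal shapes:

.. code-block:: python

   import numpy as np

   a = np.arange(6).reshape(2, 3)
   b = np.array([10, 20, 30])
   a - b  # auto_broadcast="numpy": b is broadcast to shape (2, 3)
   # with auto_broadcast="none", both shapes would have to be (2, 3)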
**Attributes**:
@ -20,9 +23,10 @@ o_{i} = a_{i} - b_{i}
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match,
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md),
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md).
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`,
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`.
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@ -44,51 +48,56 @@ o_{i} = a_{i} - b_{i}
*Example 1*
```xml
<layer ... type="Subtract">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Subtract">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
*Example 2: broadcast*
```xml
<layer ... type="Subtract">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Subtract">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Tan {#openvino_docs_ops_arithmetic_Tan_1}
@sphinxdirective
**Versioned name**: *Tan-1*
**Category**: *Arithmetic unary*
@ -8,19 +10,23 @@
**Detailed description**: The operation performs the element-wise tangent function on a given input tensor, based on the following mathematical formula:
\f[
a_{i} = tan(a_{i})
\f]
.. math::
a_{i} = tan(a_{i})
*Example 1*
input = [0.0, 0.25, -0.25, 0.5, -0.5]
output = [0.0, 0.25534192, -0.25534192, 0.54630249, -0.54630249]
.. code-block:: cpp
input = [0.0, 0.25, -0.25, 0.5, -0.5]
output = [0.0, 0.25534192, -0.25534192, 0.54630249, -0.54630249]
*Example 2*
input = [-2, -1, 0, 1, 2]
output = [2, -2, 0, 2, -2]
.. code-block:: cpp
input = [-2, -1, 0, 1, 2]
output = [2, -2, 0, 2, -2]
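Assuming the same half-up rounding used for integer types elsewhere in this opset (see *Sqrt*), *Example 2* can be reproduced as:

.. code-block:: python

   import numpy as np

   x = np.array([-2, -1, 0, 1, 2])
   np.tan(x.astype(np.float64))           # [2.185, -1.5574, 0., 1.5574, -2.185]
   np.floor(np.tan(x) + 0.5).astype(int)  # half up -> [2, -2, 0, 2, -2]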
**Attributes**: *tan* operation has no attributes.
@ -39,19 +45,21 @@ a_{i} = tan(a_{i})
**Examples**
```xml
<layer ... type="Tan">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Tan">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Tanh {#openvino_docs_ops_arithmetic_Tanh_1}
@sphinxdirective
**Versioned name**: *Tanh-1*
**Category**: *Arithmetic unary*
@ -9,9 +11,11 @@
**Detailed description**
For each element from the input tensor, the corresponding element in the output tensor is calculated with the following formula:
\f[
tanh ( x ) = \frac{2}{1+e^{-2x}} - 1 = 2sigmoid(2x) - 1
\f]
.. math::
tanh ( x ) = \frac{2}{1+e^{-2x}} - 1 = 2sigmoid(2x) - 1
* For integer element type the result is rounded (half up) to the nearest integer value.
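A quick NumPy check of the identity with the sigmoid form given above:

.. code-block:: python

   import numpy as np

   x = np.array([-1.0, 0.0, 1.0])
   np.allclose(np.tanh(x), 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0)  # -> True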
@ -33,19 +37,22 @@ tanh ( x ) = \frac{2}{1+e^{-2x}} - 1 = 2sigmoid(2x) - 1
*Example 1*
```xml
<layer ... type="Tanh">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Tanh">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# Select {#openvino_docs_ops_condition_Select_1}
@sphinxdirective
**Versioned name**: *Select-1*
**Category**: *Condition*
@ -8,8 +10,7 @@
**Detailed description**
*Select* takes elements from `then` input tensor or the `else` input tensor based on a condition mask
provided in the first input `cond`. Before performing selection, input tensors `then` and `else` are broadcasted to each other if their shapes are different and `auto_broadcast` attributes is not `none`. Then the `cond` tensor is one-way broadcasted to the resulting shape of broadcasted `then` and `else`. Broadcasting is performed according to `auto_broadcast` value.
*Select* takes elements from the ``then`` input tensor or the ``else`` input tensor, based on a condition mask provided in the first input ``cond``. Before performing selection, input tensors ``then`` and ``else`` are broadcasted to each other if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Then the ``cond`` tensor is one-way broadcasted to the resulting shape of the broadcasted ``then`` and ``else``. Broadcasting is performed according to the ``auto_broadcast`` value.
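For inputs of equal shapes, the behavior matches ``numpy.where``; a sketch using the values from the example at the bottom of this page:

.. code-block:: python

   import numpy as np

   cond = np.array([[False, False], [True, False], [True, True]])
   then = np.array([[-1, 0], [1, 2], [3, 4]])
   else_ = np.array([[11, 10], [9, 8], [7, 6]])
   np.where(cond, then, else_)  # -> [[11, 10], [1, 8], [3, 4]]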
**Attributes**
@ -17,55 +18,60 @@
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
* **Type**: `string`
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`
* **Type**: ``string``
* **Default value**: "numpy"
* **Required**: *no*
**Inputs**:
* **1**: `cond` - tensor of type *T_COND* and arbitrary shape with selection mask. **Required**.
* **1**: ``cond`` - tensor of type *T_COND* and arbitrary shape with selection mask. **Required**.
* **2**: `then` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in `cond` is `true`. **Required**.
* **2**: ``then`` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in ``cond`` is ``true``. **Required**.
* **3**: `else` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in `cond` is `false`. **Required**.
* **3**: ``else`` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in ``cond`` is ``false``. **Required**.
**Outputs**:
* **1**: blended output tensor that is tailored from values of inputs tensors `then` and `else` based on `cond` and broadcasting rules. It has the same type of elements as `then` and `else`.
* **1**: blended output tensor that is tailored from the values of input tensors ``then`` and ``else``, based on ``cond`` and the broadcasting rules. It has the same type of elements as ``then`` and ``else``.
**Types**
* *T_COND*: `boolean` type.
* *T_COND*: ``boolean`` type.
* *T*: any supported numeric type.
**Example**
```xml
<layer ... type="Select">
<input>
<port id="0"> <!-- cond value is: [[false, false], [true, false], [true, true]] -->
<dim>3</dim>
<dim>2</dim>
</port>
<port id="1"> <!-- then value is: [[-1, 0], [1, 2], [3, 4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
<port id="2"> <!-- else value is: [[11, 10], [9, 8], [7, 6]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</input>
<output>
<port id="1"> <!-- output value is: [[11, 10], [1, 8], [3, 4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Select">
<input>
<port id="0"> < !-- cond value is: [[false, false], [true, false], [true, true]] -->
<dim>3</dim>
<dim>2</dim>
</port>
<port id="1"> < !-- then value is: [[-1, 0], [1, 2], [3, 4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
<port id="2"> < !-- else value is: [[11, 10], [9, 8], [7, 6]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</input>
<output>
<port id="1"> < !-- output value is: [[11, 10], [1, 8], [3, 4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@ -1,96 +1,99 @@
# TensorIterator {#openvino_docs_ops_infrastructure_TensorIterator_1}
@sphinxdirective
**Versioned name**: *TensorIterator-1*
**Category**: *Infrastructure*
**Short description**: *TensorIterator* layer performs recurrent execution of the network, which is described in the `body`, iterating through the data.
**Short description**: *TensorIterator* layer performs recurrent execution of the network, which is described in the ``body``, iterating through the data.
**TensorIterator attributes**:
* **Body**:
`body` is a network that will be recurrently executed. The network is described layer by layer as a typical IR network.
``body`` is a network that will be recurrently executed. The network is described layer by layer as a typical IR network.
* **Body attributes**:
* **Body attributes**:
No attributes available.
No attributes available.
* **Port map**:
*port_map* is a set of rules to map input or output data tensors of `TensorIterator` layer onto `body` data tensors. The `port_map` entries can be` input` and `output`. Each entry describes a corresponding mapping rule.
*port_map* is a set of rules to map input or output data tensors of ``TensorIterator`` layer onto ``body`` data tensors. The ``port_map`` entries can be ``input`` and ``output``. Each entry describes a corresponding mapping rule.
* **Port map attributes**:
* **Port map attributes**:
* *external_port_id*
* **Description**: *external_port_id* is a port ID of the `TensorIterator` layer.
* **Range of values**: indexes of the *TensorIterator* outputs
* **Type**: `int`
* **Default value**: None
* **Required**: *yes*
* *external_port_id*
* **Description**: *external_port_id* is a port ID of the ``TensorIterator`` layer.
* **Range of values**: indexes of the *TensorIterator* outputs
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *internal_layer_id*
* *internal_layer_id*
* **Description**: *internal_layer_id* is a *Parameter* or *Result* layer ID inside the `body` network to map to.
* **Range of values**: IDs of the *Parameter* layers inside in the *TensorIterator* layer
* **Type**: `int`
* **Default value**: None
* **Required**: *yes*
* **Description**: *internal_layer_id* is a *Parameter* or *Result* layer ID inside the ``body`` network to map to.
* **Range of values**: IDs of the *Parameter* layers inside in the *TensorIterator* layer
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *axis*
* *axis*
* **Description**: *axis* is an axis to iterate through. It triggers the slicing of this tensor. Only if it is specified, the corresponding `input` or `output` is divided into pieces and start, end and stride attributes define how slicing is performed.
* **Range of values**: an integer
* **Type**: `int`
* **Default value**: None
* **Required**: *no*
* **Description**: *axis* is an axis to iterate through. It triggers the slicing of this tensor. Only if it is specified, the corresponding ``input`` or ``output`` is divided into pieces, and the *start*, *end* and *stride* attributes define how slicing is performed.
* **Range of values**: an integer
* **Type**: ``int``
* **Default value**: None
* **Required**: *no*
* *start*
* *start*
* **Description**: *start* is an index where the iteration starts from. Negative value means counting indexes from the end. Applies only when the attribute `axis` is specified.
* **Range of values**: an integer
* **Type**: `int`
* **Default value**: 0
* **Required**: *no*
* **Description**: *start* is an index where the iteration starts from. Negative value means counting indexes from the end. Applies only when the attribute ``axis`` is specified.
* **Range of values**: an integer
* **Type**: ``int``
* **Default value**: 0
* **Required**: *no*
* *end*
* *end*
* **Description**: *end* is an index where iteration ends. Negative value means counting indexes from the end. Applies only when the attribute `axis` is specified.
* **Range of values**: an integer
* **Type**: `int`
* **Default value**: -1
* **Required**: *no*
* **Description**: *end* is an index where iteration ends. Negative value means counting indexes from the end. Applies only when the attribute ``axis`` is specified.
* **Range of values**: an integer
* **Type**: ``int``
* **Default value**: -1
* **Required**: *no*
* *stride*
* *stride*
* **Description**: *stride* is a step of iteration. Negative value means backward iteration. Applies only when the attribute `axis` is specified.
* **Range of values**: an integer
* **Type**: `int`
* **Default value**: 1
* **Required**: *no*
* **Description**: *stride* is a step of iteration. Negative value means backward iteration. Applies only when the attribute ``axis`` is specified.
* **Range of values**: an integer
* **Type**: ``int``
* **Default value**: 1
* **Required**: *no*
* **Back edges**:
*back_edges* is a set of rules to transfer tensor values from `body` outputs at one iteration to `body` parameters at the next iteration. Back edge connects some *Result* layer in `body` to *Parameter* layer in the same `body`.
*back_edges* is a set of rules to transfer tensor values from ``body`` outputs at one iteration to ``body`` parameters at the next iteration. Back edge connects some *Result* layer in ``body`` to *Parameter* layer in the same ``body``.
* **Back edge attributes**:
* **Back edge attributes**:
* *from-layer*
* *from-layer*
* **Description**: *from-layer* is a *Result* layer ID inside the `body` network.
* **Range of values**: IDs of the *Result* layers inside the *TensorIterator*
* **Type**: `int`
* **Default value**: None
* **Required**: *yes*
* **Description**: *from-layer* is a *Result* layer ID inside the ``body`` network.
* **Range of values**: IDs of the *Result* layers inside the *TensorIterator*
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *to-layer*
* *to-layer*
* **Description**: *to-layer* is a *Parameter* layer ID inside the `body` network to end mapping.
* **Range of values**: IDs of the *Parameter* layers inside the *TensorIterator*
* **Type**: `int`
* **Default value**: None
* **Required**: *yes*
* **Description**: *to-layer* is a *Parameter* layer ID inside the ``body`` network to end mapping.
* **Range of values**: IDs of the *Parameter* layers inside the *TensorIterator*
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
**Inputs**
@ -98,277 +101,284 @@
**Outputs**
* **Multiple outputs**: Results of execution of the `body`. Tensors of any type and shape.
* **Multiple outputs**: Results of execution of the ``body``. Tensors of any type and shape.
**Detailed description**
Similar to other layers, TensorIterator has regular sections: `input` and `output`. It allows connecting TensorIterator to the rest of the IR.
TensorIterator also has several special sections: `body`, `port_map`, `back_edges`. The principles of their work are described below.
Similar to other layers, TensorIterator has regular sections: ``input`` and ``output``. It allows connecting TensorIterator to the rest of the IR.
TensorIterator also has several special sections: ``body``, ``port_map``, ``back_edges``. The principles of their work are described below.
How `body` is iterated:
How ``body`` is iterated:
*At the first iteration:*
TensorIterator slices input tensors by a specified axis and iterates over all parts in a specified order. It process input tensors with arbitrary network specified as an IR network in the `body` section. IR is executed as no back-edges are present. Edges from `port map` are used to connect input ports of TensorIterator to `Parameters` in body.
*At the first iteration:* TensorIterator slices input tensors by a specified axis and iterates over all parts in a specified order. It processes input tensors with an arbitrary network specified as an IR network in the ``body`` section. The IR is executed as if no back-edges were present. Edges from ``port map`` are used to connect input ports of TensorIterator to ``Parameters`` in the body.
[`inputs`] - `Port map` edges -> [`Parameters:body:Results`]
[``inputs``] - ``Port map`` edges -> [``Parameters:body:Results``]
`Parameter` and `Result` layers are part of the `body`. `Parameters` are stable entry points in the `body`. The results of the execution of the `body` are presented as stable `Result` layers. Stable means that these nodes cannot be fused.
``Parameter`` and ``Result`` layers are part of the ``body``. ``Parameters`` are stable entry points in the ``body``. The results of the execution of the ``body`` are presented as stable ``Result`` layers. Stable means that these nodes cannot be fused.
*Next iterations:*
Back edges define which data is copied back to `Parameters` layers from `Results` layers between IR iterations in TensorIterator `body`. That means they pass data from source layer back to target layer. Each layer that is a target for back-edge has also an incoming `port map` edge as an input. The values from back-edges are used instead of corresponding edges from `port map`. After each iteration of the network, all back edges are executed.
Back edges define which data is copied back to ``Parameters`` layers from ``Results`` layers between IR iterations in the TensorIterator ``body``. That means they pass data from a source layer back to a target layer. Each layer that is a target for a back-edge also has an incoming ``port map`` edge as an input. The values from back-edges are used instead of the corresponding edges from ``port map``. After each iteration of the network, all back edges are executed.
Iterations can be considered as a statically unrolled sequence: all edges that flow between two neighboring iterations are back-edges. So in the unrolled loop, each back-edge is transformed into a regular edge.
... -> [`Parameters:body:Results`] - back-edges -> [`Parameters:body:Results`] - back-edges -> [`Parameters:body:Results`] - back-edges -> ...
... -> [``Parameters:body:Results``] - back-edges -> [``Parameters:body:Results``] - back-edges -> [``Parameters:body:Results``] - back-edges -> ...
*Calculation of results:*
If `output` entry in the `Port map` doesn't have partitioning (`axis, begin, end, strides`) attributes, then the final value of `output` of TensorIterator is the value of `Result` node from the last iteration. Otherwise the final value of `output` of TensorIterator is a concatenation of tensors in the `Result` node for all `body` iterations. Concatenation order is specified by `stride` attribute.
If the ``output`` entry in the ``Port map`` doesn't have partitioning (``axis, begin, end, strides``) attributes, then the final value of the TensorIterator ``output`` is the value of the ``Result`` node from the last iteration. Otherwise, the final value of the TensorIterator ``output`` is a concatenation of tensors in the ``Result`` node for all ``body`` iterations. The concatenation order is specified by the ``stride`` attribute.
The last iteration:
[`Parameters:body:Results`] - `Port map` edges -> [`outputs`], if partitioning attributes are not set.
[``Parameters:body:Results``] - ``Port map`` edges -> [``outputs``], if partitioning attributes are not set.
if there are partitioning attributes, then an output tensor is a concatenation of tensors from all body iterations. If `stride > 0`:
```
output = Concat(S[0], S[1], ..., S[N-1])
```
where `Si` is value of `Result` operation at i-th iteration in the tensor iterator body that corresponds to this output port. If `stride < 0`, then output is concatenated in a reverse order:
```
output = Concat(S[N-1], S[N-2], ..., S[0])
```
If there are partitioning attributes, then an output tensor is a concatenation of tensors from all body iterations. If ``stride > 0``:
.. code-block:: cpp
output = Concat(S[0], S[1], ..., S[N-1])
where ``S[i]`` is the value of the ``Result`` operation at the i-th iteration in the tensor iterator body that corresponds to this output port. If ``stride < 0``, then the output is concatenated in reverse order:
.. code-block:: cpp
output = Concat(S[N-1], S[N-2], ..., S[0])
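A minimal sketch of the concatenation rule, assuming ``S`` holds the per-iteration ``Result`` tensors and axis 1 is the partitioned dimension:

.. code-block:: python

   import numpy as np

   S = [np.full((1, 1, 4), i) for i in range(3)]  # results of 3 iterations
   np.concatenate(S, axis=1)        # stride > 0: S[0], S[1], S[2]
   np.concatenate(S[::-1], axis=1)  # stride < 0: S[2], S[1], S[0]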
**Examples**
*Example 1: a typical TensorIterator structure*
```xml
<layer type="TensorIterator" ... >
<input> ... </input>
<output> ... </output>
<port_map>
<input external_port_id="0" internal_layer_id="0" axis="1" start="-1" end="0" stride="-1"/>
<input external_port_id="1" internal_layer_id="1"/>
...
<output external_port_id="3" internal_layer_id="2" axis="1" start="-1" end="0" stride="-1"/>
...
</port_map>
<back_edges>
<edge from-layer="1" to-layer="1"/>
...
</back_edges>
<body>
<layers> ... </layers>
<edges> ... </edges>
</body>
</layer>
```
.. code-block:: cpp
<layer type="TensorIterator" ... >
<input> ... </input>
<output> ... </output>
<port_map>
<input external_port_id="0" internal_layer_id="0" axis="1" start="-1" end="0" stride="-1"/>
<input external_port_id="1" internal_layer_id="1"/>
...
<output external_port_id="3" internal_layer_id="2" axis="1" start="-1" end="0" stride="-1"/>
...
</port_map>
<back_edges>
<edge from-layer="1" to-layer="1"/>
...
</back_edges>
<body>
<layers> ... </layers>
<edges> ... </edges>
</body>
</layer>
*Example 2: a full TensorIterator layer*
```xml
<layer type="TensorIterator" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>25</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
<output>
<port id="3" precision="FP32">
<dim>1</dim>
<dim>25</dim>
<dim>256</dim>
</port>
</output>
<port_map>
<input axis="1" external_port_id="0" internal_layer_id="0" start="0"/>
<input external_port_id="1" internal_layer_id="3"/>
<input external_port_id="2" internal_layer_id="4"/>
<output axis="1" external_port_id="3" internal_layer_id="12"/>
</port_map>
<back_edges>
<edge from-layer="8" to-layer="4"/>
<edge from-layer="9" to-layer="3"/>
</back_edges>
<body>
<layers>
<layer id="0" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>1</dim>
<dim>512</dim>
</port>
</output>
</layer>
<layer id="1" type="Const" ...>
<data offset="0" size="16"/>
<output>
<port id="1" precision="I64">
<dim>2</dim>
</port>
</output>
</layer>
<layer id="2" type="Reshape" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>1</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>2</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1</dim>
<dim>512</dim>
</port>
</output>
</layer>
<layer id="3" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="4" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="5" type="Const" ...>
<data offset="16" size="3145728"/>
<output>
<port id="1" precision="FP32">
<dim>1024</dim>
<dim>768</dim>
</port>
</output>
</layer>
<layer id="6" type="Const" ...>
<data offset="3145744" size="4096"/>
<output>
<port id="1" precision="FP32">
<dim>1024</dim>
</port>
</output>
</layer>
<layer id="7" type="LSTMCell" ...>
<data hidden_size="256"/>
<input>
<port id="0">
<dim>1</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="3">
<dim>1024</dim>
<dim>768</dim>
</port>
<port id="4">
<dim>1024</dim>
</port>
</input>
<output>
<port id="5" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="6" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="8" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
<layer id="9" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
<layer id="10" type="Const" ...>
<data offset="3149840" size="24"/>
<output>
<port id="1" precision="I64">
<dim>3</dim>
</port>
</output>
</layer>
<layer id="11" type="Reshape" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="1">
<dim>3</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1</dim>
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="12" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
</layers>
<edges>
<edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
<edge from-layer="1" from-port="1" to-layer="2" to-port="1"/>
<edge from-layer="2" from-port="2" to-layer="7" to-port="0"/>
<edge from-layer="3" from-port="0" to-layer="7" to-port="1"/>
<edge from-layer="4" from-port="0" to-layer="7" to-port="2"/>
<edge from-layer="5" from-port="1" to-layer="7" to-port="3"/>
<edge from-layer="6" from-port="1" to-layer="7" to-port="4"/>
<edge from-layer="7" from-port="6" to-layer="8" to-port="0"/>
<edge from-layer="7" from-port="5" to-layer="9" to-port="0"/>
<edge from-layer="7" from-port="5" to-layer="11" to-port="0"/>
<edge from-layer="10" from-port="1" to-layer="11" to-port="1"/>
<edge from-layer="11" from-port="2" to-layer="12" to-port="0"/>
</edges>
</body>
</layer>
```
.. code-block:: cpp
<layer type="TensorIterator" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>25</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
<output>
<port id="3" precision="FP32">
<dim>1</dim>
<dim>25</dim>
<dim>256</dim>
</port>
</output>
<port_map>
<input axis="1" external_port_id="0" internal_layer_id="0" start="0"/>
<input external_port_id="1" internal_layer_id="3"/>
<input external_port_id="2" internal_layer_id="4"/>
<output axis="1" external_port_id="3" internal_layer_id="12"/>
</port_map>
<back_edges>
<edge from-layer="8" to-layer="4"/>
<edge from-layer="9" to-layer="3"/>
</back_edges>
<body>
<layers>
<layer id="0" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>1</dim>
<dim>512</dim>
</port>
</output>
</layer>
<layer id="1" type="Const" ...>
<data offset="0" size="16"/>
<output>
<port id="1" precision="I64">
<dim>2</dim>
</port>
</output>
</layer>
<layer id="2" type="Reshape" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>1</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>2</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1</dim>
<dim>512</dim>
</port>
</output>
</layer>
<layer id="3" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="4" type="Parameter" ...>
<output>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="5" type="Const" ...>
<data offset="16" size="3145728"/>
<output>
<port id="1" precision="FP32">
<dim>1024</dim>
<dim>768</dim>
</port>
</output>
</layer>
<layer id="6" type="Const" ...>
<data offset="3145744" size="4096"/>
<output>
<port id="1" precision="FP32">
<dim>1024</dim>
</port>
</output>
</layer>
<layer id="7" type="LSTMCell" ...>
<data hidden_size="256"/>
<input>
<port id="0">
<dim>1</dim>
<dim>512</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="3">
<dim>1024</dim>
<dim>768</dim>
</port>
<port id="4">
<dim>1024</dim>
</port>
</input>
<output>
<port id="5" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="6" precision="FP32">
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="8" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
<layer id="9" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
<layer id="10" type="Const" ...>
<data offset="3149840" size="24"/>
<output>
<port id="1" precision="I64">
<dim>3</dim>
</port>
</output>
</layer>
<layer id="11" type="Reshape" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>256</dim>
</port>
<port id="1">
<dim>3</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1</dim>
<dim>1</dim>
<dim>256</dim>
</port>
</output>
</layer>
<layer id="12" type="Result" ...>
<input>
<port id="0">
<dim>1</dim>
<dim>1</dim>
<dim>256</dim>
</port>
</input>
</layer>
</layers>
<edges>
<edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
<edge from-layer="1" from-port="1" to-layer="2" to-port="1"/>
<edge from-layer="2" from-port="2" to-layer="7" to-port="0"/>
<edge from-layer="3" from-port="0" to-layer="7" to-port="1"/>
<edge from-layer="4" from-port="0" to-layer="7" to-port="2"/>
<edge from-layer="5" from-port="1" to-layer="7" to-port="3"/>
<edge from-layer="6" from-port="1" to-layer="7" to-port="4"/>
<edge from-layer="7" from-port="6" to-layer="8" to-port="0"/>
<edge from-layer="7" from-port="5" to-layer="9" to-port="0"/>
<edge from-layer="7" from-port="5" to-layer="11" to-port="0"/>
<edge from-layer="10" from-port="1" to-layer="11" to-port="1"/>
<edge from-layer="11" from-port="2" to-layer="12" to-port="0"/>
</edges>
</body>
</layer>
@endsphinxdirective

View File

@ -1,44 +1,46 @@
# ScatterElementsUpdate {#openvino_docs_ops_movement_ScatterElementsUpdate_3}
@sphinxdirective
**Versioned name**: *ScatterElementsUpdate-3*
**Category**: *Data movement*
**Short description**: Creates a copy of the first input tensor with updated elements specified with second and third input tensors.
**Detailed description**: For each entry in `updates`, the target index in `data` is obtained by combining the corresponding entry in
`indices` with the index of the entry itself: the index-value for dimension equal to `axis` is obtained from the value of the corresponding entry in
`indices` and the index-value for dimension not equal to `axis` is obtained from the index of the entry itself.
**Detailed description**: For each entry in ``updates``, the target index in ``data`` is obtained by combining the corresponding entry in
``indices`` with the index of the entry itself: the index-value for dimension equal to ``axis`` is obtained from the value of the corresponding entry in
``indices`` and the index-value for dimension not equal to ``axis`` is obtained from the index of the entry itself.
For instance, in a 3D tensor case, the update corresponding to the `[i][j][k]` entry is performed as below:
For instance, in a 3D tensor case, the update corresponding to the ``[i][j][k]`` entry is performed as below:
```
output[indices[i][j][k]][j][k] = updates[i][j][k] if axis = 0,
output[i][indices[i][j][k]][k] = updates[i][j][k] if axis = 1,
output[i][j][indices[i][j][k]] = updates[i][j][k] if axis = 2
```
.. code-block:: cpp
`update` tensor dimensions are less or equal to the corresponding `data` tensor dimensions.
output[indices[i][j][k]][j][k] = updates[i][j][k] if axis = 0,
output[i][indices[i][j][k]][k] = updates[i][j][k] if axis = 1,
output[i][j][indices[i][j][k]] = updates[i][j][k] if axis = 2
``updates`` tensor dimensions must be less than or equal to the corresponding ``data`` tensor dimensions.
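A minimal NumPy sketch of the indexing rule above; the helper name is illustrative, and a real implementation would also validate indices and normalize a negative ``axis``:

.. code-block:: python

   import numpy as np

   def scatter_elements_update(data, indices, updates, axis):
       out = data.copy()
       for idx in np.ndindex(indices.shape):
           target = list(idx)
           target[axis] = indices[idx]  # replace only the axis-th coordinate
           out[tuple(target)] = updates[idx]
       return out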
**Attributes**: *ScatterElementsUpdate* does not have attributes.
**Inputs**:
* **1**: `data` tensor of arbitrary rank `r` and of type *T*. **Required.**
* **1**: ``data`` tensor of arbitrary rank ``r`` and of type *T*. **Required.**
* **2**: `indices` tensor with indices of type *T_IND*. The rank of the tensor is equal to the rank of `data` tensor.
All index values are expected to be within bounds `[0, s - 1]` along axis of size `s`. If multiple indices point to the
* **2**: ``indices`` tensor with indices of type *T_IND*. The rank of the tensor is equal to the rank of the ``data`` tensor. All index values are expected to be within bounds ``[0, s - 1]`` along the axis of size ``s``. If multiple indices point to the
same output location, the order of updating the values is undefined. If an index points to a non-existing output
tensor element or is negative, an exception is raised. **Required.**
* **3**: `updates` tensor of shape equal to the shape of `indices` tensor and of type *T*. **Required.**
* **3**: ``updates`` tensor of shape equal to the shape of ``indices`` tensor and of type *T*. **Required.**
* **4**: `axis` tensor with scalar or 1D tensor with one element of type *T_AXIS* specifying axis for scatter.
The value can be in range `[-r, r - 1]` where `r` is the rank of `data`. **Required.**
* **4**: ``axis``. A scalar or 1D tensor with one element of type *T_AXIS*, specifying the axis for scatter.
The value can be in the range ``[-r, r - 1]``, where ``r`` is the rank of ``data``. **Required.**
**Outputs**:
* **1**: tensor with shape equal to `data` tensor of the type *T*.
* **1**: tensor with shape equal to ``data`` tensor of the type *T*.
**Types**
@ -50,38 +52,41 @@ The value can be in range `[-r, r - 1]` where `r` is the rank of `data`. **Requi
**Example**
```xml
<layer ... type="ScatterElementsUpdate">
<input>
<port id="0">
<dim>1000</dim>
<dim>256</dim>
<dim>7</dim>
<dim>7</dim>
</port>
<port id="1">
<dim>125</dim>
<dim>20</dim>
<dim>7</dim>
<dim>6</dim>
</port>
<port id="2">
<dim>125</dim>
<dim>20</dim>
<dim>7</dim>
<dim>6</dim>
</port>
<port id="3"> <!-- value [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="4" precision="FP32">
<dim>1000</dim>
<dim>256</dim>
<dim>7</dim>
<dim>7</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ScatterElementsUpdate">
<input>
<port id="0">
<dim>1000</dim>
<dim>256</dim>
<dim>7</dim>
<dim>7</dim>
</port>
<port id="1">
<dim>125</dim>
<dim>20</dim>
<dim>7</dim>
<dim>6</dim>
</port>
<port id="2">
<dim>125</dim>
<dim>20</dim>
<dim>7</dim>
<dim>6</dim>
</port>
<port id="3"> < !-- value [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="4" precision="FP32">
<dim>1000</dim>
<dim>256</dim>
<dim>7</dim>
<dim>7</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,94 +1,101 @@
# ScatterNDUpdate {#openvino_docs_ops_movement_ScatterNDUpdate_3}
@sphinxdirective
**Versioned name**: *ScatterNDUpdate-3*
**Category**: *Data movement*
**Short description**: Creates a copy of the first input tensor with updated elements specified with second and third input tensors.
**Detailed description**: The operation produces a copy of `data` tensor and updates its value to values specified
by `updates` at specific index positions specified by `indices`. The output shape is the same as the shape of `data`.
`indices` tensor must not have duplicate entries. In case of duplicate entries in `indices` the result is undefined.
**Detailed description**: The operation produces a copy of ``data`` tensor and updates its value to values specified
by ``updates`` at specific index positions specified by ``indices``. The output shape is the same as the shape of ``data``.
``indices`` tensor must not have duplicate entries. In case of duplicate entries in ``indices`` the result is undefined.
The last dimension of `indices` can be at most the rank of `data.shape`.
The last dimension of `indices` corresponds to indices into elements if `indices.shape[-1]` = `data.shape.rank` or slices
if `indices.shape[-1]` < `data.shape.rank`. `updates` is a tensor with shape `indices.shape[:-1] + data.shape[indices.shape[-1]:]`
The last dimension of ``indices`` can be at most the rank of ``data``.
The last dimension of ``indices`` corresponds to indices into elements if ``indices.shape[-1]`` = ``data.shape.rank`` or to slices
if ``indices.shape[-1]`` < ``data.shape.rank``. ``updates`` is a tensor with shape ``indices.shape[:-1] + data.shape[indices.shape[-1]:]``.
Example 1 that shows update of four single elements in `data`:
Example 1 that shows update of four single elements in ``data``:
```
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
```
.. code-block:: cpp
Example 2 that shows update of two slices of `4x4` shape in `data`:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2 that shows update of two slices of ``4x4`` shape in ``data``:
.. code-block:: cpp
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
```
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
```
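
Both examples can be reproduced with a short NumPy sketch of the update rule (for illustration only; ``scatter_nd_update`` is a hypothetical helper):

.. code-block:: python

import numpy as np

def scatter_nd_update(data, indices, updates):
    output = data.copy()
    num = int(np.prod(indices.shape[:-1]))    # number of index entries
    flat_idx = indices.reshape(num, indices.shape[-1])
    # Each update entry has the shape of the addressed element or slice.
    flat_upd = updates.reshape((num,) + data.shape[indices.shape[-1]:])
    for i in range(num):
        output[tuple(flat_idx[i])] = flat_upd[i]
    return output

data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])
print(scatter_nd_update(data, indices, updates))  # [ 1 11  3 10  9  6  7 12]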
**Attributes**: *ScatterNDUpdate* does not have attributes.
**Inputs**:
* **1**: `data` tensor of arbitrary rank `r` >= 1 and of type *T*. **Required.**
* **1**: ``data`` tensor of arbitrary rank ``r`` >= 1 and of type *T*. **Required.**
* **2**: `indices` tensor with indices of arbitrary rank `q` >= 1 and of type *T_IND*. All index values `i_j` in index entry `(i_0, i_1, ...,i_k)` (where `k = indices.shape[-1]`) must be within bounds `[0, s_j - 1]` where `s_j = data.shape[j]`. `k` must be at most `r`. **Required.**
* **2**: ``indices`` tensor with indices of arbitrary rank ``q`` >= 1 and of type *T_IND*. All index values ``i_j`` in index entry ``(i_0, i_1, ...,i_k)`` (where ``k = indices.shape[-1]``) must be within bounds ``[0, s_j - 1]`` where ``s_j = data.shape[j]``. ``k`` must be at most ``r``. **Required.**
* **3**: `updates` tensor of rank `r - indices.shape[-1] + q - 1` of type *T*. If expected `updates` rank is 0D it can be a tensor with single element. **Required.**
* **3**: ``updates`` tensor of rank ``r - indices.shape[-1] + q - 1`` of type *T*. If expected ``updates`` rank is 0D it can be a tensor with single element. **Required.**
**Outputs**:
* **1**: tensor with shape equal to `data` tensor of the type *T*.
* **1**: tensor of type *T* with the same shape as the ``data`` tensor.
**Types**
* *T*: any numeric type.
* *T_IND*: `int32` or `int64`
* *T_IND*: ``int32`` or ``int64``
**Example**
```xml
<layer ... type="ScatterNDUpdate">
<input>
<port id="0">
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="1">
<dim>25</dim>
<dim>125</dim>
<dim>3</dim>
</port>
<port id="2">
<dim>25</dim>
<dim>125</dim>
<dim>15</dim>
</port>
</input>
<output>
<port id="3">
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ScatterNDUpdate">
<input>
<port id="0">
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="1">
<dim>25</dim>
<dim>125</dim>
<dim>3</dim>
</port>
<port id="2">
<dim>25</dim>
<dim>125</dim>
<dim>15</dim>
</port>
</input>
<output>
<port id="3">
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,47 +1,46 @@
# ScatterUpdate {#openvino_docs_ops_movement_ScatterUpdate_3}
@sphinxdirective
**Versioned name**: *ScatterUpdate-3*
**Category**: *Data movement*
**Short description**: *ScatterUpdate* creates a copy of the first input tensor with updated elements specified with second and third input tensors.
**Detailed description**: *ScatterUpdate* creates a copy of the first input tensor with updated elements in positions specified with `indices` input
and values specified with `updates` tensor starting from the dimension with index `axis`. For the `data` tensor of shape \f$[d_0,\;d_1,\;\dots,\;d_n]\f$,
`indices` tensor of shape \f$[i_0,\;i_1,\;\dots,\;i_k]\f$ and `updates` tensor of shape
\f$[d_0,\;d_1,\;\dots,\;d_{axis - 1},\;i_0,\;i_1,\;\dots,\;i_k,\;d_{axis + 1},\;\dots, d_n]\f$ the operation computes
for each `m, n, ..., p` of the `indices` tensor indices:
**Detailed description**: *ScatterUpdate* creates a copy of the first input tensor with updated elements in positions specified with ``indices`` input
and values specified with ``updates`` tensor starting from the dimension with index ``axis``. For the ``data`` tensor of shape :math:`[d_0,\;d_1,\;\dots,\;d_n]`, ``indices`` tensor of shape :math:`[i_0,\;i_1,\;\dots,\;i_k]` and ``updates`` tensor of shape :math:`[d_0,\;d_1,\;\dots,\;d_{axis - 1},\;i_0,\;i_1,\;\dots,\;i_k,\;d_{axis + 1},\;\dots, d_n]` the operation computes for each ``m, n, ..., p`` of the ``indices`` tensor indices:
.. math::
\f[data[\dots,\;indices[m,\;n,\;\dots,\;p],\;\dots] = updates[\dots,\;m,\;n,\;\dots,\;p,\;\dots]\f]
data[\dots,\;indices[m,\;n,\;\dots,\;p],\;\dots] = updates[\dots,\;m,\;n,\;\dots,\;p,\;\dots]
where first \f$\dots\f$ in the `data` corresponds to \f$[d_0,\;\dots,\;d_{axis - 1}]\f$ dimensions, last\f$\dots\f$ in the `data` corresponds to the
`rank(data) - (axis + 1)` dimensions.
where the first :math:`\dots` in ``data`` corresponds to the :math:`[d_0,\;\dots,\;d_{axis - 1}]` dimensions, and the last :math:`\dots` corresponds to the trailing ``rank(data) - (axis + 1)`` dimensions.
Several examples for the case when ``axis = 0`` (see also the sketch after this list):
1. `indices` is a \f$0\f$D tensor: \f$data[indices,\;\dots] = updates[\dots]\f$
2. `indices` is a \f$1\f$D tensor (\f$\forall_{i}\f$): \f$data[indices[i],\;\dots] = updates[i,\;\dots]\f$
3. `indices` is a \f$N\f$D tensor (\f$\forall_{i,\;\dots,\;j}\f$): \f$data[indices[i],\;\dots,\;j],\;\dots] = updates[i,\;\dots,\;j,\;\dots]\f$
1. ``indices`` is a :math:`0` D tensor: :math:`data[indices,\;\dots] = updates[\dots]`
2. ``indices`` is a :math:`1` D tensor (:math:`\forall_{i}`): :math:`data[indices[i],\;\dots] = updates[i,\;\dots]`
3. ``indices`` is a :math:`N` D tensor (:math:`\forall_{i,\;\dots,\;j}`): :math:`data[indices[i],\;\dots,\;j],\;\dots] = updates[i,\;\dots,\;j,\;\dots]`
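
These semantics can be sketched in NumPy, assuming a non-negative ``axis`` (for illustration only; ``scatter_update`` is a hypothetical helper):

.. code-block:: python

import numpy as np

def scatter_update(data, indices, updates, axis):
    output = data.copy()
    # Collapse the indices dimensions of `updates` into a single axis at `axis`.
    upd = updates.reshape(data.shape[:axis] + (-1,) + data.shape[axis + 1:])
    out_view = np.moveaxis(output, axis, 0)   # view into `output`
    upd_view = np.moveaxis(upd, axis, 0)
    for i, idx in enumerate(indices.reshape(-1)):
        out_view[idx] = upd_view[i]           # assign a whole slice
    return output

data = np.full((3, 5), -1.0)
indices = np.array([0, 2])
updates = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 2.0]])
print(scatter_update(data, indices, updates, axis=1))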
**Attributes**: *ScatterUpdate* does not have attributes.
**Inputs**:
* **1**: `data` tensor of arbitrary rank `r` and type *T_NUMERIC*. **Required.**
* **1**: ``data`` tensor of arbitrary rank ``r`` and type *T_NUMERIC*. **Required.**
* **2**: `indices` tensor with indices of type *T_IND*.
All index values are expected to be within bounds `[0, s - 1]` along the axis of size `s`. If multiple indices point to the
* **2**: ``indices`` tensor with indices of type *T_IND*. All index values are expected to be within bounds ``[0, s - 1]`` along the axis of size ``s``. If multiple indices point to the
same output location, the order of updating the values is undefined. If an index points to a non-existing output
tensor element or is negative, then an exception is raised. **Required.**
* **3**: `updates` tensor of type *T_NUMERIC* and rank equal to `rank(indices) + rank(data) - 1` **Required.**
* **3**: ``updates`` tensor of type *T_NUMERIC* and rank equal to ``rank(indices) + rank(data) - 1`` **Required.**
* **4**: `axis` tensor with scalar or 1D tensor with one element of type *T_AXIS* specifying axis for scatter.
The value can be in the range `[ -r, r - 1]`, where `r` is the rank of `data`. **Required.**
* **4**: ``axis`` - a scalar or a 1D tensor with one element of type *T_AXIS* that specifies the axis for scatter.
The value can be in the range ``[-r, r - 1]``, where ``r`` is the rank of ``data``. **Required.**
**Outputs**:
* **1**: tensor with shape equal to `data` tensor of the type *T_NUMERIC*.
* **1**: tensor of type *T_NUMERIC* with the same shape as the ``data`` tensor.
**Types**
@@ -55,66 +54,70 @@ The value can be in the range `[ -r, r - 1]`, where `r` is the rank of `data`. *
*Example 1*
```xml
<layer ... type="ScatterUpdate">
<input>
<port id="0"> <!-- data -->
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="1"> <!-- indices -->
<dim>125</dim>
<dim>20</dim>
</port>
<port id="2"> <!-- udpates -->
<dim>1000</dim>
<dim>125</dim>
<dim>20</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="3"> <!-- axis -->
<dim>1</dim> <!-- value [1] -->
</port>
</input>
<output>
<port id="4" precision="FP32"> <!-- output -->
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ScatterUpdate">
<input>
<port id="0"> < !-- data -->
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="1"> < !-- indices -->
<dim>125</dim>
<dim>20</dim>
</port>
<port id="2"> < !-- udpates -->
<dim>1000</dim>
<dim>125</dim>
<dim>20</dim>
<dim>10</dim>
<dim>15</dim>
</port>
<port id="3"> < !-- axis -->
<dim>1</dim> < !-- value [1] -->
</port>
</input>
<output>
<port id="4" precision="FP32"> < !-- output -->
<dim>1000</dim>
<dim>256</dim>
<dim>10</dim>
<dim>15</dim>
</port>
</output>
</layer>
*Example 2*
```xml
<layer ... type="ScatterUpdate">
<input>
<port id="0"> <!-- data -->
<dim>3</dim> <!-- {{-1.0f, 1.0f, -1.0f, 3.0f, 4.0f}, -->
<dim>5</dim> <!-- {-1.0f, 6.0f, -1.0f, 8.0f, 9.0f}, -->
</port> <!-- {-1.0f, 11.0f, 1.0f, 13.0f, 14.0f}} -->
<port id="1"> <!-- indices -->
<dim>2</dim> <!-- {0, 2} -->
</port>
<port id="2"> <!-- udpates -->
<dim>3</dim> <!-- {1.0f, 1.0f} -->
<dim>2</dim> <!-- {1.0f, 1.0f} -->
</port> <!-- {1.0f, 2.0f} -->
<port id="3"> <!-- axis -->
<dim>1</dim> <!-- {1} -->
</port>
</input>
<output>
<port id="4"> <!-- output -->
<dim>3</dim> <!-- {{1.0f, 1.0f, 1.0f, 3.0f, 4.0f}, -->
<dim>5</dim> <!-- {1.0f, 6.0f, 1.0f, 8.0f, 9.0f}, -->
</port> <!-- {1.0f, 11.0f, 2.0f, 13.0f, 14.0f}} -->
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ScatterUpdate">
<input>
<port id="0"> < !-- data -->
<dim>3</dim> < !-- {{-1.0f, 1.0f, -1.0f, 3.0f, 4.0f}, -->
<dim>5</dim> < !-- {-1.0f, 6.0f, -1.0f, 8.0f, 9.0f}, -->
</port> < !-- {-1.0f, 11.0f, 1.0f, 13.0f, 14.0f}} -->
<port id="1"> < !-- indices -->
<dim>2</dim> < !-- {0, 2} -->
</port>
<port id="2"> < !-- udpates -->
<dim>3</dim> < !-- {1.0f, 1.0f} -->
<dim>2</dim> < !-- {1.0f, 1.0f} -->
</port> < !-- {1.0f, 2.0f} -->
<port id="3"> < !-- axis -->
<dim>1</dim> < !-- {1} -->
</port>
</input>
<output>
<port id="4"> < !-- output -->
<dim>3</dim> < !-- {{1.0f, 1.0f, 1.0f, 3.0f, 4.0f}, -->
<dim>5</dim> < !-- {1.0f, 6.0f, 1.0f, 8.0f, 9.0f}, -->
</port> < !-- {1.0f, 11.0f, 2.0f, 13.0f, 14.0f}} -->
</output>
</layer>
@endsphinxdirective

View File

@ -1,5 +1,7 @@
# ShuffleChannels {#openvino_docs_ops_movement_ShuffleChannels_1}
@sphinxdirective
**Versioned name**: *ShuffleChannels-1*
**Name**: *ShuffleChannels*
@@ -10,50 +12,53 @@
**Detailed description**:
Input tensor of `data_shape` is always interpreted as a 4D tensor with the following shape:
Input tensor of ``data_shape`` is always interpreted as a 4D tensor with the following shape:
dim 0: data_shape[0] * data_shape[1] * ... * data_shape[axis-1]
(or 1 if axis == 0)
dim 1: group
dim 2: data_shape[axis] / group
dim 3: data_shape[axis+1] * data_shape[axis+2] * ... * data_shape[data_shape.size()-1]
(or 1 if axis points to last dimension)
.. code-block:: cpp
dim 0: data_shape[0] * data_shape[1] * ... * data_shape[axis-1]
(or 1 if axis == 0)
dim 1: group
dim 2: data_shape[axis] / group
dim 3: data_shape[axis+1] * data_shape[axis+2] * ... * data_shape[data_shape.size()-1]
(or 1 if axis points to last dimension)
The dimensions preceding and following `axis` are flattened and reshaped back to the original shape after the channel shuffle.
The dimensions preceding and following ``axis`` are flattened and reshaped back to the original shape after the channel shuffle.
The operation is equivalent to the following transformation of the input tensor `x` of shape `[N, C, H, W]` and `axis = 1`:
The operation is equivalent to the following transformation of the input tensor ``x`` of shape ``[N, C, H, W]`` and ``axis = 1``:
\f[
x' = reshape(x, [N, group, C / group, H * W])\\
x'' = transpose(x', [0, 2, 1, 3])\\
y = reshape(x'', [N, C, H, W])\\
\f]
.. math::
where `group` is the layer attribute described below.
x' = reshape(x, [N, group, C / group, H * W])\\
x'' = transpose(x', [0, 2, 1, 3])\\
y = reshape(x'', [N, C, H, W])\\
where ``group`` is the layer attribute described below.
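
For the 4D case above, the transformation can be verified with a few lines of NumPy (a sketch assuming ``axis = 1``):

.. code-block:: python

import numpy as np

def shuffle_channels(x, group):
    # reshape -> transpose -> reshape, exactly as in the formula above
    n, c, h, w = x.shape
    y = x.reshape(n, group, c // group, h * w)
    y = y.transpose(0, 2, 1, 3)
    return y.reshape(n, c, h, w)

x = np.arange(6).reshape(1, 6, 1, 1)
print(shuffle_channels(x, group=3).flatten())   # [0 2 4 1 3 5]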
**Attributes**:
* *axis*
* **Description**: *axis* specifies the index of a channel dimension.
* **Range of values**: an integer number in the range `[-rank(data_shape), rank(data_shape) - 1]`
* **Type**: `int`
* **Range of values**: an integer number in the range ``[-rank(data_shape), rank(data_shape) - 1]``
* **Type**: ``int``
* **Default value**: 1
* **Required**: *no*
* *group*
* **Description**: *group* specifies the number of groups to split the channel dimension into. This number must evenly divide the channel dimension size.
* **Range of values**: a positive integer in the range `[1, data_shape[axis]]`
* **Type**: `int`
* **Range of values**: a positive integer in the range ``[1, data_shape[axis]]``
* **Type**: ``int``
* **Default value**: 1
* **Required**: *no*
**Inputs**:
* **1**: `data` input tensor of type *T* and rank greater or equal to 1. **Required.**
* **1**: ``data`` input tensor of type *T* and rank greater or equal to 1. **Required.**
**Outputs**:
@@ -65,24 +70,27 @@ where `group` is the layer attribute described below.
**Example**
```xml
<layer ... type="ShuffleChannels" ...>
<data group="3" axis="1"/>
<input>
<port id="0">
<dim>5</dim>
<dim>12</dim>
<dim>200</dim>
<dim>400</dim>
</port>
</input>
<output>
<port id="1">
<dim>5</dim>
<dim>12</dim>
<dim>200</dim>
<dim>400</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ShuffleChannels" ...>
<data group="3" axis="1"/>
<input>
<port id="0">
<dim>5</dim>
<dim>12</dim>
<dim>200</dim>
<dim>400</dim>
</port>
</input>
<output>
<port id="1">
<dim>5</dim>
<dim>12</dim>
<dim>200</dim>
<dim>400</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,4 +1,6 @@
## Slice {#openvino_docs_ops_movement_Slice_8}
# Slice {#openvino_docs_ops_movement_Slice_8}
@sphinxdirective
**Versioned name**: *Slice-8*
@@ -6,53 +8,57 @@
**Short description**: *Slice* operation extracts a slice of the input tensor.
**Detailed Description**: *Slice* operation selects a region of values from the `data` tensor.
Selected values start at indexes provided in the `start` input (inclusively) and end
at indexes provided in the `stop` input (exclusively).
**Detailed Description**: *Slice* operation selects a region of values from the ``data`` tensor.
Selected values start at indexes provided in the ``start`` input (inclusively) and end
at indexes provided in the ``stop`` input (exclusively).
The `step` input allows subsampling of `data`, selecting every *n*-th element,
where `n` is equal to `step` element for corresponding axis.
Negative `step` value indicates slicing backwards, so the sequence along the corresponding axis is reversed in the output tensor.
To select all values contiguously set `step` to `1` for each axis.
The ``step`` input allows subsampling of ``data``, selecting every *n*-th element,
where ``n`` is equal to the ``step`` element for the corresponding axis.
A negative ``step`` value indicates slicing backwards, so the sequence along the corresponding axis is reversed in the output tensor.
To select all values contiguously, set ``step`` to ``1`` for each axis.
The optional `axes` input allows specifying slice indexes only on selected axes.
The optional ``axes`` input allows specifying slice indexes only on selected axes.
Other axes will not be affected and will be output in full.
The rules follow Python slicing semantics: `data[start:stop:step]`.
The rules follow Python slicing semantics: ``data[start:stop:step]``.
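
Because the rules match Python slicing, the operation can be sketched directly in NumPy (for illustration only; negative ``axes`` values also work here because Python lists accept negative indexes):

.. code-block:: python

import numpy as np

def slice_op(data, start, stop, step, axes=None):
    if axes is None:
        axes = list(range(len(start)))        # default: leading axes
    slices = [slice(None)] * data.ndim        # unspecified axes stay full
    for b, e, s, a in zip(start, stop, step, axes):
        slices[a] = slice(b, e, s)
    return data[tuple(slices)]

data = np.arange(10)
print(slice_op(data, [1], [8], [2]))          # [1 3 5 7]
print(slice_op(data, [9], [-11], [-1]))       # [9 8 7 6 5 4 3 2 1 0]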
**Attributes**: *Slice* operation has no attributes.
**Inputs**
* **1**: `data` - tensor (to be sliced) of type *T* and shape rank greater or equal to 1. **Required.**
* **1**: ``data`` - tensor (to be sliced) of type *T* and shape rank greater or equal to 1. **Required.**
* **2**: `start` - 1D tensor of type *T_IND*. Indices corresponding to axes in `data`.
Defines the starting coordinate of the slice in the `data` tensor.
* **2**: ``start`` - 1D tensor of type *T_IND*. Indices corresponding to axes in ``data``.
Defines the starting coordinate of the slice in the ``data`` tensor.
A negative index value represents counting elements from the end of that dimension.
A value larger than the size of a dimension is silently clamped. **Required.**
* **3**: `stop` - 1D, type *T_IND*, similar to `start`.
* **3**: ``stop`` - 1D, type *T_IND*, similar to ``start``.
Defines the coordinate of the opposite vertex of the slice, or where the slice ends.
Stop indexes are exclusive, which means values lying on the ending edge are
not included in the output slice.
To slice to the end of a dimension of unknown size `INT_MAX`
may be used (or `INT_MIN` if slicing backwards). **Required.**
To slice to the end of a dimension of unknown size ``INT_MAX``
may be used (or ``INT_MIN`` if slicing backwards). **Required.**
* **4**: ``step`` - 1D tensor of type *T_IND* and the same shape as ``start`` and ``stop``.
* **4**: `step` - 1D tensor of type *T_IND* and the same shape as `start` and `stop`.
Integer value that specifies the increment between each index used in slicing.
Value cannot be `0`, negative value indicates slicing backwards. **Required.**
The value cannot be ``0``; a negative value indicates slicing backwards. **Required.**
* **5**: `axes` - 1D tensor of type *T_AXIS*.
Optional 1D tensor indicating which dimensions the values in `start` and `stop` apply to.
Negative value means counting dimensions from the end. The range is `[-r, r - 1]`, where `r` is the rank of the `data` input tensor.
* **5**: ``axes`` - 1D tensor of type *T_AXIS*.
Optional 1D tensor indicating which dimensions the values in ``start`` and ``stop`` apply to.
Negative value means counting dimensions from the end. The range is ``[-r, r - 1]``, where ``r`` is the rank of the ``data`` input tensor.
Values are required to be unique. If a particular axis is unspecified, it will be output in full and not sliced.
Default value: `[0, 1, 2, ..., start.shape[0] - 1]`. **Optional.**
Default value: ``[0, 1, 2, ..., start.shape[0] - 1]``. **Optional.**
The `start`, `stop`, `step`, and `axes` inputs must have the same number of elements.
The ``start``, ``stop``, ``step``, and ``axes`` inputs must have the same number of elements.
**Outputs**
* **1**: Tensor of type *T* with values of the selected slice. The shape of the output tensor has the same rank as the shape of `data` input and reduced dimensions according to the values specified by `start`, `stop`, and `step` inputs.
* **1**: Tensor of type *T* with values of the selected slice. The shape of the output tensor has the same rank as the shape of ``data`` input and reduced dimensions according to the values specified by ``start``, ``stop``, and ``step`` inputs.
**Types**
@@ -63,357 +69,368 @@ Number of elements in `start`, `stop`, `step`, and `axes` inputs are required to
**Examples**
*Example 1: basic slicing*
Example 1: basic slicing
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [1, 2, 3, 4, 5, 6, 7] -->
<dim>7</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
*Example 2: basic slicing, `axes` default*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [1] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="4"> <!-- output: [1, 2, 3, 4, 5, 6, 7] -->
<dim>7</dim>
</port>
</output>
</layer>
```
*Example 3: basic slicing, `step: [2]`*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [2] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [1, 3, 5, 7] -->
<dim>4</dim>
</port>
</output>
</layer>
```
*Example 4: `start` and `stop` out of the dimension size, `step: [1]`*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [-100] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [100] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
</output>
</layer>
```
</port>
<port id="1"> < !-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [1, 2, 3, 4, 5, 6, 7] -->
<dim>7</dim>
</port>
</output>
</layer>
*Example 5: slicing backward all elements, `step: [-1]`, `stop: [-11]`*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [-11] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] -->
Example 2: basic slicing, ``axes`` default
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
</output>
</layer>
```
</port>
<port id="1"> < !-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [1] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="4"> < !-- output: [1, 2, 3, 4, 5, 6, 7] -->
<dim>7</dim>
</port>
</output>
</layer>
*Example 6: slicing backward, `step: [-1]`, `stop: [0]`*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [0] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1] -->
<dim>9</dim>
</port>
</output>
</layer>
```
Example 3: basic slicing, ``step: [2]``
*Example 7: slicing backward, `step: [-1]`, `stop: [-10]`*
.. code-block:: cpp
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [-10] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1] -->
<dim>9</dim>
</port>
</output>
</layer>
```
*Example 8: slicing backward, `step: [-2]`*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [-11] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [-2] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [9, 7, 5, 3, 1] -->
<dim>5</dim>
</port>
</output>
</layer>
```
*Example 9: `start` and `stop` out of the dimension size, slicing backward*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> <!-- start: [100] -->
<dim>1</dim>
</port>
<port id="2"> <!-- stop: [-100] -->
<dim>1</dim>
</port>
<port id="3"> <!-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> <!-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] -->
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
</output>
</layer>
```
</port>
<port id="1"> < !-- start: [1] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [8] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [2] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [1, 3, 5, 7] -->
<dim>4</dim>
</port>
</output>
</layer>
*Example 10: slicing 2D tensor, all axes specified*
Example 4: ``start`` and ``stop`` out of the dimension size, ``step: [1]``
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data: data: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] -->
<dim>2</dim>
<dim>5</dim>
</port>
<port id="1"> <!-- start: [0, 1] -->
<dim>2</dim>
</port>
<port id="2"> <!-- stop: [2, 4] -->
<dim>2</dim>
</port>
<port id="3"> <!-- step: [1, 2] -->
<dim>2</dim>
</port>
<port id="4"> <!-- axes: [0, 1] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> <!-- output: [1, 3, 6, 8] -->
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [-100] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [100] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
</output>
</layer>
Example 5: slicing backward all elements, ``step: [-1]``, ``stop: [-11]``
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [-11] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] -->
<dim>10</dim>
</port>
</output>
</layer>
Example 6: slicing backward, ``step: [-1]``, ``stop: [0]``
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [0] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1] -->
<dim>9</dim>
</port>
</output>
</layer>
Example 7: slicing backward, ``step: [-1]``, ``stop: [-10]``
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [-10] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1] -->
<dim>9</dim>
</port>
</output>
</layer>
Example 8: slicing backward, ``step: [-2]``
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [9] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [-11] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [-2] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [9, 7, 5, 3, 1] -->
<dim>5</dim>
</port>
</output>
</layer>
Example 9: ``start`` and ``stop`` out of the dimension size, slicing backward
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -->
<dim>10</dim>
</port>
<port id="1"> < !-- start: [100] -->
<dim>1</dim>
</port>
<port id="2"> < !-- stop: [-100] -->
<dim>1</dim>
</port>
<port id="3"> < !-- step: [-1] -->
<dim>1</dim>
</port>
<port id="4"> < !-- axes: [0] -->
<dim>1</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] -->
<dim>10</dim>
</port>
</output>
</layer>
Example 10: slicing 2D tensor, all axes specified
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data: data: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] -->
<dim>2</dim>
<dim>5</dim>
</port>
<port id="1"> < !-- start: [0, 1] -->
<dim>2</dim>
</port>
</output>
</layer>
```
</port>
<port id="2"> < !-- stop: [2, 4] -->
<dim>2</dim>
</port>
<port id="3"> < !-- step: [1, 2] -->
<dim>2</dim>
</port>
<port id="4"> < !-- axes: [0, 1] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> < !-- output: [1, 3, 6, 8] -->
<dim>2</dim>
<dim>2</dim>
</port>
</output>
</layer>
*Example 11: slicing 3D tensor, all axes specified*
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data -->
<dim>20</dim>
<dim>10</dim>
<dim>5</dim>
</port>
<port id="1"> <!-- start: [0, 0, 0] -->
<dim>2</dim>
</port>
<port id="2"> <!-- stop: [4, 10, 5] -->
<dim>2</dim>
</port>
<port id="3"> <!-- step: [1, 1, 1] -->
<dim>2</dim>
</port>
<port id="4"> <!-- axes: [0, 1, 2] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> <!-- output -->
<dim>4</dim>
Example 11: slicing 3D tensor, all axes specified
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data -->
<dim>20</dim>
<dim>10</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
</port>
<port id="1"> < !-- start: [0, 0, 0] -->
<dim>2</dim>
</port>
<port id="2"> < !-- stop: [4, 10, 5] -->
<dim>2</dim>
</port>
<port id="3"> < !-- step: [1, 1, 1] -->
<dim>2</dim>
</port>
<port id="4"> < !-- axes: [0, 1, 2] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> < !-- output -->
<dim>4</dim>
<dim>10</dim>
<dim>5</dim>
</port>
</output>
</layer>
*Example 12: slicing 3D tensor, last axes default*
Example 12: slicing 3D tensor, last axes default
```xml
<layer id="1" type="Slice" ...>
<input>
<port id="0"> <!-- data -->
<dim>20</dim>
<dim>10</dim>
<dim>5</dim>
</port>
<port id="1"> <!-- start: [0, 0] -->
<dim>2</dim>
</port>
<port id="2"> <!-- stop: [4, 10] -->
<dim>2</dim>
</port>
<port id="3"> <!-- step: [1, 1] -->
<dim>2</dim>
</port>
<port id="4"> <!-- axes: [0, 1] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> <!-- output -->
<dim>4</dim>
.. code-block:: cpp
<layer id="1" type="Slice" ...>
<input>
<port id="0"> < !-- data -->
<dim>20</dim>
<dim>10</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
</port>
<port id="1"> < !-- start: [0, 0] -->
<dim>2</dim>
</port>
<port id="2"> < !-- stop: [4, 10] -->
<dim>2</dim>
</port>
<port id="3"> < !-- step: [1, 1] -->
<dim>2</dim>
</port>
<port id="4"> < !-- axes: [0, 1] -->
<dim>2</dim>
</port>
</input>
<output>
<port id="5"> < !-- output -->
<dim>4</dim>
<dim>10</dim>
<dim>5</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,42 +1,65 @@
# SpaceToBatch {#openvino_docs_ops_movement_SpaceToBatch_2}
@sphinxdirective
**Versioned name**: *SpaceToBatch-2*
**Category**: *Data movement*
**Short description**: The *SpaceToBatch* operation divides "spatial" dimensions `[1, ..., N - 1]` of the `data` input into a grid of blocks of shape `block_shape`, and interleaves these blocks with the batch dimension (0) such that in the output, the spatial dimensions `[1, ..., N - 1]` correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to `pads_begin` and `pads_end`.
**Short description**: The *SpaceToBatch* operation divides "spatial" dimensions ``[1, ..., N - 1]`` of the ``data`` input into a grid of blocks of shape ``block_shape``, and interleaves these blocks with the batch dimension (0) such that in the output, the spatial dimensions ``[1, ..., N - 1]`` correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to ``pads_begin`` and ``pads_end``.
**Detailed description**:
The operation is equivalent to the following transformation of the input tensor `data` of shape `[batch, D_1, D_2 ... D_{N - 1}]` and `block_shape`, `pads_begin`, `pads_end` of shapes `[N]` to *Y* output tensor.
The operation is equivalent to the following transformation of the input tensor ``data`` of shape ``[batch, D_1, D_2 ... D_{N - 1}]`` and ``block_shape``, ``pads_begin``, ``pads_end`` of shapes ``[N]`` to *Y* output tensor.
Zero-pad the start and end of dimensions \f$[D_0, \dots, D_{N - 1}]\f$ of the input according to `pads_begin` and `pads_end`:
Zero-pad the start and end of dimensions :math:`[D_0, \dots, D_{N - 1}]` of the input according to ``pads_begin`` and ``pads_end``:
\f[x = [batch + P_0, D_1 + P_1, D_2 + P_2, \dots, D_{N - 1} + P_{N - 1}]\f]
\f[x' = reshape(x, [batch, \frac{D_1 + P_1}{B_1}, B_1, \frac{D_2 + P_2}{B_2}, B_2, \dots, \frac{D_{N - 1} + P_{N - 1}}{B_{N - 1}}, B_{N - 1}])\f]
\f[x'' = transpose(x', [2, 4, \dots, (N - 1) + (N - 1), 0, 1, 3, \dots, N + (N - 1)])\f]
\f[y = reshape(x'', [batch \times B_1 \times \dots \times B_{N - 1}, \frac{D_1 + P_1}{B_1}, \frac{D_2 + P_2}{B_2}, \dots, \frac{D_{N - 1} + P_{N - 1}}{B_{N - 1}}]\f]
.. math::
x = [batch + P_0, D_1 + P_1, D_2 + P_2, \dots, D_{N - 1} + P_{N - 1}]
.. math::
x' = reshape(x, [batch, \frac{D_1 + P_1}{B_1}, B_1, \frac{D_2 + P_2}{B_2}, B_2, \dots, \frac{D_{N - 1} + P_{N - 1}}{B_{N - 1}}, B_{N - 1}])
.. math::
x'' = transpose(x', [2, 4, \dots, (N - 1) + (N - 1), 0, 1, 3, \dots, N + (N - 1)])
.. math::
y = reshape(x'', [batch \times B_1 \times \dots \times B_{N - 1}, \frac{D_1 + P_1}{B_1}, \frac{D_2 + P_2}{B_2}, \dots, \frac{D_{N - 1} + P_{N - 1}}{B_{N - 1}}]
where
- \f$P_i\f$ = pads_begin[i] + pads_end[i]
- \f$B_i\f$ = block_shape[i]
- \f$P_0\f$ for batch dimension is expected to be 0 (no-padding)
- \f$B_0\f$ for batch is ignored
* :math:`P_i` = pads_begin[i] + pads_end[i]
* :math:`B_i` = block_shape[i]
* :math:`P_0` for batch dimension is expected to be 0 (no-padding)
* :math:`B_0` for batch is ignored
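
The four steps above can be written as one NumPy reference sketch (for illustration only; ``space_to_batch`` is a hypothetical helper):

.. code-block:: python

import numpy as np

def space_to_batch(data, block_shape, pads_begin, pads_end):
    padded = np.pad(data, list(zip(pads_begin, pads_end)))   # zero padding
    n = padded.ndim
    shape = [padded.shape[0]]
    for i in range(1, n):                     # split each spatial dimension
        shape += [padded.shape[i] // block_shape[i], block_shape[i]]
    x = padded.reshape(shape)
    # block-offset axes first, then batch, then the reduced spatial axes
    perm = list(range(2, 2 * n - 1, 2)) + [0] + list(range(1, 2 * n - 1, 2))
    x = x.transpose(perm)
    out = [padded.shape[0] * int(np.prod(block_shape[1:]))]
    out += [padded.shape[i] // block_shape[i] for i in range(1, n)]
    return x.reshape(out)

data = np.arange(1, 5).reshape(1, 2, 2)
print(space_to_batch(data, [1, 2, 2], [0, 0, 0], [0, 0, 0]).ravel())  # [1 2 3 4]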
**Attributes**
No attributes available.
**Inputs**
* **1**: `data` - input N-D tensor `[batch, D_1, D_2 ... D_{N - 1}]` of *T1* type with rank >= 2. **Required.**
* **2**: `block_shape` - input 1-D tensor of *T2* type with shape `[N]` that is equal to the size of `data` input shape. All values must be >= 1. `block_shape[0]` is expected to be 1. **Required.**
* **3**: `pads_begin` - input 1-D tensor of *T2* type with shape `[N]` that is equal to the size of `data` input shape. All values must be non-negative. `pads_begin` specifies the padding for the beginning along each axis of `data` input. It is required that `block_shape[i]` divides `data_shape[i] + pads_begin[i] + pads_end[i]`. `pads_begin[0]` is expected to be 0. **Required.**
* **4**: `pads_end` - input 1-D tensor of *T2* type with shape `[N]` that is equal to the size of `data` input shape. All values must be non-negative. `pads_end` specifies the padding for the ending along each axis of `data` input. It is required that `block_shape[i]` divides `data_shape[i] + pads_begin[i] + pads_end[i]`. `pads_end[0]` is expected to be 0. **Required.**
* **1**: ``data`` - input N-D tensor ``[batch, D_1, D_2 ... D_{N - 1}]`` of *T1* type with rank >= 2. **Required.**
* **2**: ``block_shape`` - input 1-D tensor of *T2* type with shape ``[N]`` that is equal to the size of ``data`` input shape. All values must be >= 1. ``block_shape[0]`` is expected to be 1. **Required.**
* **3**: ``pads_begin`` - input 1-D tensor of *T2* type with shape ``[N]`` that is equal to the size of ``data`` input shape. All values must be non-negative. ``pads_begin`` specifies the padding for the beginning along each axis of ``data`` input. It is required that ``block_shape[i]`` divides ``data_shape[i] + pads_begin[i] + pads_end[i]``. ``pads_begin[0]`` is expected to be 0. **Required.**
* **4**: ``pads_end`` - input 1-D tensor of *T2* type with shape ``[N]`` that is equal to the size of ``data`` input shape. All values must be non-negative. ``pads_end`` specifies the padding for the ending along each axis of ``data`` input. It is required that ``block_shape[i]`` divides ``data_shape[i] + pads_begin[i] + pads_end[i]``. ``pads_end[0]`` is expected to be 0. **Required.**
**Outputs**
* **1**: N-D tensor with shape `[batch * block_shape[0] * block_shape[1] * ... * block_shape[N - 1], (D_1 + pads_begin[1] + pads_end[1]) / block_shape[1], (D_2 + pads_begin[2] + pads_end[2]) / block_shape[2], ..., (D_{N -1} + pads_begin[N - 1] + pads_end[N - 1]) / block_shape[N - 1]` of the same type as `data` input.
* **1**: N-D tensor with shape ``[batch * block_shape[0] * block_shape[1] * ... * block_shape[N - 1], (D_1 + pads_begin[1] + pads_end[1]) / block_shape[1], (D_2 + pads_begin[2] + pads_end[2]) / block_shape[2], ..., (D_{N -1} + pads_begin[N - 1] + pads_end[N - 1]) / block_shape[N - 1]]`` of the same type as ``data`` input.
**Types**
@@ -45,34 +68,36 @@ where
**Example**
```xml
<layer type="SpaceToBatch" ...>
<input>
<port id="0"> <!-- data -->
<dim>2</dim> <!-- batch -->
<dim>6</dim> <!-- spatial dimension 1 -->
<dim>10</dim> <!-- spatial dimension 2 -->
<dim>3</dim> <!-- spatial dimension 3 -->
<dim>3</dim> <!-- spatial dimension 4 -->
</port>
<port id="1"> <!-- block_shape value: [1, 2, 4, 3, 1] -->
<dim>5</dim>
</port>
<port id="2"> <!-- pads_begin value: [0, 0, 1, 0, 0] -->
<dim>5</dim>
</port>
<port id="3"> <!-- pads_end value: [0, 0, 1, 0, 0] -->
<dim>5</dim>
</port>
</input>
<output>
<port id="3">
<dim>48</dim> <!-- data.shape[0] * block_shape.shape[0] * block_shape.shape[1] *... * block_shape.shape[4] -->
<dim>3</dim> <!-- (data.shape[1] + pads_begin[1] + pads_end[1]) / block_shape.shape[1] -->
<dim>3</dim> <!-- (data.shape[2] + pads_begin[2] + pads_end[2]) / block_shape.shape[2] -->
<dim>1</dim> <!-- (data.shape[3] + pads_begin[3] + pads_end[3]) / block_shape.shape[3] -->
<dim>3</dim> <!-- (data.shape[4] + pads_begin[4] + pads_end[4]) / block_shape.shape[4] -->
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer type="SpaceToBatch" ...>
<input>
<port id="0"> < !-- data -->
<dim>2</dim> < !-- batch -->
<dim>6</dim> < !-- spatial dimension 1 -->
<dim>10</dim> < !-- spatial dimension 2 -->
<dim>3</dim> < !-- spatial dimension 3 -->
<dim>3</dim> < !-- spatial dimension 4 -->
</port>
<port id="1"> < !-- block_shape value: [1, 2, 4, 3, 1] -->
<dim>5</dim>
</port>
<port id="2"> < !-- pads_begin value: [0, 0, 1, 0, 0] -->
<dim>5</dim>
</port>
<port id="3"> < !-- pads_end value: [0, 0, 1, 0, 0] -->
<dim>5</dim>
</port>
</input>
<output>
<port id="3">
<dim>48</dim> < !-- data.shape[0] * block_shape.shape[0] * block_shape.shape[1] *... * block_shape.shape[4] -->
<dim>3</dim> < !-- (data.shape[1] + pads_begin[1] + pads_end[1]) / block_shape.shape[1] -->
<dim>3</dim> < !-- (data.shape[2] + pads_begin[2] + pads_end[2]) / block_shape.shape[2] -->
<dim>1</dim> < !-- (data.shape[3] + pads_begin[3] + pads_end[3]) / block_shape.shape[3] -->
<dim>3</dim> < !-- (data.shape[4] + pads_begin[4] + pads_end[4]) / block_shape.shape[4] -->
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,5 +1,7 @@
# SpaceToDepth {#openvino_docs_ops_movement_SpaceToDepth_1}
@sphinxdirective
**Versioned name**: *SpaceToDepth-1*
**Category**: *Data movement*
@@ -9,31 +11,35 @@
**Detailed description**
*SpaceToDepth* operation permutes elements of the input tensor with shape `[N, C, D1, D2, ..., DK]` to the output tensor, where values from the input spatial dimensions `D1, D2, ..., DK` are moved to the new depth dimension.
*SpaceToDepth* operation permutes elements of the input tensor with shape ``[N, C, D1, D2, ..., DK]`` to the output tensor, where values from the input spatial dimensions ``D1, D2, ..., DK`` are moved to the new depth dimension.
The operation is equivalent to the following transformation of the input tensor `data` with `K` spatial dimensions of shape `[N, C, D1, D2, ..., DK]` to *Y* output tensor. If `mode = blocks_first`:
The operation is equivalent to the following transformation of the input tensor ``data`` with ``K`` spatial dimensions of shape ``[N, C, D1, D2, ..., DK]`` to *Y* output tensor. If ``mode = blocks_first``:
x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ... , DK / block_size, block_size])
.. code-block:: cpp
x'' = transpose(x', [0, 3, 5, ..., K + (K + 1), 1, 2, 4, ..., K + K])
x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ... , DK / block_size, block_size])
y = reshape(x'', [N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ... , DK / block_size])
x'' = transpose(x', [0, 3, 5, ..., K + (K + 1), 1, 2, 4, ..., K + K])
If `mode = depth_first`:
y = reshape(x'', [N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ... , DK / block_size])
x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ..., DK / block_size, block_size])
If ``mode = depth_first``:
x'' = transpose(x', [0, 1, 3, 5, ..., K + (K + 1), 2, 4, ..., K + K])
.. code-block:: cpp
y = reshape(x'', [N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ..., DK / block_size])
x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ..., DK / block_size, block_size])
x'' = transpose(x', [0, 1, 3, 5, ..., K + (K + 1), 2, 4, ..., K + K])
y = reshape(x'', [N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ..., DK / block_size])
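
Both modes can be expressed with a compact NumPy sketch (for illustration only; ``space_to_depth`` is a hypothetical helper):

.. code-block:: python

import numpy as np

def space_to_depth(data, block_size, mode="blocks_first"):
    n, c, spatial = data.shape[0], data.shape[1], data.shape[2:]
    k = len(spatial)
    shape = [n, c]
    for d in spatial:                       # split each spatial dimension
        shape += [d // block_size, block_size]
    x = data.reshape(shape)
    grid = list(range(2, 2 + 2 * k, 2))     # reduced spatial axes
    block = list(range(3, 3 + 2 * k, 2))    # block offset axes
    if mode == "blocks_first":
        perm = [0] + block + [1] + grid
    else:                                   # depth_first
        perm = [0, 1] + block + grid
    x = x.transpose(perm)
    out = [n, c * block_size ** k] + [d // block_size for d in spatial]
    return x.reshape(out)

x = np.arange(16).reshape(1, 1, 4, 4)
print(space_to_depth(x, 2).shape)           # (1, 4, 2, 2)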
**Attributes**
* *block_size*
* **Description**: specifies the size of the value block to be moved. The spatial dimensions must be evenly divisible by `block_size`.
* **Description**: specifies the size of the value block to be moved. The spatial dimensions must be evenly divisible by ``block_size``.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Default value**: 1
* **Required**: *no*
@@ -41,18 +47,19 @@ If `mode = depth_first`:
* **Description**: specifies how the output depth dimension is gathered from block coordinates and the old depth dimension.
* **Range of values**:
* *blocks_first*: the output depth is gathered from `[block_size, ..., block_size, C]`
* *depth_first*: the output depth is gathered from `[C, block_size, ..., block_size]`
* **Type**: `string`
* *blocks_first*: the output depth is gathered from ``[block_size, ..., block_size, C]``
* *depth_first*: the output depth is gathered from ``[C, block_size, ..., block_size]``
* **Type**: ``string``
* **Required**: *yes*
**Inputs**
* **1**: `data` - input tensor of type *T* with rank >= 3. **Required.**
* **1**: ``data`` - input tensor of type *T* with rank >= 3. **Required.**
**Outputs**
* **1**: permuted tensor of type *T* and shape `[N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ..., DK / block_size]`.
* **1**: permuted tensor of type *T* and shape ``[N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ..., DK / block_size]``.
**Types**
@@ -60,24 +67,26 @@ If `mode = depth_first`:
**Example**
```xml
<layer type="SpaceToDepth" ...>
<data block_size="2" mode="blocks_first"/>
<input>
<port id="0">
<dim>5</dim>
<dim>7</dim>
<dim>4</dim>
<dim>6</dim>
</port>
</input>
<output>
<port id="1">
<dim>5</dim> <!-- data.shape[0] -->
<dim>28</dim> <!-- data.shape[1] * (block_size ^ 2) -->
<dim>2</dim> <!-- data.shape[2] / block_size -->
<dim>3</dim> <!-- data.shape[3] / block_size -->
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer type="SpaceToDepth" ...>
<data block_size="2" mode="blocks_first"/>
<input>
<port id="0">
<dim>5</dim>
<dim>7</dim>
<dim>4</dim>
<dim>6</dim>
</port>
</input>
<output>
<port id="1">
<dim>5</dim> <!-- data.shape[0] -->
<dim>28</dim> <!-- data.shape[1] * (block_size ^ 2) -->
<dim>2</dim> <!-- data.shape[2] / block_size -->
<dim>3</dim> <!-- data.shape[3] / block_size -->
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,5 +1,7 @@
# Split {#openvino_docs_ops_movement_Split_1}
@sphinxdirective
**Versioned name**: *Split-1*
**Category**: *Data movement*
@@ -8,33 +10,34 @@
**Detailed Description**
*Split* operation splits a given input tensor `data` into chunks of the same length along a scalar `axis`. It produces multiple output tensors based on *num_splits* attribute.
The i-th output tensor shape is equal to the input tensor `data` shape, except for dimension along `axis` which is `data.shape[axis]/num_splits`.
*Split* operation splits a given input tensor ``data`` into chunks of the same length along a scalar ``axis``. It produces multiple output tensors based on the *num_splits* attribute.
The i-th output tensor shape is equal to the input tensor ``data`` shape, except for the dimension along ``axis``, which is ``data.shape[axis]/num_splits``.
\f[
shape\_output\_tensor = [data.shape[0], data.shape[1], \dotsc , data.shape[axis]/num\_splits, \dotsc data.shape[D-1]]
\f]
.. math::
Where D is the rank of input tensor `data`. The axis being split must be evenly divided by *num_splits* attribute.
shape\_output\_tensor = [data.shape[0], data.shape[1], \dotsc , data.shape[axis]/num\_splits, \dotsc data.shape[D-1]]
where ``D`` is the rank of the input tensor ``data``. The dimension along the split ``axis`` must be evenly divisible by the *num_splits* attribute.
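
NumPy's ``split`` follows the same even-divisibility rule and can serve as a quick reference:

.. code-block:: python

import numpy as np

data = np.zeros((6, 12, 10, 24))
outputs = np.split(data, 3, axis=1)        # num_splits = 3, axis = 1
print([o.shape for o in outputs])          # three tensors of shape (6, 4, 10, 24)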
**Attributes**
* *num_splits*
* **Description**: number of outputs into which the input tensor `data` will be split along `axis` dimension. The dimension of `data` shape along `axis` must be evenly divisible by *num_splits*
* **Range of values**: an integer within the range `[1, data.shape[axis]]`
* **Type**: `int`
* **Description**: number of outputs into which the input tensor ``data`` will be split along ``axis`` dimension. The dimension of ``data`` shape along ``axis`` must be evenly divisible by *num_splits*
* **Range of values**: an integer within the range ``[1, data.shape[axis]]``
* **Type**: ``int``
* **Required**: *yes*
**Inputs**
* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axis`. Axis along `data` to split. A scalar of type *T_AXIS* within the range `[-rank(data), rank(data) - 1]`. Negative values address dimensions from the end. **Required.**
* **Note**: The dimension of input tensor `data` shape along `axis` must be evenly divisible by *num_splits* attribute.
* **1**: ``data``. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: ``axis``. Axis along ``data`` to split. A scalar of type *T_AXIS* within the range ``[-rank(data), rank(data) - 1]``. Negative values address dimensions from the end. **Required.**
* **Note**: The dimension of input tensor ``data`` shape along ``axis`` must be evenly divisible by *num_splits* attribute.
**Outputs**
* **Multiple outputs**: Tensors of type *T*. The i-th output has the same shape as `data` input tensor except for dimension along `axis` which is `data.shape[axis]/num_splits`.
* **Multiple outputs**: Tensors of type *T*. The i-th output has the same shape as the ``data`` input tensor, except for the dimension along ``axis``, which is ``data.shape[axis]/num_splits``.
**Types**
@@ -43,38 +46,40 @@ Where D is the rank of input tensor `data`. The axis being split must be evenly
**Example**
```xml
<layer id="1" type="Split" ...>
<data num_splits="3" />
<input>
<port id="0"> <!-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> <!-- axis: 1 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="3">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer id="1" type="Split" ...>
<data num_splits="3" />
<input>
<port id="0"> < !-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> < !-- axis: 1 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="3">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>6</dim>
<dim>4</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
@endsphinxdirective

View File

@@ -1,5 +1,7 @@
# StridedSlice {#openvino_docs_ops_movement_StridedSlice_1}
@sphinxdirective
**Versioned name**: *StridedSlice-1*
**Category**: *Data movement*
@@ -8,153 +10,162 @@
**Attributes**
* *begin_mask*
* **Description**: *begin_mask* is a bit mask. *begin_mask[i]* equal to `1` means that the corresponding dimension of the `begin` input is ignored and the 'real' beginning of the tensor is used along corresponding dimension.
* **Range of values**: a list of `0`s and `1`s
* **Type**: `int[]`
* **Default value**: None
* **Required**: *yes*
* **Description**: *begin_mask* is a bit mask. *begin_mask[i]* equal to ``1`` means that the corresponding dimension of the ``begin`` input is ignored and the 'real' beginning of the tensor is used along the corresponding dimension.
* **Range of values**: a list of ``0`` s and ``1`` s
* **Type**: ``int[]``
* **Default value**: None
* **Required**: *yes*
* *end_mask*
* **Description**: *end_mask* is a bit mask. If *end_mask[i]* is `1`, the corresponding dimension of the `end` input is ignored and the real 'end' of the tensor is used along corresponding dimension.
* **Range of values**: a list of `0`s and `1`s
* **Type**: `int[]`
* **Default value**: None
* **Required**: *yes*
* **Description**: *end_mask* is a bit mask. If *end_mask[i]* is ``1``, the corresponding dimension of the ``end`` input is ignored and the real 'end' of the tensor is used along the corresponding dimension.
* **Range of values**: a list of ``0`` s and ``1`` s
* **Type**: ``int[]``
* **Default value**: None
* **Required**: *yes*
* *new_axis_mask*
* **Description**: *new_axis_mask* is a bit mask. If *new_axis_mask[i]* is `1`, a length 1 dimension is inserted on the `i`-th position of input tensor.
* **Range of values**: a list of `0`s and `1`s
* **Type**: `int[]`
* **Default value**: `[0]`
* **Required**: *no*
* **Description**: *new_axis_mask* is a bit mask. If *new_axis_mask[i]* is ``1``, a length-1 dimension is inserted at the ``i``-th position of the input tensor.
* **Range of values**: a list of ``0`` s and ``1`` s
* **Type**: ``int[]``
* **Default value**: ``[0]``
* **Required**: *no*
* *shrink_axis_mask*
* **Description**: *shrink_axis_mask* is a bit mask. If *shrink_axis_mask[i]* is `1`, the dimension on the `i`-th position is deleted.
* **Range of values**: a list of `0`s and `1`s
* **Type**: `int[]`
* **Default value**: `[0]`
* **Required**: *no*
* **Description**: *shrink_axis_mask* is a bit mask. If *shrink_axis_mask[i]* is ``1``, the dimension at the ``i``-th position is deleted.
* **Range of values**: a list of ``0`` s and ``1`` s
* **Type**: ``int[]``
* **Default value**: ``[0]``
* **Required**: *no*
* *ellipsis_mask*
* **Description**: *ellipsis_mask* is a bit mask. It inserts missing dimensions on a position of a non-zero bit.
* **Range of values**: a list of `0`s and `1`. Only one non-zero bit is allowed.
* **Type**: `int[]`
* **Default value**: `[0]`
* **Required**: *no*
* **Description**: *ellipsis_mask* is a bit mask. It inserts missing dimensions on a position of a non-zero bit.
* **Range of values**: a list of ``0`` s and ``1`` s. Only one non-zero bit is allowed.
* **Type**: ``int[]``
* **Default value**: ``[0]``
* **Required**: *no*
**Inputs**:
* **1**: `data` - input tensor to be sliced of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - input tensor to be sliced of type *T* and arbitrary shape. **Required.**
* **2**: `begin` - 1D tensor of type *T_IND* with begin indexes for input tensor slicing. **Required.**
Out-of-bounds values are silently clamped. If `begin_mask[i]` is `1`, the value of `begin[i]` is ignored and the range of the appropriate dimension starts from `0`. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `begin[0]=-1` means `begin[0]=3`.
* **2**: ``begin`` - 1D tensor of type *T_IND* with begin indexes for input tensor slicing. **Required.**
Out-of-bounds values are silently clamped. If ``begin_mask[i]`` is ``1``, the value of ``begin[i]`` is ignored and the range of the appropriate dimension starts from ``0``. Negative values mean indexing starts from the end. For example, if ``data=[1,2,3]``, ``begin[0]=-1`` means ``begin[0]=3``.
* **3**: `end` - 1D tensor of type *T_IND* with end indexes for input tensor slicing. **Required.**
Out-of-bounds values will be silently clamped. If `end_mask[i]` is `1`, the value of `end[i]` is ignored and the full range of the appropriate dimension is used instead. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `end[0]=-1` means `end[0]=3`.
* **3**: ``end`` - 1D tensor of type *T_IND* with end indexes for input tensor slicing. **Required.**
Out-of-bounds values will be silently clamped. If ``end_mask[i]`` is ``1``, the value of ``end[i]`` is ignored and the full range of the appropriate dimension is used instead. Negative values mean indexing starts from the end. For example, if ``data=[1,2,3]``, ``end[0]=-1`` means ``end[0]=3``.
* **4**: `stride` - 1D tensor of type *T_IND* with strides. **Optional.**
* **4**: ``stride`` - 1D tensor of type *T_IND* with strides. **Optional.**
**Types**
* *T*: any supported type.
* *T_IND*: any supported integer type.
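
The interplay of the masks can be sketched in NumPy (a simplified reference that ignores ``ellipsis_mask``; ``strided_slice`` is a hypothetical helper):

.. code-block:: python

import numpy as np

def strided_slice(data, begin, end, stride,
                  begin_mask, end_mask, new_axis_mask, shrink_axis_mask):
    slices = []
    for i in range(len(begin)):
        if new_axis_mask[i]:
            slices.append(np.newaxis)       # insert a length-1 dimension
        elif shrink_axis_mask[i]:
            slices.append(begin[i])         # a plain index deletes the dimension
        else:
            b = None if begin_mask[i] else begin[i]
            e = None if end_mask[i] else end[i]
            slices.append(slice(b, e, stride[i]))
    return data[tuple(slices)]

x = np.zeros((2, 3, 4))
y = strided_slice(x, [1, 0, 0], [0, 0, 2], [1, 1, 1],
                  [0, 1, 1], [1, 1, 0], [0, 0, 0], [0, 0, 0])
print(y.shape)                              # (1, 3, 2), as in the first example below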
**Example**
Example of `begin_mask` & `end_mask` usage.
```xml
<layer ... type="StridedSlice" ...>
<data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="1,1,0" new_axis_mask="0,0,0" shrink_axis_mask="0,0,0"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim> <!-- begin: [1, 0, 0] -->
</port>
<port id="2">
<dim>2</dim> <!-- end: [0, 0, 2] -->
</port>
<port id="3">
<dim>2</dim> <!-- stride: [1, 1, 1] -->
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
```
Example of ``begin_mask`` & ``end_mask`` usage.
Example of `new_axis_mask` usage.
```xml
<layer ... type="StridedSlice" ...>
<data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="0,1,1" new_axis_mask="1,0,0" shrink_axis_mask="0,0,0"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim>
</port>
<port id="2">
<dim>2</dim>
</port>
<port id="3">
<dim>2</dim>
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="StridedSlice" ...>
<data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="1,1,0" new_axis_mask="0,0,0" shrink_axis_mask="0,0,0"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim> < !-- begin: [1, 0, 0] -->
</port>
<port id="2">
<dim>2</dim> < !-- end: [0, 0, 2] -->
</port>
<port id="3">
<dim>2</dim> < !-- stride: [1, 1, 1] -->
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
Example of ``new_axis_mask`` usage.
.. code-block:: cpp
<layer ... type="StridedSlice" ...>
<data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="0,1,1" new_axis_mask="1,0,0" shrink_axis_mask="0,0,0"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim>
</port>
<port id="2">
<dim>2</dim>
</port>
<port id="3">
<dim>2</dim>
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
</output>
</layer>
Example of ``shrink_axis_mask`` usage.
.. code-block:: cpp
<layer ... type="StridedSlice" ...>
<data begin_mask="1,0,1,1,1" ellipsis_mask="0,0,0,0,0" end_mask="1,0,1,1,1" new_axis_mask="0,0,0,0,0" shrink_axis_mask="0,1,0,0,0"/>
<input>
<port id="0">
<dim>1</dim>
<dim>2</dim>
<dim>384</dim>
<dim>640</dim>
<dim>8</dim>
</port>
<port id="1">
<dim>5</dim>
</port>
<port id="2">
<dim>5</dim>
</port>
<port id="3">
<dim>5</dim>
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>384</dim>
<dim>640</dim>
<dim>8</dim>
</port>
</output>
</layer>
@endsphinxdirective
Example of `shrink_axis_mask` usage.
```xml
<layer ... type="StridedSlice" ...>
<data begin_mask="1,0,1,1,1" ellipsis_mask="0,0,0,0,0" end_mask="1,0,1,1,1" new_axis_mask="0,0,0,0,0" shrink_axis_mask="0,1,0,0,0"/>
<input>
<port id="0">
<dim>1</dim>
<dim>2</dim>
<dim>384</dim>
<dim>640</dim>
<dim>8</dim>
</port>
<port id="1">
<dim>5</dim>
</port>
<port id="2">
<dim>5</dim>
</port>
<port id="3">
<dim>5</dim>
</port>
</input>
<output>
<port id="4">
<dim>1</dim>
<dim>384</dim>
<dim>640</dim>
<dim>8</dim>
</port>
</output>
</layer>
```
@ -1,10 +1,13 @@
# Tile {#openvino_docs_ops_movement_Tile_1}
@sphinxdirective
**Versioned name**: *Tile-1*
**Category**: *Data movement*
**Short description**: *Tile* operation repeats an input tensor *"data"* along each dimension the number of times given by the *"repeats"* input tensor.
* If the number of elements in *"repeats"* is greater than the rank of *"data"*, *"data"* is promoted to the rank of *"repeats"* by prepending new axes. For example, if the shape of *"data"* is (2, 3) and *"repeats"* is [2, 2, 2], the shape of *"data"* is promoted to (1, 2, 3) and the result shape is (2, 4, 6).
* If the number of elements in *"repeats"* is less than the rank of *"data"*, *"repeats"* is promoted to the rank of *"data"* by prepending 1's. For example, if the shape of *"data"* is (4, 2, 3) and *"repeats"* is [2, 2], *"repeats"* is promoted to [1, 2, 2] and the result shape is (4, 4, 6). See the numpy sketch below.
@ -30,81 +33,90 @@ No attributes available.
*Tile* operation extends the input tensor and fills in the output tensor according to the following rules:
\f[out_i=input_i[inner_dim*t]\f] \f[ t \in \left ( 0, \quad tiles \right ) \f]
.. math::
out_i=input_i[inner_dim*t]
.. math::
t \in \left ( 0, \quad tiles \right )
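
The promotion rules above match the behavior of ``numpy.tile``, which can serve as a quick reference (an analogy, not the OpenVINO API):

.. code-block:: python

   import numpy as np

   # "repeats" longer than the rank of "data": (2, 3) is promoted to (1, 2, 3)
   print(np.tile(np.zeros((2, 3)), (2, 2, 2)).shape)   # (2, 4, 6)

   # "repeats" shorter than the rank of "data": [2, 2] is promoted to [1, 2, 2]
   print(np.tile(np.zeros((4, 2, 3)), (2, 2)).shape)   # (4, 4, 6)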
**Examples**
*Example 1: number elements in "repeats" is equal to shape of data*
```xml
<layer ... type="Tile">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> <!-- [1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Tile">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> < !-- [1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
*Example 2: number of elements in "repeats" is more than shape of "data"*
```xml
<layer ... type="Tile">
<input>
<port id="0"> <!-- will be promoted to shape (1, 2, 3, 4) -->
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>4</dim> <!-- [5, 1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>5</dim>
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Tile">
<input>
<port id="0"> < !-- will be promoted to shape (1, 2, 3, 4) -->
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>4</dim> < !-- [5, 1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>5</dim>
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
*Example 3: number of elements in "repeats" is less than shape of "data"*
```xml
<layer ... type="Tile">
<input>
<port id="0">
<dim>5</dim>
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> <!-- [1, 2, 3] will be promoted to [1, 1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>5</dim>
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Tile">
<input>
<port id="0">
<dim>5</dim>
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> < !-- [1, 2, 3] will be promoted to [1, 1, 2, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>5</dim>
<dim>2</dim>
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# Transpose {#openvino_docs_ops_movement_Transpose_1}
@sphinxdirective
**Versioned name**: *Transpose-1*
**Category**: *Data movement*
@ -7,15 +9,18 @@
**Short description**: *Transpose* operation reorders the input tensor dimensions.
**Detailed description**: *Transpose* operation reorders the input tensor dimensions. Source indexes and destination indexes are bound by the formula:
\f[output[i(order[0]), i(order[1]), ..., i(order[N-1])] = input[i(0), i(1), ..., i(N-1)]\\ \quad \textrm{where} \quad i(j) \quad\textrm{is in the range} \quad [0, (input.shape[j]-1)]\f]
.. math::
output[i(order[0]), i(order[1]), ..., i(order[N-1])] = input[i(0), i(1), ..., i(N-1)]\\ \quad \textrm{where} \quad i(j) \quad\textrm{is in the range} \quad [0, (input.shape[j]-1)]
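
In numpy terms the reordering corresponds to ``numpy.transpose`` (an illustration; the empty ``input_order`` case maps to numpy's default axis inversion):

.. code-block:: python

   import numpy as np

   x = np.zeros((2, 3, 4))
   print(np.transpose(x, (2, 0, 1)).shape)   # (4, 2, 3) - explicit order
   print(np.transpose(x).shape)              # (4, 3, 2) - axes inverted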
**Attributes**: *Transpose* operation has no attributes.
**Inputs**:
* **1**: `arg` - the tensor to be transposed. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `input_order` - the permutation to apply to the axes of the first input shape. A 1D tensor of `n` elements *T_AXIS* type and shape `[n]`, where `n` is the rank of the first input or `0`. The tensor's value must contain every integer in the range `[0, n-1]`, but if an empty tensor is specified (shape `[0]`), then the axes will be inverted. **Required.**
* **1**: ``arg`` - the tensor to be transposed. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: ``input_order`` - the permutation to apply to the axes of the first input shape. A 1D tensor of ``n`` elements of type *T_AXIS* and shape ``[n]``, where ``n`` is the rank of the first input or ``0``. The tensor's value must contain every integer in the range ``[0, n-1]``, but if an empty tensor is specified (shape ``[0]``), then the axes will be inverted. **Required.**
**Outputs**:
@ -31,48 +36,51 @@
*Example 1*
```xml
<layer ... type="Transpose">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> <!-- [2, 0, 1] -->
</port>
</input>
<output>
<port id="2">
<dim>4</dim>
<dim>2</dim>
<dim>3</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Transpose">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>3</dim> < !-- [2, 0, 1] -->
</port>
</input>
<output>
<port id="2">
<dim>4</dim>
<dim>2</dim>
<dim>3</dim>
</port>
</output>
</layer>
*Example 2: input_order = empty 1D tensor of Shape[0]*
```xml
<layer ... type="Transpose">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>0</dim> <!-- input_order is an empty 1D tensor -->
</port>
</input>
<output>
<port id="2">
<dim>4</dim>
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Transpose">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>0</dim> < !-- input_order is an empty 1D tensor -->
</port>
</input>
<output>
<port id="2">
<dim>4</dim>
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# Unique {#openvino_docs_ops_movement_Unique_10}
@sphinxdirective
**Versioned name**: *Unique-10*
**Category**: *Data movement*
@ -15,6 +17,7 @@ The operator can either work in elementwise mode searching for unique values in
* **Description**: controls whether the unique elements in the output tensor are sorted in ascending order.
* **Range of values**:
* false - output tensor's elements are not sorted
* true - output tensor's elements are sorted
* **Type**: boolean
@ -41,105 +44,112 @@ The operator can either work in elementwise mode searching for unique values in
* **1**: A tensor of type *T* and arbitrary shape. **Required.**
* **2**: A tensor of type *T_AXIS*. The allowed tensor shape is 1D with a single element or a scalar. If provided, this input has to be connected to a Constant. **Optional**
When provided this input is used to "divide" the input tensor into slices along the specified axis before unique elements processing starts. When this input is not provided the operator works on a flattened version of the input tensor (elementwise processing). The range of allowed values is `[-r; r-1]` where `r` is the rank of the input tensor.
When provided, this input is used to "divide" the input tensor into slices along the specified axis before unique element processing starts. When this input is not provided, the operator works on a flattened version of the input tensor (elementwise processing). The range of allowed values is ``[-r; r-1]``, where ``r`` is the rank of the input tensor.
**Outputs**
* **1**: The output tensor containing unique elements (individual values or subtensors). This tensor's type matches the type of the first input tensor: *T*. The values in this tensor are either sorted ascendingly or maintain the same order as in the input tensor. The shape of this output depends on the values of the input tensor and will very often be dynamic. Please refer to the article describing how [Dynamic Shapes](https://docs.openvino.ai/latest/openvino_docs_OV_UG_DynamicShapes.html) are handled in OpenVINO.
* **2**: The output tensor containing indices of the locations of unique elements. The indices map the elements in the first output tensor to their locations in the input tensor. The index always points to the first occurrence of a given unique output element in the input tensor. This is a 1D tensor with type controlled by the `index_element_type` attribute.
* **3**: The output tensor containing indices of the locations of elements of the input tensor in the first output tensor. This means that for each element of the input tensor this output will point to the unique value in the first output tensor of this operator. This is a 1D tensor with type controlled by the `index_element_type` attribute.
* **4**: The output tensor containing the number of occurrences of each unique value produced by this operator in the first output tensor. This is a 1D tensor with type controlled by the `count_element_type` attribute.
* **1**: The output tensor containing unique elements (individual values or subtensors). This tensor's type matches the type of the first input tensor: *T*. The values in this tensor are either sorted in ascending order or maintain the same order as in the input tensor. The shape of this output depends on the values of the input tensor and will very often be dynamic. Please refer to the article describing how :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>` are handled in OpenVINO.
* **2**: The output tensor containing indices of the locations of unique elements. The indices map the elements in the first output tensor to their locations in the input tensor. The index always points to the first occurrence of a given unique output element in the input tensor. This is a 1D tensor with type controlled by the ``index_element_type`` attribute.
* **3**: The output tensor containing indices of the locations of elements of the input tensor in the first output tensor. This means that for each element of the input tensor this output will point to the unique value in the first output tensor of this operator. This is a 1D tensor with type controlled by the ``index_element_type`` attribute.
* **4**: The output tensor containing the number of occurrences of each unique value produced by this operator in the first output tensor. This is a 1D tensor with type controlled by the ``count_element_type`` attribute.
**Types**
* *T*: any supported data type.
* *T_AXIS*: `int64` or `int32`.
* *T_AXIS*: ``int64`` or ``int32``.
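
For the elementwise mode with ``sorted`` set to *true*, the four outputs correspond to what ``numpy.unique`` returns with all optional flags enabled (an analogy, not the OpenVINO API):

.. code-block:: python

   import numpy as np

   x = np.array([5, 3, 5, 1, 3])
   values, first_idx, inverse, counts = np.unique(
       x, return_index=True, return_inverse=True, return_counts=True)

   print(values)     # [1 3 5]       - unique elements (output 1)
   print(first_idx)  # [3 1 0]       - first occurrence in the input (output 2)
   print(inverse)    # [2 1 2 0 1]   - input mapped into output 1 (output 3)
   print(counts)     # [1 2 2]       - occurrence counts (output 4)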
**Examples**
*Example 1: axis input connected to a constant containing a 'zero'*
```xml
<layer ... type="Unique" ... >
<data sorted="false" index_element_type="i32"/>
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<input>
<port id="1" precision="I64">
<dim>1</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>-1</dim>
<dim>3</dim>
</port>
<port id="3" precision="I32">
<dim>-1</dim>
</port>
<port id="4" precision="I32">
<dim>3</dim>
</port>
<port id="5" precision="I64">
<dim>-1</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Unique" ... >
<data sorted="false" index_element_type="i32"/>
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<input>
<port id="1" precision="I64">
<dim>1</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>-1</dim>
<dim>3</dim>
</port>
<port id="3" precision="I32">
<dim>-1</dim>
</port>
<port id="4" precision="I32">
<dim>3</dim>
</port>
<port id="5" precision="I64">
<dim>-1</dim>
</port>
</output>
</layer>
*Example 2: no axis provided*
```xml
<layer ... type="Unique" ... >
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="1" precision="FP32">
<dim>-1</dim>
</port>
<port id="2" precision="I64">
<dim>-1</dim>
</port>
<port id="3" precision="I64">
<dim>9</dim>
</port>
<port id="4" precision="I64">
<dim>-1</dim>
</port>
</output>
</layer>
```
*Example 3: no axis provided, non-default outputs precision *
```xml
<layer ... type="Unique" ... >
<data sorted="false" index_element_type="i32" count_element_type="i32"/>
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="1" precision="FP32">
<dim>-1</dim>
</port>
<port id="2" precision="I32">
<dim>-1</dim>
</port>
<port id="3" precision="I32">
<dim>9</dim>
</port>
<port id="4" precision="I32">
<dim>-1</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Unique" ... >
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="1" precision="FP32">
<dim>-1</dim>
</port>
<port id="2" precision="I64">
<dim>-1</dim>
</port>
<port id="3" precision="I64">
<dim>9</dim>
</port>
<port id="4" precision="I64">
<dim>-1</dim>
</port>
</output>
</layer>
*Example 3: no axis provided, non-default output precision*
.. code-block:: cpp
<layer ... type="Unique" ... >
<data sorted="false" index_element_type="i32" count_element_type="i32"/>
<input>
<port id="0" precision="FP32">
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="1" precision="FP32">
<dim>-1</dim>
</port>
<port id="2" precision="I32">
<dim>-1</dim>
</port>
<port id="3" precision="I32">
<dim>9</dim>
</port>
<port id="4" precision="I32">
<dim>-1</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,36 +1,35 @@
# VariadicSplit {#openvino_docs_ops_movement_VariadicSplit_1}
@sphinxdirective
**Versioned name**: *VariadicSplit-1*
**Category**: *Data movement*
**Short description**: *VariadicSplit* operation splits an input tensor into chunks along some axis. The chunks may have variadic lengths depending on `split_lengths` input tensor.
**Short description**: *VariadicSplit* operation splits an input tensor into chunks along some axis. The chunks may have variadic lengths depending on ``split_lengths`` input tensor.
**Detailed Description**
*VariadicSplit* operation splits a given input tensor `data` into chunks along a scalar or tensor with shape `[1]` `axis`. It produces multiple output tensors based on additional input tensor `split_lengths`.
The i-th output tensor shape is equal to the input tensor `data` shape, except for dimension along `axis` which is `split_lengths[i]`.
*VariadicSplit* operation splits a given input tensor ``data`` into chunks along an ``axis`` specified by a scalar or a tensor of shape ``[1]``. It produces multiple output tensors based on an additional input tensor ``split_lengths``.
The i-th output tensor shape is equal to the input tensor ``data`` shape, except for the dimension along ``axis``, which is ``split_lengths[i]``.
\f[
shape\_output\_tensor = [data.shape[0], data.shape[1], \dotsc , split\_lengths[i], \dotsc , data.shape[D-1]]
\f]
.. math::
shape\_output\_tensor = [data.shape[0], data.shape[1], \dotsc , split\_lengths[i], \dotsc , data.shape[D-1]]
Where D is the rank of input tensor `data`. The sum of elements in `split_lengths` must match `data.shape[axis]`.
Where ``D`` is the rank of the input tensor ``data``. The sum of elements in ``split_lengths`` must match ``data.shape[axis]``.
**Attributes**: *VariadicSplit* operation has no attributes.
**Inputs**
* **1**: `data`. A tensor of type `T1` and arbitrary shape. **Required.**
* **2**: `axis`. Axis along `data` to split. A scalar or tensor with shape `[1]` of type `T2` with value from range `-rank(data) .. rank(data)-1`. Negative values address dimensions from the end.
**Required.**
* **3**: `split_lengths`. A list containing the dimension values of each output tensor shape along the split `axis`. A 1D tensor of type `T2`. The number of elements in `split_lengths` determines the number of outputs. The sum of elements in `split_lengths` must match `data.shape[axis]`. In addition `split_lengths` can contain a single `-1` element, which means, all remaining items along specified `axis` that are not consumed by other parts. **Required.**
* **1**: ``data``. A tensor of type ``T1`` and arbitrary shape. **Required.**
* **2**: ``axis``. Axis along ``data`` to split. A scalar or tensor with shape ``[1]`` of type ``T2`` with value from range ``-rank(data) .. rank(data)-1``. Negative values address dimensions from the end. **Required.**
* **3**: ``split_lengths``. A list containing the dimension values of each output tensor shape along the split ``axis``. A 1D tensor of type ``T2``. The number of elements in ``split_lengths`` determines the number of outputs. The sum of elements in ``split_lengths`` must match ``data.shape[axis]``. In addition ``split_lengths`` can contain a single ``-1`` element, which means, all remaining items along specified ``axis`` that are not consumed by other parts. **Required.**
**Outputs**
* **Multiple outputs**: Tensors of type `T1`. The i-th output has the same shape as `data` input tensor except for dimension along `axis` which is `split_lengths[i]` if `split_lengths[i] != -1`. Otherwise, the dimension along `axis` is processed as described in `split_lengths` input description.
* **Multiple outputs**: Tensors of type ``T1``. The i-th output has the same shape as the ``data`` input tensor, except for the dimension along ``axis``, which is ``split_lengths[i]`` if ``split_lengths[i] != -1``. Otherwise, the dimension along ``axis`` is processed as described in the ``split_lengths`` input description.
**Types**
@ -39,72 +38,77 @@ Where D is the rank of input tensor `data`. The sum of elements in `split_length
**Examples**
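
A numpy sketch of the first XML example below; ``numpy.split`` takes cut points rather than chunk lengths, hence the cumulative sum (an illustration only, not the OpenVINO API):

.. code-block:: python

   import numpy as np

   data = np.zeros((6, 12, 10, 24))
   split_lengths = [1, 2, 3]               # must sum to data.shape[axis]

   cuts = np.cumsum(split_lengths)[:-1]    # [1, 3]
   chunks = np.split(data, cuts, axis=0)
   print([c.shape for c in chunks])
   # [(1, 12, 10, 24), (2, 12, 10, 24), (3, 12, 10, 24)]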
```xml
<layer id="1" type="VariadicSplit" ...>
<input>
<port id="0"> <!-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> <!-- axis: 0 -->
</port>
<port id="2">
<dim>3</dim> <!-- split_lengths: [1, 2, 3] -->
</port>
</input>
<output>
<port id="3">
<dim>1</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>2</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="5">
<dim>3</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer id="1" type="VariadicSplit" ...>
<input>
<port id="0"> < !-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> < !-- axis: 0 -->
</port>
<port id="2">
<dim>3</dim> < !-- split_lengths: [1, 2, 3] -->
</port>
</input>
<output>
<port id="3">
<dim>1</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>2</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="5">
<dim>3</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
.. code-block:: cpp
<layer id="1" type="VariadicSplit" ...>
<input>
<port id="0"> < !-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> < !-- axis: 0 -->
</port>
<port id="2">
<dim>2</dim> < !-- split_lengths: [-1, 2] -->
</port>
</input>
<output>
<port id="3">
<dim>4</dim> < !-- 4 = 6 - 2 -->
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>2</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
@endsphinxdirective
```xml
<layer id="1" type="VariadicSplit" ...>
<input>
<port id="0"> <!-- some data -->
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1"> <!-- axis: 0 -->
</port>
<port id="2">
<dim>2</dim> <!-- split_lengths: [-1, 2] -->
</port>
</input>
<output>
<port id="3">
<dim>4</dim> <!-- 4 = 6 - 2 -->
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="4">
<dim>2</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
@ -1,5 +1,7 @@
# ShapeOf {#openvino_docs_ops_shape_ShapeOf_1}
@sphinxdirective
**Versioned name**: *ShapeOf-1*
**Category**: *Shape manipulation*
@ -18,20 +20,23 @@
**Example**
```xml
<layer ... type="ShapeOf">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
</input>
<output>
<port id="1"> <!-- output value is: [2,3,224,224]-->
<dim>4</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ShapeOf">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
</input>
<output>
<port id="1"> < !-- output value is: [2,3,224,224]-->
<dim>4</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# ShapeOf {#openvino_docs_ops_shape_ShapeOf_3}
@sphinxdirective
**Versioned name**: *ShapeOf-3*
**Category**: *Shape manipulation*
@ -28,25 +30,28 @@
* *T*: any numeric type.
* *T_IND*: `int64` or `int32`.
* *T_IND*: ``int64`` or ``int32``.
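
Conceptually, the operation returns the input shape as a 1D tensor of the requested integer type (an illustration only):

.. code-block:: python

   import numpy as np

   x = np.zeros((2, 3, 224, 224))
   print(np.array(x.shape, dtype=np.int64))   # [  2   3 224 224]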
**Example**
```xml
<layer ... type="ShapeOf">
<data output_type="i64"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
</input>
<output>
<port id="1"> <!-- output value is: [2,3,224,224]-->
<dim>4</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="ShapeOf">
<data output_type="i64"/>
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
</input>
<output>
<port id="1"> < !-- output value is: [2,3,224,224]-->
<dim>4</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# Squeeze {#openvino_docs_ops_shape_Squeeze_1}
@sphinxdirective
**Versioned name**: *Squeeze-1*
**Category**: *Shape manipulation*
@ -7,6 +9,7 @@
**Short description**: *Squeeze* removes dimensions equal to 1 from the first input tensor.
**Detailed description**: *Squeeze* can be used with or without the second input tensor.
* If only the first input is provided, every dimension that is equal to 1 will be removed from it.
* With the second input provided, each value is an index of a dimension from the first tensor that is to be removed. Specified dimension has to be equal to 1, otherwise an error will be raised. Dimension indices can be specified directly, or by negative indices (counting dimensions from the end).
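
Both variants behave like ``numpy.squeeze``, where the ``axis`` tuple plays the role of the second input (an analogy, not the OpenVINO API):

.. code-block:: python

   import numpy as np

   x = np.zeros((1, 3, 1, 2))
   print(np.squeeze(x, axis=(0, 2)).shape)   # (3, 2) - explicit indices
   print(np.squeeze(x).shape)                # (3, 2) - all 1-dims removed
   # np.squeeze(x, axis=1) raises an error: dimension 1 is not equal to 1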
@ -16,7 +19,7 @@
* **1**: Multidimensional input tensor of type *T*. **Required.**
* **2**: Scalar or 1D tensor of type *T_INT* with indices of dimensions to squeeze. Values could be negative (have to be from range `[-R, R-1]`, where `R` is the rank of the first input). **Optional.**
* **2**: Scalar or 1D tensor of type *T_INT* with indices of dimensions to squeeze. Values could be negative (have to be from range ``[-R, R-1]``, where ``R`` is the rank of the first input). **Optional.**
**Outputs**:
@ -31,46 +34,50 @@
**Example**
*Example 1: squeeze 4D tensor to a 2D tensor*
```xml
<layer ... type="Squeeze">
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>1</dim>
<dim>2</dim>
</port>
</input>
<input>
<port id="1">
<dim>2</dim> <!-- value [0, 2] -->
</port>
</input>
<output>
<port id="2">
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Squeeze">
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>1</dim>
<dim>2</dim>
</port>
</input>
<input>
<port id="1">
<dim>2</dim> < !-- value [0, 2] -->
</port>
</input>
<output>
<port id="2">
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>
*Example 2: squeeze 1D tensor with 1 element to a 0D tensor (constant)*
```xml
<layer ... type="Squeeze">
<input>
<port id="0">
<dim>1</dim>
</port>
</input>
<input>
<port id="1">
<dim>1</dim> <!-- value is [0] -->
</port>
</input>
<output>
<port id="2">
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Squeeze">
<input>
<port id="0">
<dim>1</dim>
</port>
</input>
<input>
<port id="1">
<dim>1</dim> < !-- value is [0] -->
</port>
</input>
<output>
<port id="2">
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# Unsqueeze {#openvino_docs_ops_shape_Unsqueeze_1}
@sphinxdirective
**Versioned name**: *Unsqueeze-1*
**Category**: *Shape manipulation*
@ -12,7 +14,7 @@
* **1**: Tensor of type *T* and arbitrary shape. **Required.**
* **2**: Scalar or 1D tensor of type *T_INT* with indices of dimensions to unsqueeze. Values could be negative (have to be from range `[-R, R-1]`, where `R` is the rank of the output). **Required.**
* **2**: Scalar or 1D tensor of type *T_INT* with indices of dimensions to unsqueeze. Values could be negative (have to be from range ``[-R, R-1]``, where ``R`` is the rank of the output). **Required.**
**Outputs**:
@ -27,46 +29,52 @@
**Example**
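
In numpy terms the operation corresponds to ``numpy.expand_dims`` (an analogy matching Example 1 below):

.. code-block:: python

   import numpy as np

   x = np.zeros((2, 3))
   print(np.expand_dims(x, axis=(0, 3)).shape)   # (1, 2, 3, 1)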
*Example 1: unsqueeze 2D tensor to a 4D tensor*
```xml
<layer ... type="Unsqueeze">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
</port>
</input>
<input>
<port id="1">
<dim>2</dim> <!-- value is [0, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
<dim>2</dim>
<dim>3</dim>
<dim>1</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Unsqueeze">
<input>
<port id="0">
<dim>2</dim>
<dim>3</dim>
</port>
</input>
<input>
<port id="1">
<dim>2</dim> < !-- value is [0, 3] -->
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
<dim>2</dim>
<dim>3</dim>
<dim>1</dim>
</port>
</output>
</layer>
*Example 2: unsqueeze 0D tensor (constant) to 1D tensor*
```xml
<layer ... type="Unsqueeze">
<input>
<port id="0">
</port>
</input>
<input>
<port id="1">
<dim>1</dim> <!-- value is [0] -->
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="Unsqueeze">
<input>
<port id="0">
</port>
</input>
<input>
<port id="1">
<dim>1</dim> < !-- value is [0] -->
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,5 +1,7 @@
# TopK {#openvino_docs_ops_sort_TopK_1}
@sphinxdirective
**Versioned name**: *TopK-1*
**Category**: *Sorting and maximization*
@ -12,21 +14,21 @@
* **Description**: Specifies the axis along which the values are retrieved.
* **Range of values**: An integer. Negative value means counting dimension from the end.
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *mode*
* **Description**: Specifies which operation is used to select the biggest element of two.
* **Range of values**: `min`, `max`
* **Type**: `string`
* **Range of values**: ``min``, ``max``
* **Type**: ``string``
* **Required**: *yes*
* *sort*
* **Description**: Specifies order of output elements and/or indices.
* **Range of values**: `value`, `index`, `none`
* **Type**: `string`
* **Range of values**: ``value``, ``index``, ``none``
* **Type**: ``string``
* **Required**: *yes*
* *index_element_type*
@ -45,50 +47,56 @@
**Outputs**:
* **1**: Output tensor with top *k* values from the input tensor along specified dimension *axis*. The shape of the tensor is `[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]`.
* **1**: Output tensor with top *k* values from the input tensor along specified dimension *axis*. The shape of the tensor is ``[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]``.
* **2**: Output tensor with top *k* indices for each slice along *axis* dimension. It is 1D tensor of shape `[k]`. The shape of the tensor is the same as for the 1st output, that is `[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]`
* **2**: Output tensor with top *k* indices for each slice along *axis* dimension. The shape of the tensor is the same as for the 1st output, that is ``[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]``
**Detailed Description**
The output tensor is populated by values computed in the following way:
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
.. code-block:: cpp
So for each slice `input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]` which represents 1D array, top_k value is computed individually.
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
Sorting and minimum/maximum are controlled by `sort` and `mode` attributes:
* *mode*=`max`, *sort*=`value` - descending by value
* *mode*=`max`, *sort*=`index` - ascending by index
* *mode*=`max`, *sort*=`none` - undefined
* *mode*=`min`, *sort*=`value` - ascending by value
* *mode*=`min`, *sort*=`index` - ascending by index
* *mode*=`min`, *sort*=`none` - undefined
So for each slice ``input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]``, which represents a 1D array, the top_k value is computed individually.
Sorting and minimum/maximum are controlled by ``sort`` and ``mode`` attributes:
* *mode* = ``max``, *sort* = ``value`` - descending by value
* *mode* = ``max``, *sort* = ``index`` - ascending by index
* *mode* = ``max``, *sort* = ``none`` - undefined
* *mode* = ``min``, *sort* = ``value`` - ascending by value
* *mode* = ``min``, *sort* = ``index`` - ascending by index
* *mode* = ``min``, *sort* = ``none`` - undefined
If there are several elements with the same value then their output order is not determined.
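
A minimal numpy sketch of the per-slice computation for a single 1D slice (illustrative only; as noted above, the handling of ties is not determined):

.. code-block:: python

   import numpy as np

   def top_k(x, k, mode="max", sort="value"):
       order = np.argsort(x)               # ascending by value
       idx = order[-k:][::-1] if mode == "max" else order[:k]
       if sort == "index":
           idx = np.sort(idx)              # ascending by index
       return x[idx], idx

   values, indices = top_k(np.array([1, 9, 3, 7]), k=2)
   print(values, indices)                  # [9 7] [1 3]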
**Example**
```xml
<layer ... type="TopK" ... >
<data axis="1" mode="max" sort="value"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<!-- k = 3 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="TopK" ... >
<data axis="1" mode="max" sort="value"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<!-- k = 3 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
@endsphinxdirective
@ -1,8 +1,10 @@
# TopK {#openvino_docs_ops_sort_TopK_11}
@sphinxdirective
**Versioned name**: *TopK-11*
**Category**: *Sorting and maximization*
**Category**: *Sorting and maximization*
**Short description**: *TopK* computes indices and values of the *k* maximum/minimum values for each slice along a specified axis.
@ -12,29 +14,29 @@
* **Description**: Specifies the axis along which the values are retrieved.
* **Range of values**: An integer. A negative value means counting dimensions from the back.
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *mode*
* **Description**: Specifies whether *TopK* selects the largest or the smallest elements from each slice.
* **Range of values**: "min", "max"
* **Type**: `string`
* **Type**: ``string``
* **Required**: *yes*
* *sort*
* **Description**: Specifies the order of corresponding elements of the output tensor.
* **Range of values**: `value`, `index`, `none`
* **Type**: `string`
* **Range of values**: ``value``, ``index``, ``none``
* **Type**: ``string``
* **Required**: *yes*
* *stable*
* **Description**: Specifies whether the equivalent elements should maintain their relative order from the input tensor. Takes effect only if the `sort` attribute is set to `value` or `index`.
* **Range of values**: `true` or `false`
* **Type**: `boolean`
* **Default value**: `false`
* **Description**: Specifies whether the equivalent elements should maintain their relative order from the input tensor. Takes effect only if the ``sort`` attribute is set to ``value`` or ``index``.
* **Range of values**: *true* or *false*
* **Type**: ``boolean``
* **Default value**: *false*
* **Required**: *no*
* *index_element_type*
@ -50,98 +52,112 @@
* **1**: tensor with arbitrary rank and type *T*. **Required.**
* **2**: The value of *K* - a scalar of any integer type that specifies how many elements from the input tensor should be selected. The accepted range of values of *K* is `<1;input1.shape[axis]>`. The behavior of this operator is undefined if the value of *K* does not meet those requirements. **Required.**
* **2**: The value of *K* - a scalar of any integer type that specifies how many elements from the input tensor should be selected. The accepted range of values of *K* is ``<1;input1.shape[axis]>``. The behavior of this operator is undefined if the value of *K* does not meet those requirements. **Required.**
**Outputs**:
* **1**: Output tensor of type *T* with *k* values from the input tensor along a specified *axis*. The shape of the tensor is `[input1.shape[0], ..., input1.shape[axis-1], 1..k, input1.shape[axis+1], ..., input1.shape[input1.rank - 1]]`.
* **1**: Output tensor of type *T* with *k* values from the input tensor along a specified *axis*. The shape of the tensor is ``[input1.shape[0], ..., input1.shape[axis-1], 1..k, input1.shape[axis+1], ..., input1.shape[input1.rank - 1]]``.
* **2**: Output tensor containing indices of the corresponding elements(values) from the first output tensor. The indices point to the location of selected values in the original input tensor. The shape of this output tensor is the same as the shape of the first output, that is `[input1.shape[0], ..., input1.shape[axis-1], 1..k, input1.shape[axis+1], ..., input1.shape[input1.rank - 1]]`. The type of this tensor *T_IND* is controlled by the `index_element_type` attribute.
* **2**: Output tensor containing indices of the corresponding elements(values) from the first output tensor. The indices point to the location of selected values in the original input tensor. The shape of this output tensor is the same as the shape of the first output, that is ``[input1.shape[0], ..., input1.shape[axis-1], 1..k, input1.shape[axis+1], ..., input1.shape[input1.rank - 1]]``. The type of this tensor *T_IND* is controlled by the ``index_element_type`` attribute.
**Types**
* *T*: any numeric type.
* *T_IND*: `int64` or `int32`.
* *T_IND*: ``int64`` or ``int32``.
**Detailed Description**
The output tensor is populated by values computed in the following way:
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
.. code-block:: cpp
meaning that for each slice `input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]` the *TopK* values are computed individually.
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
Sorting and minimum/maximum are controlled by `sort` and `mode` attributes with additional configurability provided by `stable`:
* *sort*=`value`, *mode*=`max`, *stable*=`false` - descending by value, relative order of equal elements not guaranteed to be maintained
* *sort*=`value`, *mode*=`max`, *stable*=`true` - descending by value, relative order of equal elements guaranteed to be maintained
* *sort*=`value`, *mode*=`min`, *stable*=`false` - ascending by value, relative order of equal elements not guaranteed to be maintained
* *sort*=`value`, *mode*=`min`, *stable*=`true` - ascending by value, relative order of equal elements guaranteed to be maintained
* *sort*=`index`, *mode*=`max`, *stable*=`false` - ascending by index, relative order of equal elements not guaranteed to be maintained
* *sort*=`index`, *mode*=`max`, *stable*=`true` - ascending by index, relative order of equal elements guaranteed to be maintained
* *sort*=`index`, *mode*=`min`, *stable*=`false` - ascending by index, relative order of equal elements not guaranteed to be maintained
* *sort*=`index`, *mode*=`min`, *stable*=`true` - ascending by index, relative order of equal elements guaranteed to be maintained
* *sort*=`none` , *mode*=`max` - undefined
* *sort*=`none` , *mode*=`min` - undefined
meaning that for each slice ``input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]`` the *TopK* values are computed individually.
The relative order of equivalent elements is only preserved if the *stable* attribute is set to `true`. This makes the implementation use stable sorting algorithm during the computation of TopK elements. Otherwise the output order is undefined.
Sorting and minimum/maximum are controlled by ``sort`` and ``mode`` attributes with additional configurability provided by ``stable``:
* *sort* = ``value`` , *mode* = ``max`` , *stable* = ``false`` - descending by value, relative order of equal elements not guaranteed to be maintained
* *sort* = ``value`` , *mode* = ``max`` , *stable* = ``true`` - descending by value, relative order of equal elements guaranteed to be maintained
* *sort* = ``value`` , *mode* = ``min`` , *stable* = ``false`` - ascending by value, relative order of equal elements not guaranteed to be maintained
* *sort* = ``value`` , *mode* = ``min`` , *stable* = ``true`` - ascending by value, relative order of equal elements guaranteed to be maintained
* *sort* = ``index`` , *mode* = ``max`` , *stable* = ``false`` - ascending by index, relative order of equal elements not guaranteed to be maintained
* *sort* = ``index`` , *mode* = ``max`` , *stable* = ``true`` - ascending by index, relative order of equal elements guaranteed to be maintained
* *sort* = ``index`` , *mode* = ``min`` , *stable* = ``false`` - ascending by index, relative order of equal elements not guaranteed to be maintained
* *sort* = ``index`` , *mode* = ``min`` , *stable* = ``true`` - ascending by index, relative order of equal elements guaranteed to be maintained
* *sort* = ``none`` , *mode* = ``max`` - undefined
* *sort* = ``none`` , *mode* = ``min`` - undefined
The relative order of equivalent elements is only preserved if the ``stable`` attribute is set to ``true``. This makes the implementation use a stable sorting algorithm during the computation of TopK elements. Otherwise, the output order is undefined.
The "by index" order means that the input tensor's elements are still sorted by value but their order in the output tensor is additionally determined by the indices of those elements in the input tensor. This might yield multiple correct results though. For example if the input tensor contains the following elements:
input = [5, 3, 1, 2, 5, 5]
.. code-block:: cpp
input = [5, 3, 1, 2, 5, 5]
and when TopK is configured the following way:
mode = min
sort = index
k = 4
.. code-block:: cpp
mode = min
sort = index
k = 4
then the 3 following results are correct:
output_values = [5, 3, 1, 2]
output_indices = [0, 1, 2, 3]
.. code-block:: cpp
output_values = [3, 1, 2, 5]
output_indices = [1, 2, 3, 4]
output_values = [5, 3, 1, 2]
output_indices = [0, 1, 2, 3]
output_values = [3, 1, 2, 5]
output_indices = [1, 2, 3, 5]
output_values = [3, 1, 2, 5]
output_indices = [1, 2, 3, 4]
When the `stable` attribute is additionally set to `true`, the example above will only have a single correct solution:
output_values = [3, 1, 2, 5]
output_indices = [1, 2, 3, 5]
output_values = [5, 3, 1, 2]
output_indices = [0, 1, 2, 3]
When the ``stable`` attribute is additionally set to *true*, the example above will only have a single correct solution:
The indices are always sorted ascendingly when `sort == index` for any given TopK node. Setting `sort == index` and `mode == max` means that the values are first sorted in the descending order but the indices which affect the order of output elements are sorted ascendingly.
.. code-block:: cpp
output_values = [5, 3, 1, 2]
output_indices = [0, 1, 2, 3]
The indices are always sorted in ascending order when ``sort == index`` for any given TopK node. Setting ``sort == index`` and ``mode == max`` means that the values are first sorted in descending order, but the indices, which affect the order of output elements, are sorted in ascending order.
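
A numpy sketch of the stable variant of the example above (``mode = min``, ``sort = index``, ``k = 4``); a stable argsort keeps the first occurrence among equal values, which yields the single correct solution:

.. code-block:: python

   import numpy as np

   x = np.array([5, 3, 1, 2, 5, 5])
   k = 4

   idx = np.argsort(x, kind="stable")[:k]   # [2, 3, 1, 0] - k smallest, stable
   idx = np.sort(idx)                       # [0, 1, 2, 3] - ascending by index
   print(x[idx], idx)                       # [5 3 1 2] [0 1 2 3]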
**Example**
This example assumes that `K` is equal to 10:
This example assumes that ``K`` is equal to 10:
.. code-block:: cpp
<layer ... type="TopK" ... >
<data axis="3" mode="max" sort="value" stable="true" index_element_type="i64"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
<port id="1">
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>10</dim>
</port>
<port id="3">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>10</dim>
</port>
</output>
</layer>
@endsphinxdirective
```xml
<layer ... type="TopK" ... >
<data axis="3" mode="max" sort="value" stable="true" index_element_type="i64"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>224</dim>
</port>
<port id="1">
</port>
</input>
<output>
<port id="2">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>10</dim>
</port>
<port id="3">
<dim>1</dim>
<dim>3</dim>
<dim>224</dim>
<dim>10</dim>
</port>
</output>
</layer>
```
@ -1,5 +1,7 @@
# TopK {#openvino_docs_ops_sort_TopK_3}
@sphinxdirective
**Versioned name**: *TopK-3*
**Category**: *Sorting and maximization*
@ -12,21 +14,21 @@
* **Description**: Specifies the axis along which the values are retrieved.
* **Range of values**: An integer. Negative value means counting dimension from the end.
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *mode*
* **Description**: Specifies which operation is used to select the biggest element of two.
* **Range of values**: `min`, `max`
* **Type**: `string`
* **Range of values**: ``min``, ``max``
* **Type**: ``string``
* **Required**: *yes*
* *sort*
* **Description**: Specifies order of output elements and/or indices.
* **Range of values**: `value`, `index`, `none`
* **Type**: `string`
* **Range of values**: ``value``, ``index``, ``none``
* **Type**: ``string``
* **Required**: *yes*
* *index_element_type*
@ -46,62 +48,68 @@
**Outputs**:
* **1**: Output tensor of type *T* with top *k* values from the input tensor along specified dimension *axis*. The shape of the tensor is `[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]`.
* **1**: Output tensor of type *T* with top *k* values from the input tensor along specified dimension *axis*. The shape of the tensor is ``[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]``.
* **2**: Output tensor with top *k* indices for each slice along *axis* dimension of type *T_IND*. The shape of the tensor is the same as for the 1st output, that is `[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]`.
* **2**: Output tensor with top *k* indices for each slice along *axis* dimension of type *T_IND*. The shape of the tensor is the same as for the 1st output, that is ``[input1.shape[0], ..., input1.shape[axis-1], k, input1.shape[axis+1], ...]``.
**Types**
* *T*: any numeric type.
* *T_IND*: `int64` or `int32`.
* *T_IND*: ``int64`` or ``int32``.
**Detailed Description**
The output tensor is populated by values computed in the following way:
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
.. code-block:: cpp
So for each slice `input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]` which represents 1D array, *TopK* value is computed individually.
output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
Sorting and minimum/maximum are controlled by `sort` and `mode` attributes:
* *mode*=`max`, *sort*=`value` - descending by value
* *mode*=`max`, *sort*=`index` - ascending by index
* *mode*=`max`, *sort*=`none` - undefined
* *mode*=`min`, *sort*=`value` - ascending by value
* *mode*=`min`, *sort*=`index` - ascending by index
* *mode*=`min`, *sort*=`none` - undefined
So for each slice ``input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]``, which represents a 1D array, the *TopK* value is computed individually.
Sorting and minimum/maximum are controlled by ``sort`` and ``mode`` attributes:
* *mode* = ``max``, *sort* = ``value`` - descending by value
* *mode* = ``max``, *sort* = ``index`` - ascending by index
* *mode* = ``max``, *sort* = ``none`` - undefined
* *mode* = ``min``, *sort* = ``value`` - ascending by value
* *mode* = ``min``, *sort* = ``index`` - ascending by index
* *mode* = ``min``, *sort* = ``none`` - undefined
If there are several elements with the same value then their output order is not determined.
**Example**
```xml
<layer ... type="TopK" ... >
<data axis="1" mode="max" sort="value" index_element_type="i64"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<!-- k = 3 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="3">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp
<layer ... type="TopK" ... >
<data axis="1" mode="max" sort="value" index_element_type="i64"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<!-- k = 3 -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="3">
<dim>6</dim>
<dim>3</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
@endsphinxdirective