DOCS shift to rst (#17346)

This commit is contained in:
Sebastian Golebiewski
2023-05-04 13:29:16 +02:00
committed by GitHub
parent 8c95c90e45
commit c785551b57
11 changed files with 668 additions and 558 deletions


# LogSoftmax {#openvino_docs_ops_activation_LogSoftmax_5}
@sphinxdirective
**Versioned name**: *LogSoftmax-5*
**Category**: *Activation function*
**Short description**: LogSoftmax computes the natural logarithm of softmax values for the given input.
.. note::

   It is recommended not to compute LogSoftmax directly as ``Log(Softmax(x, axis))``; it is more numerically stable to compute LogSoftmax as:

   .. math::

      t = (x - ReduceMax(x,\ axis)) \\
      LogSoftmax(x, axis) = t - Log(ReduceSum(Exp(t),\ axis))
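A quick NumPy sketch (illustrative only, not part of the specification) shows why the stable form matters: a naive ``exp(x)`` overflows for large inputs, while the formula above stays finite:

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Stable form from the note above:
    # t = x - ReduceMax(x, axis)
    # LogSoftmax = t - Log(ReduceSum(Exp(t), axis))
    t = x - np.max(x, axis=axis, keepdims=True)
    return t - np.log(np.sum(np.exp(t), axis=axis, keepdims=True))

x = np.array([[1000.0, 1001.0, 1002.0]])  # np.exp(x) would overflow to inf
y = log_softmax(x, axis=1)
print(y)  # finite values; np.exp(y) sums to 1 along axis 1
```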
**Attributes**
**Inputs**:
* **1**: Input tensor *x* of type *T* with enough dimensions to be compatible with the *axis* attribute. **Required.**
**Outputs**:
* **1**: The resulting tensor of the same shape and of type *T*.
**Types**
**Mathematical Formulation**
.. math::

   y_{c} = ln\left(\frac{e^{Z_{c}}}{\sum_{d=1}^{C}e^{Z_{d}}}\right)

where :math:`C` is the size of the tensor along the *axis* dimension.
**Example**
.. code-block:: xml

   <layer ... type="LogSoftmax" ... >
       <data axis="1" />
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="3">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
@endsphinxdirective

# Log {#openvino_docs_ops_arithmetic_Log_1}
@sphinxdirective
**Versioned name**: *Log-1*
**Category**: *Arithmetic unary*
**Detailed description**: *Log* does the following with the input tensor *a*:
.. math::

   a_{i} = log(a_{i})
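The operation applies the natural logarithm element-wise; as a quick sketch (illustrative only), NumPy's ``np.log`` behaves the same way:

```python
import numpy as np

a = np.array([1.0, np.e, np.e ** 2])
out = np.log(a)  # element-wise natural logarithm
print(out)
```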
**Attributes**:
No attributes available.
**Inputs**
*Example 1*
.. code-block:: xml

   <layer ... type="Log">
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="1">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
@endsphinxdirective

# LessEqual {#openvino_docs_ops_comparison_LessEqual_1}
@sphinxdirective
**Versioned name**: *LessEqual-1*
**Category**: *Comparison binary*
**Short description**: *LessEqual* performs element-wise comparison operation with two given tensors applying broadcast rules specified in the *auto_broadcast* attribute.
**Detailed description**
Before performing the comparison operation, input tensors *a* and *b* are broadcasted if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting *LessEqual* does the following with the input tensors *a* and *b*:
.. math::

   o_{i} = a_{i} \leq b_{i}
**Attributes**:
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes should match,
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`,
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`.
* **Type**: ``string``
* **Default value**: "numpy"
* **Required**: *no*
*Example 1: no broadcast*
.. code-block:: xml

   <layer ... type="LessEqual">
       <data auto_broadcast="none"/>
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
           <port id="1">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
*Example 2: numpy broadcast*
.. code-block:: xml

   <layer ... type="LessEqual">
       <data auto_broadcast="numpy"/>
       <input>
           <port id="0">
               <dim>8</dim>
               <dim>1</dim>
               <dim>6</dim>
               <dim>1</dim>
           </port>
           <port id="1">
               <dim>7</dim>
               <dim>1</dim>
               <dim>5</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>8</dim>
               <dim>7</dim>
               <dim>6</dim>
               <dim>5</dim>
           </port>
       </output>
   </layer>
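The shapes in Example 2 follow numpy broadcasting rules; a small NumPy sketch (illustrative only) reproduces the output shape:

```python
import numpy as np

# Input shapes from Example 2: (8, 1, 6, 1) and (7, 1, 5).
# numpy broadcasting aligns trailing dimensions, so size-1 axes expand.
a = np.zeros((8, 1, 6, 1))
b = np.zeros((7, 1, 5))
out = a <= b  # element-wise comparison after broadcasting
print(out.shape)  # matches the output port dims: (8, 7, 6, 5)
```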
@endsphinxdirective

# Less {#openvino_docs_ops_comparison_Less_1}
@sphinxdirective
**Versioned name**: *Less-1*
**Category**: *Comparison binary*
**Short description**: *Less* performs element-wise comparison operation with two given tensors applying multi-directional broadcast rules.
**Detailed description**
Before performing the comparison operation, input tensors *a* and *b* are broadcasted if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting *Less* does the following with the input tensors *a* and *b*:
.. math::

   o_{i} = a_{i} < b_{i}
**Attributes**:
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes should match
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`
* **Type**: ``string``
* **Default value**: "numpy"
* **Required**: *no*
*Example 1*
.. code-block:: xml

   <layer ... type="Less">
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
           <port id="1">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
*Example 2: broadcast*
.. code-block:: xml

   <layer ... type="Less">
       <input>
           <port id="0">
               <dim>8</dim>
               <dim>1</dim>
               <dim>6</dim>
               <dim>1</dim>
           </port>
           <port id="1">
               <dim>7</dim>
               <dim>1</dim>
               <dim>5</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>8</dim>
               <dim>7</dim>
               <dim>6</dim>
               <dim>5</dim>
           </port>
       </output>
   </layer>
@endsphinxdirective

# Loop {#openvino_docs_ops_infrastructure_Loop_5}
@sphinxdirective
**Versioned name**: *Loop-5*
**Category**: *Infrastructure*
**Short description**: *Loop* operation performs recurrent execution of the network, which is described in the ``body``, iterating through the data.
The operation has similar semantics to the ONNX Loop `operation <https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Loop-13>`__.
**Detailed description**
The body of the Loop can be executed 0 or more times, depending on the values passed to the Loop operation inputs called "trip count" and "execution condition", and on the input of the Loop body called "current iteration".
These Loop operation inputs have the following meaning:
1. Trip count is an integer scalar or 1D tensor with 1 element specifying the maximum number of iterations. To simulate an infinite loop, the constant ``-1`` can be provided as input.
2. Loop execution condition input is a boolean scalar or 1D tensor with 1 element specifying whether to run the first loop iteration or not. Note that the body of the Loop must yield the condition value for the consecutive iterations.
There are several combinations of these two inputs ``(trip_count, execution condition)`` which are described in the following code snippet:
.. code-block:: cpp

   input (-1, true) // infinite loop
   bool cond = true;
   for (int i = 0; cond; ++i)
   {
       cond = true; // sub-graph calculating condition must always return "true"!
   }

   input (-1, cond) // while loop
   bool cond = ...;
   for (int i = 0; cond; ++i)
   {
       cond = ...;
   }

   input (-1, true) // do-while loop
   bool cond = true;
   for (int i = 0; cond; ++i)
   {
       cond = ...;
   }

   input (trip_count, true) // for loop
   int trip_count = ...;
   bool cond = true;
   for (int i = 0; i < trip_count; ++i)
   {
       cond = true; // sub-graph calculating condition must always return "true"!
   }

   input (trip_count, cond) // for with condition
   int trip_count = ...;
   bool cond = ...;
   for (int i = 0; i < trip_count && cond; ++i)
   {
       cond = ...;
   }
3. One of the body graph inputs, called "current iteration", is an integer scalar or 1D integer tensor with 1 element specifying the current iteration number. The iteration number starts from 0 and is incremented by one for each iteration. This input is optional and may not exist if the iteration number value is not used in the body.
4. One of the body graph outputs, called "condition", is a boolean scalar or 1D tensor with 1 element. This value is used to decide whether to perform the next iteration or not.
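The trip count and execution condition semantics described above can be sketched in Python (an illustrative emulation, not an OpenVINO API; the ``body`` callable and state handling are assumptions made for the example):

```python
def loop(trip_count, execution_condition, body, state):
    # Emulates Loop semantics: body receives the current iteration number
    # and the carried state, and returns (condition, new_state).
    # trip_count == -1 simulates an infinite loop.
    cond = execution_condition
    i = 0
    while cond and (trip_count == -1 or i < trip_count):
        cond, state = body(i, state)
        i += 1
    return state

# "for loop" flavor: fixed trip count, condition always true
result = loop(5, True, lambda i, s: (True, s + i), 0)
print(result)  # sum of 0..4
```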
Loop operation description in the IR has regular sections: ``input`` and ``output``. They connect the Loop body to the outer graph and specify condition(s).
Loop operation description in the IR also has several special sections: ``body``, ``port_map`` and ``back_edges``, similar to the ones from the TensorIterator operation but with some important differences described below.
1. The body operation getting an input from the main graph should have an entry in the ``port_map`` section of the Loop operation. These edges connect input ports of the Loop with the body ``Parameter`` operations.
2. Input tensors to the Loop can be sliced along a specified axis, and the Loop can iterate over all sliced parts. The corresponding ``input`` entry in the ``port_map`` should have the ``axis`` attribute specifying the axis to slice. Therefore, inputs to the Loop operation corresponding to ``input`` entries in the ``port_map`` without the ``axis`` attribute are used "as is" (without slicing).
3. The body operation producing a tensor to be used in the subsequent iterations (like in RNN models) should have a back edge described in the ``back_edges`` section of the operation. The back edge connects the respective body ``Parameter`` and ``Result`` operations. For such a case the Loop operation node provides input for the first iteration, while the corresponding Loop operation output produces the tensor computed during the last iteration.
4. Output tensors produced by a particular body operation across all iterations can be concatenated and returned as a Loop operation output (this is a "scan output" according to the ONNX Loop operation `specification <https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Loop-13>`__). The corresponding ``output`` entry in the ``port_map`` should have the ``axis`` attribute specifying the axis to concatenate. Therefore, outputs from operations corresponding to ``output`` entries in the ``port_map`` without the ``axis`` attribute are returned "as is" (without concatenation).
5. There is one body ``Parameter`` operation not connected through the ``port_map``. This is the "current iteration" input. The Loop operation is responsible for providing the appropriate value for each iteration.
6. Connection of nodes inside the Loop body with the main graph should be done through ``Parameter`` and ``Result`` body operations. No other ways to connect graphs are allowed.
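The slicing and concatenation behavior controlled by the ``axis`` attribute of ``port_map`` entries (points 2 and 4 above) can be sketched with NumPy (an illustrative emulation; ``loop_with_port_map`` and ``body`` are hypothetical names, not OpenVINO API):

```python
import numpy as np

def loop_with_port_map(x, axis, body):
    # Sliced input (an "input" port_map entry with axis set):
    # the Loop receives one slice per iteration.
    slices = np.split(x, x.shape[axis], axis=axis)
    # Scan output (an "output" port_map entry with axis set):
    # per-iteration body results are concatenated along the same axis.
    outputs = [body(s) for s in slices]
    return np.concatenate(outputs, axis=axis)

x = np.arange(6.0).reshape(3, 2)
y = loop_with_port_map(x, axis=0, body=lambda s: s * 2)
print(y.shape)  # concatenation restores the iterated axis: (3, 2)
```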
**Loop attributes**:
* **Body**:
``body`` is a network that will be recurrently executed. The network is described operation by operation as a typical IR network.
* **Body attributes**:

  No attributes available.
* **Port map**:
*port_map* is a set of rules to map input or output data tensors of the ``Loop`` operation onto ``body`` data tensors. The ``port_map`` entries can be ``input`` and ``output``. Each entry describes a corresponding mapping rule.
* **Port map attributes**:
* *external_port_id*

  * **Description**: *external_port_id* is a port ID of the ``Loop`` operation. The value ``-1`` means that the body node is not connected to the ``Loop`` operation.
  * **Range of values**: IDs of the *Loop* outputs
  * **Type**: ``int``
  * **Default value**: None
  * **Required**: *yes*

* *internal_layer_id*

  * **Description**: *internal_layer_id* is a ``Parameter`` or ``Result`` operation ID inside the ``body`` network to map to.
  * **Range of values**: IDs of the ``Parameter`` operations inside the *Loop* operation
  * **Type**: ``int``
  * **Default value**: None
  * **Required**: *yes*

* *axis*

  * **Description**: if *axis* is specified for an ``output`` entry, it is the axis to concatenate the body ``Result`` output across all iterations. If *axis* is specified for an ``input`` entry, it is the axis to iterate through; it triggers the slicing of the input tensor.
  * **Range of values**: an integer. Negative value means counting dimension from the end.
  * **Type**: ``int``
  * **Default value**: None
  * **Required**: *no*
* **Back edges**:
*back_edges* is a set of rules to transfer tensor values from ``body`` outputs at one iteration to ``body`` parameters at the next iteration. A back edge connects some ``Result`` operation in the ``body`` to a ``Parameter`` operation in the same ``body``.
* **Back edge attributes**:
* *from-layer*

  * **Description**: *from-layer* is a ``Result`` operation ID inside the ``body`` network.
  * **Range of values**: IDs of the ``Result`` operations inside the *Loop*
  * **Type**: ``int``
  * **Default value**: None
  * **Required**: *yes*

* *to-layer*

  * **Description**: *to-layer* is a ``Parameter`` operation ID inside the ``body`` network to end mapping.
  * **Range of values**: IDs of the ``Parameter`` operations inside the *Loop*
  * **Type**: ``int``
  * **Default value**: None
  * **Required**: *yes*
**Loop Inputs**
* **Trip count**: A scalar or 1D tensor with 1 element of ``int64`` or ``int32`` type specifying maximum number of iterations. **Required.**

* **ExecutionCondition**: A scalar or 1D tensor with 1 element of ``boolean`` type specifying whether to execute the first iteration or not. ``True`` value means to execute the 1st iteration. **Required.**
* **Multiple other inputs**: tensors of different types and shapes. **Optional.**
**Loop Outputs**
* **Multiple outputs**: Results of execution of the ``body``. Tensors of any type and shape.
**Body Inputs**
* **Multiple inputs**: tensors of different types and shapes except the one corresponding to the current iteration number. This input is marked in the ``port_map`` with attribute ``purpose = "current_iteration"`` and produces a scalar or 1D tensor with 1 element of ``int64`` or ``int32`` type. **Optional.**
**Body Outputs**
* **Multiple outputs**: Results of execution of the ``body``. Tensors of any type and shape except the one corresponding to the output with the execution condition. This output is marked in the ``port_map`` with attribute ``purpose = "execution_condition"``, is mandatory, and produces a scalar or 1D tensor with 1 element of ``boolean`` type. Other outputs are optional.
**Examples**
*Example 1: a typical Loop structure*
.. code-block:: xml

   <layer type="Loop" ... >
       <input> ... </input>
       <output> ... </output>
       <port_map>
           <input external_port_id="0" internal_layer_id="0"/>
           <input external_port_id="1" internal_layer_id="1"/>
           <input external_port_id="-1" internal_layer_id="2" purpose="current_iteration"/>
           ...
           <output external_port_id="3" internal_layer_id="4"/>
           <output external_port_id="4" internal_layer_id="10" axis="1"/>
           <output external_port_id="-1" internal_layer_id="22" purpose="execution_condition"/>
           ...
       </port_map>
       <back_edges>
           <edge from-layer="1" to-layer="5"/>
           ...
       </back_edges>
       <body>
           <layers> ... </layers>
           <edges> ... </edges>
       </body>
   </layer>
@endsphinxdirective

# LogicalAnd {#openvino_docs_ops_logical_LogicalAnd_1}
@sphinxdirective
**Versioned name**: *LogicalAnd-1*
**Category**: *Logical binary*
**Short description**: *LogicalAnd* performs element-wise logical AND operation with two given tensors applying multi-directional broadcast rules.
**Detailed description**: Before performing the logical operation, input tensors *a* and *b* are broadcasted if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting *LogicalAnd* does the following with the input tensors *a* and *b*:
.. math::

   o_{i} = a_{i} \wedge b_{i}
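The element-wise AND above can be sketched with NumPy (illustrative only, not the OpenVINO implementation):

```python
import numpy as np

a = np.array([True, True, False, False])
b = np.array([True, False, True, False])
out = np.logical_and(a, b)  # element-wise logical AND
print(out)
```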
**Attributes**:
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match,
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`,
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`.
* **Type**: ``string``
* **Default value**: "numpy"
* **Required**: *no*
**Types**
* *T_BOOL*: ``boolean``.
**Examples**
*Example 1: no broadcast*
.. code-block:: xml

   <layer ... type="LogicalAnd">
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
           <port id="1">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
*Example 2: numpy broadcast*
.. code-block:: xml

   <layer ... type="LogicalAnd">
       <input>
           <port id="0">
               <dim>8</dim>
               <dim>1</dim>
               <dim>6</dim>
               <dim>1</dim>
           </port>
           <port id="1">
               <dim>7</dim>
               <dim>1</dim>
               <dim>5</dim>
           </port>
       </input>
       <output>
           <port id="2">
               <dim>8</dim>
               <dim>7</dim>
               <dim>6</dim>
               <dim>5</dim>
           </port>
       </output>
   </layer>
@endsphinxdirective

# LogicalNot {#openvino_docs_ops_logical_LogicalNot_1}
@sphinxdirective
**Versioned name**: *LogicalNot-1*
**Category**: *Logical unary*
**Detailed description**: *LogicalNot* performs element-wise logical negation operation with given tensor, based on the following mathematical formula:
.. math::

   a_{i} = \lnot a_{i}
**Attributes**: *LogicalNot* operation has no attributes.
**Types**
* *T_BOOL*: ``boolean``.
.. math::

   a_{i} = \lnot a_{i}
**Example**
.. code-block:: xml

   <layer ... type="LogicalNot">
       <input>
           <port id="0">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </input>
       <output>
           <port id="1">
               <dim>256</dim>
               <dim>56</dim>
           </port>
       </output>
   </layer>
@endsphinxdirective

# LogicalOr {#openvino_docs_ops_logical_LogicalOr_1}
@sphinxdirective
**Versioned name**: *LogicalOr-1*
**Category**: *Logical binary*
**Short description**: *LogicalOr* performs element-wise logical OR operation with two given tensors applying multi-directional broadcast rules.
**Detailed description**: Before performing the logical operation, input tensors *a* and *b* are broadcasted if their shapes are different and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting *LogicalOr* does the following with the input tensors *a* and *b*:
.. math::

   o_{i} = a_{i} \lor b_{i}
**Attributes**:
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`,
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`.
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@@ -38,56 +43,61 @@ o_{i} = a_{i} \lor b_{i}
**Types**
* *T_BOOL*: `boolean`.
* *T_BOOL*: ``boolean``.
**Examples**
*Example 1: no broadcast*
```xml
<layer ... type="LogicalOr">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer ... type="LogicalOr">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
*Example 2: numpy broadcast*
```xml
<layer ... type="LogicalOr">
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer ... type="LogicalOr">
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
@endsphinxdirective


@@ -1,18 +1,21 @@
# LogicalXor {#openvino_docs_ops_logical_LogicalXor_1}
@sphinxdirective
**Versioned name**: *LogicalXor-1*
**Category**: *Logical binary*
**Short description**: *LogicalXor* performs element-wise logical XOR operation with two given tensors applying multi-directional broadcast rules.
**Detailed description**: Before performing logical operation, input tensors *a* and *b* are broadcasted if their shapes are different and `auto_broadcast` attributes is not `none`. Broadcasting is performed according to `auto_broadcast` value.
**Detailed description**: Before performing the logical operation, input tensors *a* and *b* are broadcast if their shapes differ and the ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to the ``auto_broadcast`` value.
After broadcasting *LogicalXor* does the following with the input tensors *a* and *b*:
\f[
o_{i} = a_{i} \oplus b_{i}
\f]
.. math::
o_{i} = a_{i} \oplus b_{i}
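The element-wise XOR and its numpy-style broadcast can be sanity-checked with NumPy (an illustrative sketch only, not OpenVINO API code):

```python
import numpy as np

# Element-wise XOR on boolean tensors of matching shape (no broadcast):
a = np.array([True, True, False, False])
b = np.array([True, False, True, False])
print(np.logical_xor(a, b))  # True only where a and b differ

# With numpy broadcast rules, [8, 1, 6, 1] XOR [7, 1, 5] -> [8, 7, 6, 5]:
o = np.logical_xor(np.zeros((8, 1, 6, 1), dtype=bool),
                   np.ones((7, 1, 5), dtype=bool))
print(o.shape)  # (8, 7, 6, 5)
```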
**Attributes**:
@@ -20,9 +23,11 @@ o_{i} = a_{i} \oplus b_{i}
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md),
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md).
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`,
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`.
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@@ -38,56 +43,62 @@ o_{i} = a_{i} \oplus b_{i}
**Types**
* *T_BOOL*: `boolean`.
* *T_BOOL*: ``boolean``.
**Examples**
*Example 1: no broadcast*
```xml
<layer ... type="LogicalXor">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer ... type="LogicalXor">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
*Example 2: numpy broadcast*
```xml
<layer ... type="LogicalXor">
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer ... type="LogicalXor">
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
@endsphinxdirective


@@ -1,5 +1,7 @@
# LRN {#openvino_docs_ops_normalization_LRN_1}
@sphinxdirective
**Versioned name**: *LRN-1*
**Category**: *Normalization*
@@ -9,91 +11,105 @@
**Detailed description**:
Local Response Normalization performs a normalization over local input regions.
Each input value is divided by
\f[ (bias + \frac{alpha}{{size}^{len(axes)}} \cdot \sum_{i} data_{i})^{beta} \f]
The sum is taken over a region of a side length `size` and number of dimensions equal to number of axes.
.. math::
(bias + \frac{alpha}{{size}^{len(axes)}} \cdot \sum_{i} data_{i})^{beta}
The sum is taken over a region of side length ``size`` and a number of dimensions equal to the number of axes.
The region is centered at the input value that's being normalized (with zero padding added if needed).
Here is an example for 4D `data` input tensor and `axes = [1]`:
```
sqr_sum[a, b, c, d] =
sum(data[a, max(0, b - size / 2) : min(data.shape[1], b + size / 2 + 1), c, d] ** 2)
output = data / (bias + (alpha / size ** len(axes)) * sqr_sum) ** beta
```
Here is an example for a 4D ``data`` input tensor and ``axes = [1]``:
.. code-block:: py
sqr_sum[a, b, c, d] =
sum(data[a, max(0, b - size / 2) : min(data.shape[1], b + size / 2 + 1), c, d] ** 2)
output = data / (bias + (alpha / size ** len(axes)) * sqr_sum) ** beta
Example for a 4D ``data`` input tensor and ``axes = [2, 3]``:
.. code-block:: py
sqr_sum[a, b, c, d] =
sum(data[a, b, max(0, c - size / 2) : min(data.shape[2], c + size / 2 + 1), max(0, d - size / 2) : min(data.shape[3], d + size / 2 + 1)] ** 2)
output = data / (bias + (alpha / size ** len(axes)) * sqr_sum) ** beta
Example for 4D `data` input tensor and `axes = [2, 3]`:
```
sqr_sum[a, b, c, d] =
sum(data[a, b, max(0, c - size / 2) : min(data.shape[2], c + size / 2 + 1), max(0, d - size / 2) : min(data.shape[3], d + size / 2 + 1)] ** 2)
output = data / (bias + (alpha / size ** len(axes)) * sqr_sum) ** beta
```
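The ``axes = [1]`` pseudocode above can be written out as a small NumPy reference for checking values by hand (``lrn_channels`` is a hypothetical helper name for this sketch, not part of OpenVINO; the half-window is taken as ``size // 2``):

```python
import numpy as np

def lrn_channels(data, alpha, beta, bias, size):
    """LRN over the channel axis (axes = [1]) of a 4D tensor.
    With one axis, alpha / size ** len(axes) reduces to alpha / size."""
    out = np.empty_like(data)
    channels = data.shape[1]
    for n in range(data.shape[0]):
        for c in range(channels):
            lo = max(0, c - size // 2)
            hi = min(channels, c + size // 2 + 1)
            # Sum of squares over the channel window centered at c
            sqr_sum = np.sum(data[n, lo:hi] ** 2, axis=0)
            out[n, c] = data[n, c] / (bias + (alpha / size) * sqr_sum) ** beta
    return out
```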
**Attributes**:
* *alpha*
* **Description**: *alpha* represents the scaling attribute for the normalizing sum. For example, *alpha* equal `0.0001` means that the normalizing sum is multiplied by `0.0001`.
* **Description**: *alpha* represents the scaling attribute for the normalizing sum. For example, *alpha* equal to ``0.0001`` means that the normalizing sum is multiplied by ``0.0001``.
* **Range of values**: no restrictions
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *beta*
* **Description**: *beta* represents the exponent for the normalizing sum. For example, *beta* equal `0.75` means that the normalizing sum is raised to the power of `0.75`.
* **Description**: *beta* represents the exponent for the normalizing sum. For example, *beta* equal to ``0.75`` means that the normalizing sum is raised to the power of ``0.75``.
* **Range of values**: positive number
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *bias*
* **Description**: *bias* represents the offset. It is usually a positive number, used to avoid division by zero.
* **Range of values**: no restrictions
* **Type**: `float`
* **Type**: ``float``
* **Required**: *yes*
* *size*
* **Description**: *size* represents the side length of the region to be used for the normalization sum. The region can have one or more dimensions depending on the second input axes indices.
* **Range of values**: positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
**Inputs**
* **1**: `data` - tensor of type *T* and arbitrary shape. **Required.**
* **1**: ``data`` - tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - 1D tensor of type *T_IND* which specifies indices of dimensions in `data` which define normalization slices. **Required.**
* **2**: ``axes`` - 1D tensor of type *T_IND* that specifies indices of the ``data`` dimensions defining normalization slices. **Required.**
**Outputs**
* **1**: Output tensor of type *T* and the same shape as the `data` input tensor.
* **1**: Output tensor of type *T* and the same shape as the ``data`` input tensor.
**Types**
* *T*: any supported floating-point type.
* *T_IND*: any supported integer type.
**Example**
```xml
<layer id="1" type="LRN" ...>
<data alpha="1.0e-04" beta="0.75" size="5" bias="1"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent normalization for each pixel along channels -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer id="1" type="LRN" ...>
<data alpha="1.0e-04" beta="0.75" size="5" bias="1"/>
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
<dim>1</dim> <!-- value is [1] that means independent normalization for each pixel along channels -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
@endsphinxdirective


@@ -1,26 +1,29 @@
# LSTMCell {#openvino_docs_ops_sequence_LSTMCell_1}
@sphinxdirective
**Versioned name**: *LSTMCell-1*
**Category**: *Sequence processing*
**Short description**: *LSTMCell* operation represents a single LSTM cell. It computes the output using the formula described in the original paper [Long Short-Term Memory](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320&rep=rep1&type=pdf).
**Short description**: *LSTMCell* operation represents a single LSTM cell. It computes the output using the formula described in the original paper `Long Short-Term Memory <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320&rep=rep1&type=pdf>`__.
**Detailed description**: *LSTMCell* computes the outputs *Ht* and *Ct* for the current time step based on the following formula:
```
Formula:
* - matrix multiplication
(.) - Hadamard product (element-wise)
[,] - concatenation
f, g, h - are activation functions.
it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)
ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Wbf + Rbf)
ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)
Ct = ft (.) Ct-1 + it (.) ct
ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Wbo + Rbo)
Ht = ot (.) h(Ct)
```
.. code-block:: text
Formula:
* - matrix multiplication
(.) - Hadamard product (element-wise)
[,] - concatenation
f, g, h - are activation functions.
it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)
ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Wbf + Rbf)
ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)
Ct = ft (.) Ct-1 + it (.) ct
ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Wbo + Rbo)
Ht = ot (.) h(Ct)
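Assuming the default activations (f = sigmoid, g = h = tanh), the step above can be sketched in NumPy; ``lstm_cell`` is an illustrative helper, not OpenVINO API, and the "fico" gate order is the one the ``W``/``R``/``B`` inputs use:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(X, H, C, W, R, B):
    """One LSTMCell step. Gate order in W [4*hid, in], R [4*hid, hid]
    and B [4*hid] is assumed to be f, i, c, o ("fico")."""
    gates = X @ W.T + H @ R.T + B            # [batch, 4 * hidden_size]
    f, i, c, o = np.split(gates, 4, axis=1)
    ft, it, ot = sigmoid(f), sigmoid(i), sigmoid(o)
    ct = np.tanh(c)                          # g(...)
    Ct = ft * C + it * ct                    # Ct = ft (.) Ct-1 + it (.) ct
    Ht = ot * np.tanh(Ct)                    # Ht = ot (.) h(Ct)
    return Ht, Ct
```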
**Attributes**
@@ -28,7 +31,7 @@ Formula:
* **Description**: *hidden_size* specifies hidden state size.
* **Range of values**: a positive integer
* **Type**: `int`
* **Type**: ``int``
* **Required**: *yes*
* *activations*
@@ -43,7 +46,7 @@ Formula:
* **Description**: *activations_alpha, activations_beta* attributes of functions; the applicability and meaning of these attributes depend on the chosen activation functions
* **Range of values**: a list of floating-point numbers
* **Type**: `float[]`
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *no*
@@ -51,73 +54,78 @@ Formula:
* **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. Clipping is performed before activations.
* **Range of values**: a positive floating-point number
* **Type**: `float`
* **Type**: ``float``
* **Default value**: *infinity*, which means that clipping is not applied
* **Required**: *no*
**Inputs**
* **1**: `X` - 2D tensor of type *T* `[batch_size, input_size]`, input data. **Required.**
* **1**: ``X`` - 2D tensor of type *T* ``[batch_size, input_size]``, input data. **Required.**
* **2**: `initial_hidden_state` - 2D tensor of type *T* `[batch_size, hidden_size]`. **Required.**
* **2**: ``initial_hidden_state`` - 2D tensor of type *T* ``[batch_size, hidden_size]``. **Required.**
* **3**: `initial_cell_state` - 2D tensor of type *T* `[batch_size, hidden_size]`. **Required.**
* **3**: ``initial_cell_state`` - 2D tensor of type *T* ``[batch_size, hidden_size]``. **Required.**
* **4**: `W` - 2D tensor of type *T* `[4 * hidden_size, input_size]`, the weights for matrix multiplication, gate order: fico. **Required.**
* **4**: ``W`` - 2D tensor of type *T* ``[4 * hidden_size, input_size]``, the weights for matrix multiplication, gate order: fico. **Required.**
* **5**: `R` - 2D tensor of type *T* `[4 * hidden_size, hidden_size]`, the recurrence weights for matrix multiplication, gate order: fico. **Required.**
* **5**: ``R`` - 2D tensor of type *T* ``[4 * hidden_size, hidden_size]``, the recurrence weights for matrix multiplication, gate order: fico. **Required.**
* **6**: `B` 1D tensor of type *T* `[4 * hidden_size]`, the sum of biases (weights and recurrence weights), if not specified - assumed to be 0. **optional.**
* **6**: ``B`` - 1D tensor of type *T* ``[4 * hidden_size]``, the sum of biases (weights and recurrence weights); if not specified, assumed to be 0. **Optional.**
**Outputs**
* **1**: `Ho` - 2D tensor of type *T* `[batch_size, hidden_size]`, the last output value of hidden state.
* **1**: ``Ho`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, the last output value of hidden state.
* **2**: `Co` - 2D tensor of type *T* `[batch_size, hidden_size]`, the last output value of cell state.
* **2**: ``Co`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, the last output value of cell state.
**Types**
* *T*: any supported floating-point type.
**Example**
```xml
<layer ... type="LSTMCell" ...>
<data hidden_size="128"/>
<input>
<port id="0">
<dim>1</dim>
<dim>16</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="3">
<dim>512</dim>
<dim>16</dim>
</port>
<port id="4">
<dim>512</dim>
<dim>128</dim>
</port>
<port id="5">
<dim>512</dim>
</port>
</input>
<output>
<port id="6">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="7">
<dim>1</dim>
<dim>128</dim>
</port>
</output>
</layer>
```
.. code-block:: xml
<layer ... type="LSTMCell" ...>
<data hidden_size="128"/>
<input>
<port id="0">
<dim>1</dim>
<dim>16</dim>
</port>
<port id="1">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="2">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="3">
<dim>512</dim>
<dim>16</dim>
</port>
<port id="4">
<dim>512</dim>
<dim>128</dim>
</port>
<port id="5">
<dim>512</dim>
</port>
</input>
<output>
<port id="6">
<dim>1</dim>
<dim>128</dim>
</port>
<port id="7">
<dim>1</dim>
<dim>128</dim>
</port>
</output>
</layer>
@endsphinxdirective