bulk change type T to type *T* in spec (#6486)

* bulk change `type T` to `type *T*` in spec

* update all `T` which refer to a type to use the *T* pattern

* revert to `T` where T is a dimension

* fix *T*1 -> *T1*

* italicize types where there was no formatting
Patryk Elszkowski 2021-07-02 12:51:00 +02:00 committed by GitHub
parent ccf786438b
commit de53c40578
61 changed files with 253 additions and 257 deletions

View File

@@ -38,16 +38,16 @@ clamp( x_{i} )=\min\big( \max\left( x_{i}, min\_value \right), max\_value \big)
 **Inputs**:
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**:
-* **1**: A tensor of type `T` with same shape as input tensor.
+* **1**: A tensor of type *T* with same shape as input tensor.
 **Types**
 * *T*: any numeric type.
-* **Note**: In case of integral numeric type, ceil is used to convert *min* from `float` to `T` and floor is used to convert *max* from `float` to `T`.
+* **Note**: In case of integral numeric type, ceil is used to convert *min* from `float` to *T* and floor is used to convert *max* from `float` to *T*.
 **Example**
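Not part of the commit — a minimal scalar sketch of the *Clamp* semantics in the hunk above, including the note's ceil/floor handling of *min*/*max* for integral types (function names are illustrative):

```python
import math

def clamp(x, min_value, max_value):
    # clamp(x) = min(max(x, min_value), max_value)
    return min(max(x, min_value), max_value)

def clamp_integral(x, min_value, max_value):
    # Per the note: ceil converts *min* and floor converts *max*
    # from float before clamping an integral input.
    return clamp(x, math.ceil(min_value), math.floor(max_value))
```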

View File

@@ -20,7 +20,7 @@ Elu(x) = \left\{\begin{array}{r}
 where α corresponds to *alpha* attribute.
 *Elu* is equivalent to *ReLU* operation when *alpha* is equal to zero.
 **Attributes**
@@ -34,11 +34,11 @@ where α corresponds to *alpha* attribute.
 **Inputs**:
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**:
-* **1**: The result of element-wise *Elu* function applied to the input tensor. A tensor of type `T` and the same shape as input tensor.
+* **1**: The result of element-wise *Elu* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
 **Types**
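For reference only (not in the diff): the *Elu* formula truncated in the hunk header is, under the standard definition, `x` for positive inputs and `alpha * (exp(x) - 1)` otherwise, which reduces to *ReLU* when *alpha* is zero:

```python
import math

def elu(x, alpha=1.0):
    # Elu(x) = x if x > 0, else alpha * (exp(x) - 1)
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```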

View File

@@ -18,11 +18,11 @@ exp(x) = e^{x}
 **Inputs**
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**
-* **1**: The result of element-wise *Exp* function applied to the input tensor. A tensor of type `T` and the same shape as input tensor.
+* **1**: The result of element-wise *Exp* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
 **Types**
@@ -45,4 +45,4 @@ exp(x) = e^{x}
 </port>
 </output>
 </layer>
 ```

View File

@@ -27,11 +27,11 @@ Additionally, *Gelu* function may be approximated as follows:
 **Inputs**:
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**:
-* **1**: The result of element-wise *Gelu* function applied to the input tensor. A tensor of type `T` and the same shape as input tensor.
+* **1**: The result of element-wise *Gelu* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
 **Types**

View File

@@ -18,15 +18,15 @@ For each element from the input tensor calculates corresponding
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
-* **2**: `alpha` 0D tensor (scalar) of type T. **Required.**
+* **2**: `alpha` 0D tensor (scalar) of type *T*. **Required.**
-* **3**: `beta` 0D tensor (scalar) of type T. **Required.**
+* **3**: `beta` 0D tensor (scalar) of type *T*. **Required.**
 **Outputs**
-* **1**: The result of the hard sigmoid operation. A tensor of type T.
+* **1**: The result of the hard sigmoid operation. A tensor of type *T*.
 **Types**
@@ -51,4 +51,4 @@ For each element from the input tensor calculates corresponding
 </port>
 </output>
 </layer>
 ```

View File

@@ -24,11 +24,11 @@ LogSoftmax(x, axis) = t - Log(ReduceSum(Exp(t), axis))
 **Inputs**:
-* **1**: Input tensor *x* of type T with enough number of dimension to be compatible with *axis* attribute. Required.
+* **1**: Input tensor *x* of type *T* with enough number of dimension to be compatible with *axis* attribute. Required.
 **Outputs**:
-* **1**: The resulting tensor of the same shape and of type T.
+* **1**: The resulting tensor of the same shape and of type *T*.
 **Types**
@@ -59,4 +59,4 @@ where \f$C\f$ is a size of tensor along *axis* dimension.
 </port>
 </output>
 </layer>
 ```
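A hedged 1-D sketch of the formula in the hunk header, `LogSoftmax(x, axis) = t - Log(ReduceSum(Exp(t), axis))`, assuming `t` is the max-shifted input (the usual numerically stable intermediate):

```python
import math

def log_softmax(x):
    # t = x - ReduceMax(x), assumed from the spec's intermediate "t"
    t = [v - max(x) for v in x]
    # LogSoftmax(x) = t - log(sum(exp(t)))
    lse = math.log(sum(math.exp(v) for v in t))
    return [v - lse for v in t]
```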

View File

@@ -20,11 +20,11 @@ Mish(x) = x\cdot\tanh\big(SoftPlus(x)\big) = x\cdot\tanh\big(\ln(1+e^{x})\big)
 **Inputs**:
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**:
-* **1**: The result of element-wise *Mish* function applied to the input tensor. A tensor of type `T` and the same shape as input tensor.
+* **1**: The result of element-wise *Mish* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
 **Types**
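The *Mish* formula in the hunk header, `x * tanh(ln(1 + e^x))`, sketched as a scalar helper:

```python
import math

def mish(x):
    # Mish(x) = x * tanh(SoftPlus(x)) = x * tanh(ln(1 + exp(x)))
    return x * math.tanh(math.log1p(math.exp(x)))
```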

View File

@@ -31,15 +31,15 @@ Selu(x) = \lambda\cdot\big(\max(0, x) + \min(0, \alpha(e^{x}-1))\big)
 **Inputs**
-* **1**: `data`. A tensor of type `T` and arbitrary shape. **Required.**
+* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
-* **2**: `alpha`. 1D tensor with one element of type `T`. **Required.**
+* **2**: `alpha`. 1D tensor with one element of type *T*. **Required.**
-* **3**: `lambda`. 1D tensor with one element of type `T`. **Required.**
+* **3**: `lambda`. 1D tensor with one element of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise *Selu* function applied to `data` input tensor. A tensor of type `T` and the same shape as `data` input tensor.
+* **1**: The result of element-wise *Selu* function applied to `data` input tensor. A tensor of type *T* and the same shape as `data` input tensor.
 **Types**
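The *Selu* formula in the hunk header, sketched per element; the default `alpha`/`lambda` values below are the commonly used self-normalizing constants and are illustrative, not mandated by the spec (the op takes them as inputs):

```python
import math

def selu(x, alpha=1.6732632423543772, lam=1.0507009873554805):
    # Selu(x) = lambda * (max(0, x) + min(0, alpha * (exp(x) - 1)))
    return lam * (max(0.0, x) + min(0.0, alpha * (math.exp(x) - 1.0)))
```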

View File

@@ -8,7 +8,7 @@
 **Detailed description**
 *SoftPlus* operation is introduced in this [article](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.6419).
 *SoftPlus* performs element-wise activation function on a given input tensor, based on the following mathematical formula:
@@ -35,11 +35,11 @@ For example, if *T* is `fp32`, `threshold` should be `20` or if *T* is `fp16`, `
 **Inputs**:
-* **1**: A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required**.
 **Outputs**:
-* **1**: The result of element-wise *SoftPlus* function applied to the input tensor. A tensor of type `T` and the same shape as input tensor.
+* **1**: The result of element-wise *SoftPlus* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
 **Types**

View File

@@ -22,13 +22,13 @@ where β corresponds to `beta` scalar input.
 **Inputs**:
-* **1**: `data`. A tensor of type `T` and arbitrary shape. **Required**.
+* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required**.
-* **2**: `beta`. A non-negative scalar value of type `T`. Multiplication parameter for the sigmoid. Default value 1.0 is used. **Optional**.
+* **2**: `beta`. A non-negative scalar value of type *T*. Multiplication parameter for the sigmoid. Default value 1.0 is used. **Optional**.
 **Outputs**:
-* **1**: The result of element-wise *Swish* function applied to the input tensor `data`. A tensor of type `T` and the same shape as `data` input tensor.
+* **1**: The result of element-wise *Swish* function applied to the input tensor `data`. A tensor of type *T* and the same shape as `data` input tensor.
 **Types**
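A scalar sketch of *Swish* assuming the standard form `x * sigmoid(beta * x)` (the full formula sits outside the hunk shown above; `beta` defaults to 1.0 per the input description):

```python
import math

def swish(x, beta=1.0):
    # Swish(x) = x * sigmoid(beta * x)
    return x * (1.0 / (1.0 + math.exp(-beta * x)))
```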

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Abs-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Abs* performs element-wise the absolute value with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise abs operation. A tensor of type T.
+* **1**: The result of element-wise abs operation. A tensor of type *T*.
 **Types**
@@ -48,4 +48,3 @@ a_{i} = abs(a_{i})
 </output>
 </layer>
 ```

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Acos-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Acos* performs element-wise inverse cosine (arccos) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise acos operation. A tensor of type T.
+* **1**: The result of element-wise acos operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Acosh-3*
 **Category**: Arithmetic unary operation
 **Short description**: *Acosh* performs element-wise hyperbolic inverse cosine (arccosh) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise acosh operation. A tensor of type T.
+* **1**: The result of element-wise acosh operation. A tensor of type *T*.
 **Types**

View File

@@ -29,12 +29,12 @@ o_{i} = a_{i} + b_{i}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **1**: A tensor of type *T* and arbitrary shape and rank. **Required.**
-* **2**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape and rank. **Required.**
 **Outputs**
-* **1**: The result of element-wise addition operation. A tensor of type T with shape equal to broadcasted shape of the two inputs.
+* **1**: The result of element-wise addition operation. A tensor of type *T* with shape equal to broadcasted shape of the two inputs.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Asin-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Asin* performs element-wise inverse sine (arcsin) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise asin operation. A tensor of type T.
+* **1**: The result of element-wise asin operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Asinh-3*
 **Category**: Arithmetic unary operation
 **Short description**: *Asinh* performs element-wise hyperbolic inverse sine (arcsinh) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise asinh operation. A tensor of type T.
+* **1**: The result of element-wise asinh operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Atan-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Atan* performs element-wise inverse tangent (arctangent) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise atan operation. A tensor of type T.
+* **1**: The result of element-wise atan operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Atanh-3*
 **Category**: Arithmetic unary operation
 **Short description**: *Atanh* performs element-wise hyperbolic inverse tangent (arctangenth) operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise atanh operation. A tensor of type T.
+* **1**: The result of element-wise atanh operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Cosh-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Cosh* performs element-wise hyperbolic cosine operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise cosh operation. A tensor of type T.
+* **1**: The result of element-wise cosh operation. A tensor of type *T*.
 **Types**

View File

@@ -2,10 +2,10 @@
 **Versioned name**: *CumSum-3*
 **Category**: Arithmetic unary operation
 **Short description**: *CumSum* performs cumulative summation of the input elements along the given axis.
 **Detailed description**: By default, it will do the sum inclusively meaning the first element is copied as is. Through an "exclusive" attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to `true`.
 **Attributes**:
@@ -32,13 +32,13 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
-* **2**: Scalar axis of type T_AXIS. Negative value means counting dimensions from the back. Default value is 0. **Optional.**
+* **2**: Scalar axis of type *T_AXIS*. Negative value means counting dimensions from the back. Default value is 0. **Optional.**
 **Outputs**
-* **1**: Output tensor with cumulative sums of the input's elements. A tensor of type T of the same shape as 1st input.
+* **1**: Output tensor with cumulative sums of the input's elements. A tensor of type *T* of the same shape as 1st input.
 **Types**
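The `exclusive`/`reverse` semantics described in the *CumSum* hunk above can be sketched for a 1-D sequence (a simplification of the per-axis tensor behaviour):

```python
def cumsum(data, exclusive=False, reverse=False):
    # Cumulative sum over a 1-D sequence, mirroring the spec's
    # `exclusive` and `reverse` attributes.
    xs = list(reversed(data)) if reverse else list(data)
    out, running = [], 0
    for v in xs:
        if exclusive:
            out.append(running)  # exclude the current element
            running += v
        else:
            running += v
            out.append(running)  # inclusive: first element copied as is
    return list(reversed(out)) if reverse else out
```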

View File

@@ -41,12 +41,12 @@ The result of division by zero is undefined.
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **1**: A tensor of type *T* and arbitrary shape and rank. **Required.**
-* **2**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape and rank. **Required.**
 **Outputs**
-* **1**: The result of element-wise division operation. A tensor of type T with shape equal to broadcasted shape of the two inputs.
+* **1**: The result of element-wise division operation. A tensor of type *T* with shape equal to broadcasted shape of the two inputs.
 **Types**

View File

@@ -19,11 +19,11 @@ erf(x) = \pi^{-1} \int_{-x}^{x} e^{-t^2} dt
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise operation. A tensor of type T.
+* **1**: The result of element-wise operation. A tensor of type *T*.
 **Types**

View File

@@ -4,14 +4,14 @@
 **Category**: Arithmetic binary operation
 **Short description**: *FloorMod* performs an element-wise floor modulo operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.
 **Detailed description**
 As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to `auto_broadcast` attribute specification. As a second step *FloorMod* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:
 \f[
 o_{i} = a_{i} % b_{i}
 \f]
 *FloorMod* operation computes a reminder of a floored division. It is the same behaviour like in Python programming language: `floor(x / y) * y + floor_mod(x, y) = x`. The sign of the result is equal to a sign of a divisor. The result of division by zero is undefined.
@@ -29,12 +29,12 @@ o_{i} = a_{i} % b_{i}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise floor modulo operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise floor modulo operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
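The identity quoted in the *FloorMod* description above, `floor(x / y) * y + floor_mod(x, y) = x`, with the result's sign following the divisor, can be checked with a small sketch (Python's own `%` already has this behaviour):

```python
import math

def floor_mod(x, y):
    # Remainder of a floored division: x - floor(x / y) * y.
    # The sign of the result matches the sign of the divisor y.
    return x - math.floor(x / y) * y
```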

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Log-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Log* performs element-wise natural logarithm operation with given tensor.
@@ -18,11 +18,11 @@ a_{i} = log(a_{i})
 **Inputs**
-* **1**: An tensor of type T and arbitrary shape. **Required.**
+* **1**: An tensor of type *T* and arbitrary shape. **Required.**
 **Outputs**
-* **1**: The result of element-wise log operation. A tensor of type T and the same shape as input.
+* **1**: The result of element-wise log operation. A tensor of type *T* and the same shape as input.
 **Types**
@@ -47,4 +47,4 @@ a_{i} = log(a_{i})
 </port>
 </output>
 </layer>
 ```

View File

@@ -13,7 +13,7 @@ After broadcasting *Maximum* does the following with the input tensors *a* and *
 \f[
 o_{i} = max(a_{i}, b_{i})
 \f]
 **Attributes**:
@@ -29,12 +29,12 @@ o_{i} = max(a_{i}, b_{i})
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise maximum operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise maximum operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
@@ -92,4 +92,4 @@ o_{i} = max(a_{i}, b_{i})
 </port>
 </output>
 </layer>
 ```

View File

@@ -11,7 +11,7 @@ As a first step input tensors *a* and *b* are broadcasted if their shapes differ
 \f[
 o_{i} = min(a_{i}, b_{i})
 \f]
 **Attributes**:
@@ -27,12 +27,12 @@ o_{i} = min(a_{i}, b_{i})
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise minimum operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise minimum operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
@@ -90,4 +90,4 @@ o_{i} = min(a_{i}, b_{i})
 </port>
 </output>
 </layer>
 ```

View File

@@ -4,7 +4,7 @@
 **Category**: Arithmetic binary operation
 **Short description**: *Mod* performs an element-wise modulo operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.
 **Detailed description**
 As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to `auto_broadcast` attribute specification. As a second step *Mod* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:
@@ -30,12 +30,12 @@ o_{i} = a_{i} % b_{i}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise modulo operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise modulo operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
@@ -93,4 +93,4 @@ o_{i} = a_{i} % b_{i}
 </port>
 </output>
 </layer>
 ```

View File

@@ -29,12 +29,12 @@ o_{i} = a_{i} * b_{i}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **1**: A tensor of type *T* and arbitrary shape and rank. **Required.**
-* **2**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape and rank. **Required.**
 **Outputs**
-* **1**: The result of element-wise multiplication operation. A tensor of type T with shape equal to broadcasted shape of the two inputs.
+* **1**: The result of element-wise multiplication operation. A tensor of type *T* with shape equal to broadcasted shape of the two inputs.
 **Types**
@@ -93,4 +93,4 @@ o_{i} = a_{i} * b_{i}
 </port>
 </output>
 </layer>
 ```

View File

@@ -11,7 +11,7 @@ As a first step input tensors *a* and *b* are broadcasted if their shapes differ
 \f[
 o_{i} = {a_{i} ^ b_{i}}
 \f]
 **Attributes**:
@@ -27,12 +27,12 @@ o_{i} = {a_{i} ^ b_{i}}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise power operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise power operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
@@ -91,4 +91,4 @@ o_{i} = {a_{i} ^ b_{i}}
 </port>
 </output>
 </layer>
 ```

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Sign-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Sign* performs element-wise sign operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise sign operation. A tensor of type T with mapped elements of the input tensor to -1 (if it is negative), 0 (if it is zero), or 1 (if it is positive).
+* **1**: The result of element-wise sign operation. A tensor of type *T* with mapped elements of the input tensor to -1 (if it is negative), 0 (if it is zero), or 1 (if it is positive).
 **Types**
@@ -47,4 +47,4 @@ a_{i} = sign(a_{i})
 </port>
 </output>
 </layer>
 ```

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Sin-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Sin* performs element-wise sine operation with given tensor.
@@ -19,11 +19,11 @@ a - value representing angle in radians.
 **Inputs**
-* **1**: An tensor of type T and arbitrary rank. **Required.**
+* **1**: An tensor of type *T* and arbitrary rank. **Required.**
 **Outputs**
-* **1**: The result of element-wise sin operation. A tensor of type T.
+* **1**: The result of element-wise sin operation. A tensor of type *T*.
 **Types**

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Sinh-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Sinh* performs element-wise hyperbolic sine (sinh) operation with given tensor.
@@ -12,7 +12,7 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**

View File

@@ -11,7 +11,7 @@ As a first step input tensors *a* and *b* are broadcasted if their shapes differ
 \f[
 o_{i} = (a_{i} - b_{i})^2
 \f]
 **Attributes**:
@@ -27,12 +27,12 @@ o_{i} = (a_{i} - b_{i})^2
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type T and arbitrary shape. Required.
+* **2**: A tensor of type *T* and arbitrary shape. Required.
 **Outputs**
-* **1**: The result of element-wise subtract and square the result operation. A tensor of type T with shape equal to broadcasted shape of two inputs.
+* **1**: The result of element-wise subtract and square the result operation. A tensor of type *T* with shape equal to broadcasted shape of two inputs.
 **Types**
@@ -89,4 +89,4 @@ o_{i} = (a_{i} - b_{i})^2
 </port>
 </output>
 </layer>
 ```
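The element-wise formula `o_i = (a_i - b_i)^2` from the *SquaredDifference* hunk above, sketched over two equal-length sequences (broadcasting omitted):

```python
def squared_difference(a, b):
    # o_i = (a_i - b_i)^2, element-wise
    return [(x - y) ** 2 for x, y in zip(a, b)]
```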

View File

@@ -29,12 +29,12 @@ o_{i} = a_{i} - b_{i}
 **Inputs**
-* **1**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **1**: A tensor of type *T* and arbitrary shape and rank. **Required.**
-* **2**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape and rank. **Required.**
 **Outputs**
-* **1**: The result of element-wise subtraction operation. A tensor of type T with shape equal to broadcasted shape of the two inputs.
+* **1**: The result of element-wise subtraction operation. A tensor of type *T* with shape equal to broadcasted shape of the two inputs.
 **Types**
@@ -91,4 +91,4 @@ o_{i} = a_{i} - b_{i}
 </port>
 </output>
 </layer>
 ```

View File

@@ -2,7 +2,7 @@
 **Versioned name**: *Tan-1*
 **Category**: Arithmetic unary operation
 **Short description**: *Tan* performs element-wise tangent operation with given tensor.
@@ -12,11 +12,11 @@
 **Inputs**
-* **1**: An tensor of type T. **Required.**
+* **1**: An tensor of type *T*. **Required.**
 **Outputs**
-* **1**: The result of element-wise tan operation. A tensor of type T.
+* **1**: The result of element-wise tan operation. A tensor of type *T*.
 **Types**
@@ -48,4 +48,3 @@ a_{i} = tan(a_{i})
 </output>
 </layer>
 ```

View File

@@ -20,8 +20,8 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
-* **2**: A tensor of type T. **Required.**
+* **2**: A tensor of type *T*. **Required.**
 **Outputs**
@@ -40,7 +40,7 @@ After broadcasting *Equal* does the following with the input tensors *a* and *b*
 o_{i} = a_{i} == b_{i}
 \f]
 **Examples**
 *Example 1*

View File

@@ -20,8 +20,8 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
-* **2**: A tensor of type T. **Required.**
+* **2**: A tensor of type *T*. **Required.**
 **Outputs**

View File

@@ -20,8 +20,8 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
-* **2**: A tensor of type T. **Required.**
+* **2**: A tensor of type *T*. **Required.**
 **Outputs**

View File

@@ -20,8 +20,8 @@
 **Inputs**
-* **1**: A tensor of type T. **Required.**
+* **1**: A tensor of type *T*. **Required.**
-* **2**: A tensor of type T. **Required.**
+* **2**: A tensor of type *T*. **Required.**
 **Outputs**
@@ -90,4 +90,4 @@ o_{i} = a_{i} <= b_{i}
 </port>
 </output>
 </layer>
 ```

View File

@@ -20,8 +20,8 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required.** * **1**: A tensor of type *T*. **Required.**
* **2**: A tensor of type T. **Required.** * **2**: A tensor of type *T*. **Required.**
**Outputs** **Outputs**
@@ -90,4 +90,4 @@ o_{i} = a_{i} < b_{i}
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -20,8 +20,8 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required.** * **1**: A tensor of type *T*. **Required.**
* **2**: A tensor of type T. **Required.** * **2**: A tensor of type *T*. **Required.**
**Outputs** **Outputs**
@@ -40,7 +40,7 @@ After broadcasting *NotEqual* does the following with the input tensors *a* and
o_{i} = a_{i} != b_{i} o_{i} = a_{i} != b_{i}
\f] \f]
**Examples** **Examples**
*Example 1* *Example 1*
@@ -90,4 +90,4 @@ o_{i} = a_{i} != b_{i}
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -6,7 +6,7 @@
**Short description**: *RegionYolo* computes the coordinates of regions with probability for each class. **Short description**: *RegionYolo* computes the coordinates of regions with probability for each class.
**Detailed description**: This operation is directly mapped to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper. **Detailed description**: This operation is directly mapped to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper.
**Attributes**: **Attributes**:
@@ -78,13 +78,13 @@
**Inputs**: **Inputs**:
* **1**: `data` - 4D tensor of type `T` and shape `[N, C, H, W]`. **Required.** * **1**: `data` - 4D tensor of type *T* and shape `[N, C, H, W]`. **Required.**
**Outputs**: **Outputs**:
* **1**: tensor of type `T` and rank 4 or less that codes detected regions. Refer to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper to decode the output as boxes. `anchors` should be used to decode real box coordinates. If `do_softmax` is set to `0`, then the output shape is `[N, (classes + coords + 1) * len(mask), H, W]`. If `do_softmax` is set to `1`, then output shape is partially flattened and defined in the following way: * **1**: tensor of type *T* and rank 4 or less that codes detected regions. Refer to the [YOLO9000: Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf) paper to decode the output as boxes. `anchors` should be used to decode real box coordinates. If `do_softmax` is set to `0`, then the output shape is `[N, (classes + coords + 1) * len(mask), H, W]`. If `do_softmax` is set to `1`, then output shape is partially flattened and defined in the following way:
`flat_dim = data.shape[axis] * data.shape[axis+1] * ... * data.shape[end_axis]` `flat_dim = data.shape[axis] * data.shape[axis+1] * ... * data.shape[end_axis]`
`output.shape = [data.shape[0], ..., data.shape[axis-1], flat_dim, data.shape[end_axis + 1], ...]` `output.shape = [data.shape[0], ..., data.shape[axis-1], flat_dim, data.shape[end_axis + 1], ...]`
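The partial flattening for `do_softmax = 1` can be sketched as a small shape helper (a hypothetical function, assuming non-negative `axis` and `end_axis`):

```python
from functools import reduce

def regionyolo_flattened_shape(data_shape, axis, end_axis):
    # flat_dim is the product of data_shape[axis] .. data_shape[end_axis];
    # those dimensions collapse into one, the rest are kept as-is.
    flat_dim = reduce(lambda x, y: x * y, data_shape[axis:end_axis + 1], 1)
    return data_shape[:axis] + [flat_dim] + data_shape[end_axis + 1:]

# e.g. [N, C, H, W] = [1, 255, 26, 26] with axis=1, end_axis=3
shape = regionyolo_flattened_shape([1, 255, 26, 26], 1, 3)  # [1, 172380]
```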
**Types** **Types**
@@ -133,4 +133,4 @@
</output> </output>
</layer> </layer>
``` ```

View File

@@ -12,13 +12,13 @@ No attributes available.
**Inputs**: **Inputs**:
* **1**: "start" - A scalar of type T. **Required.** * **1**: "start" - A scalar of type *T*. **Required.**
* **2**: "stop" - A scalar of type T. **Required.** * **2**: "stop" - A scalar of type *T*. **Required.**
* **3**: "step" - A scalar of type T. **Required.** * **3**: "step" - A scalar of type *T*. **Required.**
**Outputs**: **Outputs**:
* **1**: A tensor of type T. * **1**: A tensor of type *T*.
**Types** **Types**
@@ -87,4 +87,3 @@ val[i]=start+i*step
</output> </output>
</layer> </layer>
``` ```
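The rule `val[i] = start + i * step` with an exclusive `stop` bound can be sketched as follows (illustrative only, not the device kernel):

```python
def range_op(start, stop, step):
    # Emits val[i] = start + i * step while the value has not reached
    # `stop` (exclusive); step direction decides the comparison.
    if step == 0:
        raise ValueError("step must be non-zero")
    out, i = [], 0
    while (step > 0 and start + i * step < stop) or (step < 0 and start + i * step > stop):
        out.append(start + i * step)
        i += 1
    return out

vals = range_op(2, 10, 3)  # [2, 5, 8]
```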

View File

@@ -18,9 +18,9 @@
**Inputs**: **Inputs**:
* **1**: "start" - A scalar of type T1. **Required.** * **1**: "start" - A scalar of type *T1*. **Required.**
* **2**: "stop" - A scalar of type T2. **Required.** * **2**: "stop" - A scalar of type *T2*. **Required.**
* **3**: "step" - A scalar of type T3. If `step` is equal to zero after casting to `output_type`, behavior is undefined. **Required.** * **3**: "step" - A scalar of type *T3*. If `step` is equal to zero after casting to `output_type`, behavior is undefined. **Required.**
**Outputs**: **Outputs**:
@@ -124,4 +124,3 @@ This is aligned with PyTorch's operation `torch.arange`, to align with tensorflo
</output> </output>
</layer> </layer>
``` ```

View File

@@ -89,13 +89,13 @@
**Inputs** **Inputs**
* **1**: `data` - tensor of type `T` with data for interpolation. **Required.** * **1**: `data` - tensor of type *T* with data for interpolation. **Required.**
* **2**: `sizes` - 1D tensor of type `T_SIZE` describing output shape for spatial axes. Number of elements matches the number of indices in `axes` input, the order matches as well. **Required.** * **2**: `sizes` - 1D tensor of type *T_SIZE* describing output shape for spatial axes. Number of elements matches the number of indices in `axes` input, the order matches as well. **Required.**
* **3**: `scales` - 1D tensor of type `T_SCALES` describing scales for spatial axes. Number and order of elements match the number and order of indices in `axes` input. **Required.** * **3**: `scales` - 1D tensor of type *T_SCALES* describing scales for spatial axes. Number and order of elements match the number and order of indices in `axes` input. **Required.**
* **4**: `axes` - 1D tensor of type `T_AXES` specifying dimension indices where interpolation is applied, and `axes` is any unordered list of indices of different dimensions of the input tensor, e.g. `[0, 4]`, `[4, 0]`, `[4, 2, 1]`, `[1, 2, 3]`. These indices should be non-negative integers from `0` to `rank(data) - 1` inclusively. Other dimensions do not change. The order of elements in the `axes` input matters, and is mapped directly to elements in the 2nd input `sizes`. **Optional** with default value `[0,...,rank(data) - 1]`. * **4**: `axes` - 1D tensor of type *T_AXES* specifying dimension indices where interpolation is applied, and `axes` is any unordered list of indices of different dimensions of the input tensor, e.g. `[0, 4]`, `[4, 0]`, `[4, 2, 1]`, `[1, 2, 3]`. These indices should be non-negative integers from `0` to `rank(data) - 1` inclusively. Other dimensions do not change. The order of elements in the `axes` input matters, and is mapped directly to elements in the 2nd input `sizes`. **Optional** with default value `[0,...,rank(data) - 1]`.
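The sizes-to-axes mapping can be sketched as a shape helper (a simplified illustration: padding attributes such as `pads_begin`/`pads_end` are ignored here):

```python
def interpolate_output_shape(data_shape, sizes, axes):
    # Each dimension axes[i] gets the target size sizes[i];
    # all other dimensions stay unchanged.
    out = list(data_shape)
    for axis, size in zip(axes, sizes):
        out[axis] = size
    return out

out_shape = interpolate_output_shape([1, 3, 10, 10], sizes=[20, 30], axes=[2, 3])  # [1, 3, 20, 30]
```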
**Outputs** **Outputs**

View File

@@ -6,13 +6,13 @@
**Short description**: *Result* layer specifies output of the model. **Short description**: *Result* layer specifies output of the model.
**Attributes**: **Attributes**:
No attributes available. No attributes available.
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required.** * **1**: A tensor of type *T*. **Required.**
**Types** **Types**
@@ -31,4 +31,4 @@
</port> </port>
</input> </input>
</layer> </layer>
``` ```

View File

@@ -20,8 +20,8 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required**. * **1**: A tensor of type *T*. **Required**.
* **2**: A tensor of type T. **Required**. * **2**: A tensor of type *T*. **Required**.
**Outputs** **Outputs**
@@ -90,4 +90,4 @@ o_{i} = a_{i} and b_{i}
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -2,7 +2,7 @@
**Versioned name**: *LogicalNot-1* **Versioned name**: *LogicalNot-1*
**Category**: Logical unary operation **Category**: Logical unary operation
**Short description**: *LogicalNot* performs element-wise logical negation operation with given tensor. **Short description**: *LogicalNot* performs element-wise logical negation operation with given tensor.
@@ -12,11 +12,11 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required.** * **1**: A tensor of type *T*. **Required.**
**Outputs** **Outputs**
* **1**: The result of element-wise logical negation operation. A tensor of type T. * **1**: The result of element-wise logical negation operation. A tensor of type *T*.
**Types** **Types**
@@ -47,4 +47,4 @@ a_{i} = not(a_{i})
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -20,8 +20,8 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required**. * **1**: A tensor of type *T*. **Required**.
* **2**: A tensor of type T. **Required**. * **2**: A tensor of type *T*. **Required**.
**Outputs** **Outputs**
@@ -90,4 +90,4 @@ o_{i} = a_{i} or b_{i}
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -20,12 +20,12 @@
**Inputs** **Inputs**
* **1**: A tensor of type T. **Required**. * **1**: A tensor of type *T*. **Required**.
* **2**: A tensor of type T. **Required**. * **2**: A tensor of type *T*. **Required**.
**Outputs** **Outputs**
* **1**: The result of element-wise logical XOR operation. A tensor of type T. * **1**: The result of element-wise logical XOR operation. A tensor of type *T*.
**Types** **Types**
@@ -90,4 +90,4 @@ o_{i} = a_{i} xor b_{i}
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -48,11 +48,11 @@ If `mode = depth_first`:
**Inputs** **Inputs**
* **1**: `data` - input tensor of type T with rank >= 3. **Required**. * **1**: `data` - input tensor of type *T* with rank >= 3. **Required**.
**Outputs** **Outputs**
* **1**: permuted tensor of type T and shape `[N, C / block_size ^ K, D1 * block_size, D2 * block_size, ..., DK * block_size]`. * **1**: permuted tensor of type *T* and shape `[N, C / block_size ^ K, D1 * block_size, D2 * block_size, ..., DK * block_size]`.
**Types** **Types**
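For a 4D input, the `depth_first` rearrangement can be sketched as a reshape/transpose/reshape sequence (a minimal numpy sketch under the assumption that input channel `c * block_size^2 + i * block_size + j` maps to output position `(h * block_size + i, w * block_size + j)`):

```python
import numpy as np

def depth_to_space_depth_first(data, block_size):
    # NCHW input; move block elements from the channel dimension
    # into the spatial dimensions.
    n, c, h, w = data.shape
    b = block_size
    x = data.reshape(n, c // (b * b), b, b, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)  # interleave block rows/cols into H and W
    return x.reshape(n, c // (b * b), h * b, w * b)

x = np.arange(4).reshape(1, 4, 1, 1)
y = depth_to_space_depth_first(x, 2)  # y[0, 0] == [[0, 1], [2, 3]]
```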

View File

@@ -21,13 +21,13 @@ for batch in range(BATCH_SIZE):
parent = parent_idx[max_sequence_in_beam - 1, batch, beam] parent = parent_idx[max_sequence_in_beam - 1, batch, beam]
final_idx[max_sequence_in_beam - 1, batch, beam] = step_idx[max_sequence_in_beam - 1, batch, beam] final_idx[max_sequence_in_beam - 1, batch, beam] = step_idx[max_sequence_in_beam - 1, batch, beam]
for level in reversed(range(max_sequence_in_beam - 1)): for level in reversed(range(max_sequence_in_beam - 1)):
final_idx[level, batch, beam] = step_idx[level, batch, parent] final_idx[level, batch, beam] = step_idx[level, batch, parent]
parent = parent_idx[level, batch, parent] parent = parent_idx[level, batch, parent]
# For a given beam, past the time step containing the first decoded end_token # For a given beam, past the time step containing the first decoded end_token
# all values are filled in with end_token. # all values are filled in with end_token.
finished = False finished = False
for time in range(max_sequence_in_beam): for time in range(max_sequence_in_beam):
@@ -43,18 +43,18 @@ Element data types for all input tensors should match each other.
**Inputs** **Inputs**
* **1**: `step_ids` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type `T` with indices for each step. Required. * **1**: `step_ids` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type *T* with indices for each step. Required.
* **2**: `parent_idx` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type `T` with parent beam indices. Required. * **2**: `parent_idx` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type *T* with parent beam indices. Required.
* **3**: `max_seq_len` -- a tensor of shape `[BATCH_SIZE]` of type `T` with maximum lengths for each sequence in the batch. Required. * **3**: `max_seq_len` -- a tensor of shape `[BATCH_SIZE]` of type *T* with maximum lengths for each sequence in the batch. Required.
* **4**: `end_token` -- a scalar tensor of type `T` with value of the end marker in a sequence. Required. * **4**: `end_token` -- a scalar tensor of type *T* with value of the end marker in a sequence. Required.
**Outputs** **Outputs**
* **1**: `final_idx` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type `T`. * **1**: `final_idx` -- a tensor of shape `[MAX_TIME, BATCH_SIZE, BEAM_WIDTH]` of type *T*.
**Types** **Types**
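The backtracking described by the pseudocode above can be completed into a runnable sketch (illustrative only; it follows `max_sequence_in_beam = min(MAX_TIME, max_seq_len[batch])` as in the spec):

```python
import numpy as np

def gather_tree(step_ids, parent_idx, max_seq_len, end_token):
    # Walk each beam backwards through parent_idx to reconstruct full paths.
    max_time, batch_size, beam_width = step_ids.shape
    final_idx = np.full_like(step_ids, end_token)
    for batch in range(batch_size):
        max_sequence_in_beam = min(max_time, int(max_seq_len[batch]))
        if max_sequence_in_beam <= 0:
            continue
        for beam in range(beam_width):
            parent = int(parent_idx[max_sequence_in_beam - 1, batch, beam])
            final_idx[max_sequence_in_beam - 1, batch, beam] = \
                step_ids[max_sequence_in_beam - 1, batch, beam]
            for level in reversed(range(max_sequence_in_beam - 1)):
                final_idx[level, batch, beam] = step_ids[level, batch, parent]
                parent = int(parent_idx[level, batch, parent])
            # Past the first decoded end_token, fill the rest with end_token.
            finished = False
            for time in range(max_sequence_in_beam):
                if finished:
                    final_idx[time, batch, beam] = end_token
                elif final_idx[time, batch, beam] == end_token:
                    finished = True
    return final_idx

step_ids = np.array([[[2]], [[3]], [[4]]])  # MAX_TIME=3, BATCH=1, BEAM=1
final = gather_tree(step_ids, np.zeros_like(step_ids), np.array([3]), end_token=9)
```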

View File

@@ -29,13 +29,13 @@ Where D is the rank of input tensor `data`. The axis being split must be evenly
**Inputs** **Inputs**
* **1**: `data`. A tensor of type `T` and arbitrary shape. **Required.** * **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axis`. Axis along `data` to split. A scalar of type `T_AXIS` within the range `[-rank(data), rank(data) - 1]`. Negative values address dimensions from the end. **Required.** * **2**: `axis`. Axis along `data` to split. A scalar of type *T_AXIS* within the range `[-rank(data), rank(data) - 1]`. Negative values address dimensions from the end. **Required.**
* **Note**: The dimension of input tensor `data` shape along `axis` must be evenly divisible by *num_splits* attribute. * **Note**: The dimension of input tensor `data` shape along `axis` must be evenly divisible by *num_splits* attribute.
**Outputs** **Outputs**
* **Multiple outputs**: Tensors of type `T`. The i-th output has the same shape as `data` input tensor except for dimension along `axis` which is `data.shape[axis]/num_splits`. * **Multiple outputs**: Tensors of type *T*. The i-th output has the same shape as `data` input tensor except for dimension along `axis` which is `data.shape[axis]/num_splits`.
**Types** **Types**
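numpy's `split` has the same semantics and gives a quick illustration:

```python
import numpy as np

# Split a [2, 6] tensor into num_splits=3 outputs along axis=1;
# each output keeps the other dimensions and gets 6 / 3 = 2 along the axis.
data = np.arange(12).reshape(2, 6)
outputs = np.split(data, 3, axis=1)
shapes = [o.shape for o in outputs]  # [(2, 2), (2, 2), (2, 2)]
```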
@@ -78,4 +78,4 @@ Where D is the rank of input tensor `data`. The axis being split must be evenly
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -4,7 +4,7 @@
**Category**: Data movement operation **Category**: Data movement operation
**Short description**: *StridedSlice* extracts a strided slice of a tensor. **Short description**: *StridedSlice* extracts a strided slice of a tensor.
**Attributes** **Attributes**
@@ -50,22 +50,22 @@
**Inputs**: **Inputs**:
* **1**: `data` - input tensor to be sliced of type `T` and arbitrary shape. **Required.** * **1**: `data` - input tensor to be sliced of type *T* and arbitrary shape. **Required.**
* **2**: `begin` - 1D tensor of type `T_IND` with begin indexes for input tensor slicing. **Required.** * **2**: `begin` - 1D tensor of type *T_IND* with begin indexes for input tensor slicing. **Required.**
Out-of-bounds values are silently clamped. If `begin_mask[i]` is `1`, the value of `begin[i]` is ignored and the range of the appropriate dimension starts from `0`. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `begin[0]=-1` means `begin[0]=3`. Out-of-bounds values are silently clamped. If `begin_mask[i]` is `1`, the value of `begin[i]` is ignored and the range of the appropriate dimension starts from `0`. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `begin[0]=-1` means `begin[0]=3`.
* **3**: `end` - 1D tensor of type `T_IND` with end indexes for input tensor slicing. **Required.** * **3**: `end` - 1D tensor of type *T_IND* with end indexes for input tensor slicing. **Required.**
Out-of-bounds values will be silently clamped. If `end_mask[i]` is `1`, the value of `end[i]` is ignored and the full range of the appropriate dimension is used instead. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `end[0]=-1` means `end[0]=3`. Out-of-bounds values will be silently clamped. If `end_mask[i]` is `1`, the value of `end[i]` is ignored and the full range of the appropriate dimension is used instead. Negative values mean indexing starts from the end. For example, if `data=[1,2,3]`, `end[0]=-1` means `end[0]=3`.
* **4**: `stride` - 1D tensor of type `T_IND` with strides. **Optional.** * **4**: `stride` - 1D tensor of type *T_IND* with strides. **Optional.**
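For a single dimension, the `begin`/`end`/`stride` handling above maps naturally onto Python slices, which already clamp out-of-bounds values and interpret negatives from the end. A hypothetical one-dimension sketch (it ignores `new_axis_mask`, `shrink_axis_mask`, and `ellipsis_mask`):

```python
def dim_slice(begin, end, stride, begin_mask_bit, end_mask_bit):
    # A mask bit of 1 means "ignore the value": begin falls back to the
    # start of the range and end to the full range.
    b = None if begin_mask_bit == 1 else begin
    e = None if end_mask_bit == 1 else end
    return slice(b, e, stride)

data = [1, 2, 3, 4, 5]
r1 = data[dim_slice(1, -1, 1, 0, 0)]  # [2, 3, 4]
r2 = data[dim_slice(1, -1, 1, 1, 1)]  # [1, 2, 3, 4, 5] - both values ignored
```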
**Types** **Types**
* *T*: any supported type. * *T*: any supported type.
* *T_IND*: any supported integer type. * *T_IND*: any supported integer type.
**Example** **Example**
Example of `begin_mask` & `end_mask` usage. Example of `begin_mask` & `end_mask` usage.
```xml ```xml
<layer ... type="StridedSlice" ...> <layer ... type="StridedSlice" ...>
<data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="1,1,0" new_axis_mask="0,0,0" shrink_axis_mask="0,0,0"/> <data begin_mask="0,1,1" ellipsis_mask="0,0,0" end_mask="1,1,0" new_axis_mask="0,0,0" shrink_axis_mask="0,0,0"/>
@@ -86,7 +86,7 @@ Example of `begin_mask` & `end_mask` usage.
</port> </port>
</input> </input>
<output> <output>
<port id="4"> <port id="4">
<dim>1</dim> <dim>1</dim>
<dim>3</dim> <dim>3</dim>
<dim>2</dim> <dim>2</dim>

View File

@@ -4,7 +4,7 @@
**Category**: Data movement **Category**: Data movement
**Short description**: *Tile* operation repeats an input tensor *"data"* the number of times given by *"repeats"* input tensor along each dimension. **Short description**: *Tile* operation repeats an input tensor *"data"* the number of times given by *"repeats"* input tensor along each dimension.
* If the number of elements in *"repeats"* is greater than the rank of *"data"*, then *"data"* is promoted to match *"repeats"* by prepending new axes, e.g. if the shape of *"data"* is (2, 3) and *"repeats"* is [2, 2, 2], then the shape of *"data"* is promoted to (1, 2, 3) and the result shape is (2, 4, 6). * If the number of elements in *"repeats"* is greater than the rank of *"data"*, then *"data"* is promoted to match *"repeats"* by prepending new axes, e.g. if the shape of *"data"* is (2, 3) and *"repeats"* is [2, 2, 2], then the shape of *"data"* is promoted to (1, 2, 3) and the result shape is (2, 4, 6).
* If the number of elements in *"repeats"* is less than the rank of *"data"*, then *"repeats"* is promoted to match *"data"* by prepending 1's, e.g. if the shape of *"data"* is (4, 2, 3) and *"repeats"* is [2, 2], then *"repeats"* is promoted to [1, 2, 2] and the result shape is (4, 4, 6) * If the number of elements in *"repeats"* is less than the rank of *"data"*, then *"repeats"* is promoted to match *"data"* by prepending 1's, e.g. if the shape of *"data"* is (4, 2, 3) and *"repeats"* is [2, 2], then *"repeats"* is promoted to [1, 2, 2] and the result shape is (4, 4, 6)
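numpy's `tile` implements the same promotion rules, so both cases above can be checked directly:

```python
import numpy as np

# data (2, 3) with repeats [2, 2, 2]: data promoted to (1, 2, 3) first.
shape_a = np.tile(np.arange(6).reshape(2, 3), [2, 2, 2]).shape  # (2, 4, 6)

# data (4, 2, 3) with repeats [2, 2]: repeats promoted to [1, 2, 2].
shape_b = np.tile(np.zeros((4, 2, 3)), [2, 2]).shape            # (4, 4, 6)
```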
@@ -14,12 +14,12 @@ No attributes available.
**Inputs**: **Inputs**:
* **1**: "data" - an input tensor to be tiled. A tensor of type T1. **Required.** * **1**: "data" - an input tensor to be tiled. A tensor of type *T1*. **Required.**
* **2**: "repeats" - a per-dimension replication factor. For example, *repeats* equal to 88 means that the output tensor gets 88 copies of data from the specified axis. A tensor of type T2. **Required.** * **2**: "repeats" - a per-dimension replication factor. For example, *repeats* equal to 88 means that the output tensor gets 88 copies of data from the specified axis. A tensor of type *T2*. **Required.**
**Outputs**: **Outputs**:
* **1**: The count of dimensions in result shape will be equal to the maximum from count of dimensions in "data" shape and number of elements in "repeats". A tensor with type matching 1st tensor. * **1**: The count of dimensions in result shape will be equal to the maximum from count of dimensions in "data" shape and number of elements in "repeats". A tensor with type matching 1st tensor.
**Types** **Types**
@@ -89,7 +89,7 @@ No attributes available.
<layer ... type="Tile"> <layer ... type="Tile">
<input> <input>
<port id="0"> <port id="0">
<dim>5</dim> <dim>5</dim>
<dim>2</dim> <dim>2</dim>
<dim>3</dim> <dim>3</dim>
<dim>4</dim> <dim>4</dim>
@@ -100,11 +100,11 @@ No attributes available.
</input> </input>
<output> <output>
<port id="2"> <port id="2">
<dim>5</dim> <dim>5</dim>
<dim>2</dim> <dim>2</dim>
<dim>6</dim> <dim>6</dim>
<dim>12</dim> <dim>12</dim>
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -6,7 +6,7 @@
**Short description**: *Transpose* operation reorders the input tensor dimensions. **Short description**: *Transpose* operation reorders the input tensor dimensions.
**Detailed description**: *Transpose* operation reorders the input tensor dimensions. Source indexes and destination indexes are bound by the formula: **Detailed description**: *Transpose* operation reorders the input tensor dimensions. Source indexes and destination indexes are bound by the formula:
\f[output[i(order[0]), i(order[1]), ..., i(order[N-1])] = input[i(0), i(1), ..., i(N-1)]\\ \quad \textrm{where} \quad i(j) \quad\textrm{is in the range} \quad [0, (input.shape[j]-1)]\f] \f[output[i(order[0]), i(order[1]), ..., i(order[N-1])] = input[i(0), i(1), ..., i(N-1)]\\ \quad \textrm{where} \quad i(j) \quad\textrm{is in the range} \quad [0, (input.shape[j]-1)]\f]
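numpy's `transpose` follows the same convention (output axis `j` takes input axis `input_order[j]`, and an omitted/empty order inverts the axes), so the formula can be checked quickly:

```python
import numpy as np

x = np.zeros((2, 3, 4))
s1 = np.transpose(x, (2, 0, 1)).shape  # (4, 2, 3): output dim j = input dim order[j]
s2 = np.transpose(x).shape             # (4, 3, 2): empty order inverts the axes
```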
@@ -14,12 +14,12 @@
**Inputs**: **Inputs**:
* **1**: `arg` - the tensor to be transposed. A tensor of type `T` and arbitrary shape. **Required.** * **1**: `arg` - the tensor to be transposed. A tensor of type *T* and arbitrary shape. **Required.**
* **2**: `input_order` - the permutation to apply to the axes of the first input shape. A 1D tensor of `n` elements `T_AXIS` type and shape `[n]`, where `n` is the rank of the first input or `0`. The tensor's value must contain every integer in the range `[0, n-1]`, but if an empty tensor is specified (shape `[0]`), then the axes will be inverted. **Required.** * **2**: `input_order` - the permutation to apply to the axes of the first input shape. A 1D tensor of `n` elements *T_AXIS* type and shape `[n]`, where `n` is the rank of the first input or `0`. The tensor's value must contain every integer in the range `[0, n-1]`, but if an empty tensor is specified (shape `[0]`), then the axes will be inverted. **Required.**
**Outputs**: **Outputs**:
* **1**: A tensor of type `T` and transposed shape according to the rules specified above. * **1**: A tensor of type *T* and transposed shape according to the rules specified above.
**Types** **Types**
@@ -67,7 +67,7 @@
<dim>0</dim> <!-- input_order is an empty 1D tensor --> <dim>0</dim> <!-- input_order is an empty 1D tensor -->
</port> </port>
</input> </input>
<output> <output>
<port id="2"> <port id="2">
<dim>4</dim> <dim>4</dim>
<dim>3</dim> <dim>3</dim>
@@ -75,4 +75,4 @@
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -63,13 +63,13 @@ output = data / (bias + (alpha / size ** len(axes)) * sqr_sum) ** beta
**Inputs** **Inputs**
* **1**: `data` - tensor of type `T` and arbitrary shape. **Required.** * **1**: `data` - tensor of type *T* and arbitrary shape. **Required.**
* **2**: `axes` - 1D tensor of type `T_IND` which specifies indices of dimensions in `data` which define normalization slices. **Required.** * **2**: `axes` - 1D tensor of type *T_IND* which specifies indices of dimensions in `data` which define normalization slices. **Required.**
**Outputs** **Outputs**
* **1**: Output tensor of type `T` and the same shape as the `data` input tensor. * **1**: Output tensor of type *T* and the same shape as the `data` input tensor.
**Types** **Types**
* *T*: any supported floating point type. * *T*: any supported floating point type.
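For the common case of a single normalization axis (the channel axis, `axes = [1]`), the formula can be sketched as below. This is illustrative only: the exact window placement at the borders is an assumption here (a window of `size` channels roughly centred on each channel, clipped at the edges).

```python
import numpy as np

def lrn_channels(data, size, alpha, beta, bias):
    # data: [N, C, ...]; sqr_sum is the clipped sliding sum of squares over
    # `size` channels; the exponent uses alpha / size ** len(axes), len(axes) == 1.
    c = data.shape[1]
    half = size // 2
    sq = data ** 2
    out = np.empty_like(data)
    for ch in range(c):
        lo, hi = max(0, ch - half), min(c, ch - half + size)
        sqr_sum = sq[:, lo:hi].sum(axis=1)
        out[:, ch] = data[:, ch] / (bias + (alpha / size) * sqr_sum) ** beta
    return out

r = lrn_channels(np.array([[1.0, 2.0, 3.0]]), size=3, alpha=1.0, beta=1.0, bias=1.0)
# r[0, 0] == 1 / (1 + (1 + 4) / 3) == 0.375
```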

View File

@@ -6,7 +6,7 @@
**Short description**: Performs max pooling operation on input. **Short description**: Performs max pooling operation on input.
**Detailed description**: Input shape can be either 3D, 4D or 5D. Max Pooling is performed with respect to the input shape from the third dimension to the last dimension. If paddings are used, their values during the pooling calculation are `-inf`. The Max Pooling operation involves sliding a filter over each channel of the feature map and downsampling by choosing the largest value within the region covered by the filter. [Article about max pooling in Convolutional Networks](https://deeplizard.com/learn/video/ZjM_XQa5s6s). **Detailed description**: Input shape can be either 3D, 4D or 5D. Max Pooling is performed with respect to the input shape from the third dimension to the last dimension. If paddings are used, their values during the pooling calculation are `-inf`. The Max Pooling operation involves sliding a filter over each channel of the feature map and downsampling by choosing the largest value within the region covered by the filter. [Article about max pooling in Convolutional Networks](https://deeplizard.com/learn/video/ZjM_XQa5s6s).
**Attributes**: *Pooling* attributes are specified in the `data` node, which is a child of the layer node. **Attributes**: *Pooling* attributes are specified in the `data` node, which is a child of the layer node.
@@ -67,7 +67,7 @@
**Inputs**: **Inputs**:
* **1**: 3D, 4D or 5D input tensor of type T. Required. * **1**: 3D, 4D or 5D input tensor of type *T*. Required.
**Outputs**: **Outputs**:
* **1**: Input shape can be either `[N, C, H]`, `[N, C, H, W]` or `[N, C, H, W, D]`. Then the corresponding output shape will be `[N, C, H_out]`, `[N, C, H_out, W_out]` or `[N, C, H_out, W_out, D_out]`. Output tensor has the same data type as input tensor. * **1**: Input shape can be either `[N, C, H]`, `[N, C, H, W]` or `[N, C, H, W, D]`. Then the corresponding output shape will be `[N, C, H_out]`, `[N, C, H_out, W_out]` or `[N, C, H_out, W_out, D_out]`. Output tensor has the same data type as input tensor.
@@ -77,38 +77,38 @@
* *T*: floating point or integer type. * *T*: floating point or integer type.
**Mathematical Formulation** **Mathematical Formulation**
Output shape calculation based on `auto_pad` and `rounding_type`: Output shape calculation based on `auto_pad` and `rounding_type`:
* `auto_pad = explicit` and `rounding_type = floor` * `auto_pad = explicit` and `rounding_type = floor`
`H_out = floor((H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0]) + 1` `H_out = floor((H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0]) + 1`
`W_out = floor((W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1]) + 1` `W_out = floor((W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1]) + 1`
`D_out = floor((D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2]) + 1` `D_out = floor((D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2]) + 1`
* `auto_pad = valid` and `rounding_type = floor` * `auto_pad = valid` and `rounding_type = floor`
`H_out = floor((H - kernel[0]) / strides[0]) + 1` `H_out = floor((H - kernel[0]) / strides[0]) + 1`
`W_out = floor((W - kernel[1]) / strides[1]) + 1` `W_out = floor((W - kernel[1]) / strides[1]) + 1`
`D_out = floor((D - kernel[2]) / strides[2]) + 1` `D_out = floor((D - kernel[2]) / strides[2]) + 1`
* `auto_pad = same_upper/same_lower` and `rounding_type = floor` * `auto_pad = same_upper/same_lower` and `rounding_type = floor`
`H_out = H` `H_out = H`
`W_out = W` `W_out = W`
`D_out = D` `D_out = D`
* `auto_pad = explicit` and `rounding_type = ceil` * `auto_pad = explicit` and `rounding_type = ceil`
`H_out = ceil((H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0]) + 1` `H_out = ceil((H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0]) + 1`
`W_out = ceil((W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1]) + 1` `W_out = ceil((W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1]) + 1`
`D_out = ceil((D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2]) + 1` `D_out = ceil((D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2]) + 1`
* `auto_pad = valid` and `rounding_type = ceil` * `auto_pad = valid` and `rounding_type = ceil`
`H_out = ceil((H - kernel[0]) / strides[0]) + 1` `H_out = ceil((H - kernel[0]) / strides[0]) + 1`
`W_out = ceil((W - kernel[1]) / strides[1]) + 1` `W_out = ceil((W - kernel[1]) / strides[1]) + 1`
`D_out = ceil((D - kernel[2]) / strides[2]) + 1` `D_out = ceil((D - kernel[2]) / strides[2]) + 1`
* `auto_pad = same_upper/same_lower` and `rounding_type = ceil` * `auto_pad = same_upper/same_lower` and `rounding_type = ceil`
`H_out = H` `H_out = H`
`W_out = W` `W_out = W`
`D_out = D` `D_out = D`
If `H + pads_begin[i] + pads_end[i] - kernel[i]` is not evenly divisible by `strides[i]`, the result is rounded with respect to the `rounding_type` attribute. If `H + pads_begin[i] + pads_end[i] - kernel[i]` is not evenly divisible by `strides[i]`, the result is rounded with respect to the `rounding_type` attribute.
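The case analysis above can be collapsed into one small helper per spatial dimension (a sketch; it assumes the division covers the whole padded extent, as the rounding note implies):

```python
import math

def pooled_dim(i, pad_begin, pad_end, kernel, stride, auto_pad, rounding_type):
    # One spatial dimension of the MaxPool output, per the formulas above.
    round_fn = math.floor if rounding_type == "floor" else math.ceil
    if auto_pad == "explicit":
        return round_fn((i + pad_begin + pad_end - kernel) / stride) + 1
    if auto_pad == "valid":
        return round_fn((i - kernel) / stride) + 1
    return i  # same_upper / same_lower keep the spatial size

h_floor = pooled_dim(5, 1, 1, 2, 2, "explicit", "floor")  # 3
h_ceil = pooled_dim(5, 1, 1, 2, 2, "explicit", "ceil")    # 4
```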
Example 1 shows how *MaxPool* operates with 4D input using 2D kernel and `auto_pad = explicit` Example 1 shows how *MaxPool* operates with 4D input using 2D kernel and `auto_pad = explicit`
@@ -194,7 +194,7 @@ output = [[[[5, 3],
```xml ```xml
<layer ... type="MaxPool" ... > <layer ... type="MaxPool" ... >
<data auto_pad="same_upper" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/> <data auto_pad="same_upper" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
<input> <input>
<port id="0"> <port id="0">
<dim>1</dim> <dim>1</dim>
<dim>3</dim> <dim>3</dim>
@@ -214,7 +214,7 @@ output = [[[[5, 3],
<layer ... type="MaxPool" ... > <layer ... type="MaxPool" ... >
<data auto_pad="explicit" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/> <data auto_pad="explicit" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
<input> <input>
<port id="0"> <port id="0">
<dim>1</dim> <dim>1</dim>
<dim>3</dim> <dim>3</dim>
@@ -234,7 +234,7 @@ output = [[[[5, 3],
<layer ... type="MaxPool" ... > <layer ... type="MaxPool" ... >
<data auto_pad="valid" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/> <data auto_pad="valid" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
<input> <input>
<port id="0"> <port id="0">
<dim>1</dim> <dim>1</dim>
<dim>3</dim> <dim>3</dim>
@@ -251,4 +251,4 @@ output = [[[[5, 3],
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -30,7 +30,7 @@ The main difference between [CTCGreedyDecoder](CTCGreedyDecoder_1.md) and CTCGre
* **Type**: `boolean` * **Type**: `boolean`
* **Default value**: true * **Default value**: true
* **Required**: *No* * **Required**: *No*
* *classes_index_type* * *classes_index_type*
* **Description**: the type of output tensor with classes indices * **Description**: the type of output tensor with classes indices
@@ -38,7 +38,7 @@ The main difference between [CTCGreedyDecoder](CTCGreedyDecoder_1.md) and CTCGre
* **Type**: string * **Type**: string
* **Default value**: "i32" * **Default value**: "i32"
* **Required**: *No* * **Required**: *No*
* *sequence_length_type* * *sequence_length_type*
* **Description**: the type of output tensor with sequence length * **Description**: the type of output tensor with sequence length

View File

@@ -28,13 +28,13 @@ Sequences in the batch can have different length. The lengths of sequences are c
**Inputs** **Inputs**
* **1**: `data` - input tensor with batch of sequences of type `T_F` and shape `[T, N, C]`, where `T` is the maximum sequence length, `N` is the batch size and `C` is the number of classes. **Required.** * **1**: `data` - input tensor with batch of sequences of type *T_F* and shape `[T, N, C]`, where `T` is the maximum sequence length, `N` is the batch size and `C` is the number of classes. **Required.**
* **2**: `sequence_mask` - input tensor with sequence masks for each sequence in the batch of type `T_F` populated with values `0` and `1` and shape `[T, N]`. **Required.** * **2**: `sequence_mask` - input tensor with sequence masks for each sequence in the batch of type *T_F* populated with values `0` and `1` and shape `[T, N]`. **Required.**
**Output** **Output**
* **1**: Output tensor of type `T_F` and shape `[N, T, 1, 1]` which is filled with integer elements containing final sequence class indices. A final sequence can be shorter than the size `T` of the tensor; all elements that do not code sequence classes are filled with `-1`. * **1**: Output tensor of type *T_F* and shape `[N, T, 1, 1]` which is filled with integer elements containing final sequence class indices. A final sequence can be shorter than the size `T` of the tensor; all elements that do not code sequence classes are filled with `-1`.
**Types** **Types**
* *T_F*: any supported floating point type. * *T_F*: any supported floating point type.
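Greedy decoding with repeat merging can be sketched as follows (illustrative only; the blank label is the last class index `C - 1`, and repeat merging corresponds to `ctc_merge_repeated = true`):

```python
import numpy as np

def ctc_greedy_decode(data, sequence_mask, merge_repeated=True):
    # data: [T, N, C]; sequence_mask: [T, N] of 0/1 values.
    # Per step: argmax over classes, collapse consecutive repeats,
    # drop blanks, pad the remainder of each row with -1.
    t_max, n, c = data.shape
    blank = c - 1
    out = np.full((n, t_max), -1, dtype=np.int64)
    for b in range(n):
        length = int(sequence_mask[:, b].sum())
        prev, k = None, 0
        for t in range(length):
            cls = int(np.argmax(data[t, b]))
            if cls != blank and not (merge_repeated and cls == prev):
                out[b, k] = cls
                k += 1
            prev = cls
    return out.reshape(n, t_max, 1, 1)

logits = np.array([[[1.0, 0.0]], [[1.0, 0.0]], [[0.0, 1.0]], [[1.0, 0.0]]])  # T=4, N=1, C=2
decoded = ctc_greedy_decode(logits, np.ones((4, 1)))
# per-step classes [0, 0, blank, 0] -> decoded row [0, 0, -1, -1]
```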
@@ -64,4 +64,4 @@ Sequences in the batch can have different length. The lengths of sequences are c
</port> </port>
</output> </output>
</layer> </layer>
``` ```

View File

@@ -26,7 +26,7 @@ If `special_zero` is set to `true` index of `0` cannot be larger than the rank o
**Inputs**: **Inputs**:
* **1**: `data` a tensor of type T and arbitrary shape. **Required**. * **1**: `data` a tensor of type *T* and arbitrary shape. **Required**.
* **2**: `shape` 1D tensor of type *T_SHAPE* describing output shape. **Required**. * **2**: `shape` 1D tensor of type *T_SHAPE* describing output shape. **Required**.
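The `special_zero` and `-1` handling of the `shape` input can be sketched as a small helper (a hypothetical function for illustration; it assumes at most one `-1` entry):

```python
from functools import reduce

def reshape_output_shape(data_shape, shape, special_zero=True):
    # With special_zero: a 0 in `shape` copies the dimension at the same
    # index from data_shape; a single -1 is inferred so that the total
    # element count is preserved.
    out = []
    for i, s in enumerate(shape):
        out.append(data_shape[i] if (s == 0 and special_zero) else s)
    if -1 in out:
        total = reduce(lambda a, b: a * b, data_shape, 1)
        known = reduce(lambda a, b: a * b, (d for d in out if d != -1), 1)
        out[out.index(-1)] = total // known
    return out

result = reshape_output_shape([2, 3, 4], [0, -1])  # [2, 12]
```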