Revise ReLU (#2863)

* remove relu_backprop

* Update ReLU spec

* change inputs and outputs subsections of ReLU spec

* Update Mathematical Formulation subsection

* Update Category of ReLU in spec

* Update Short description of ReLU in spec
Piotr Szmelczynski 2020-10-29 09:37:52 +01:00 committed by GitHub
parent fdbfab8546
commit 0c373ba79b
2 changed files with 12 additions and 15 deletions

@@ -2,9 +2,9 @@
**Versioned name**: *ReLU-1*
**Category**: *Activation*
**Category**: *Activation function*
**Short description**: [Reference](http://caffe.berkeleyvision.org/tutorial/layers/relu.html)
**Short description**: ReLU element-wise activation function. ([Reference](http://caffe.berkeleyvision.org/tutorial/layers/relu.html))
**Detailed description**: [Reference](https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#rectified-linear-units)
@@ -12,13 +12,19 @@
**Mathematical Formulation**
\f[
Y_{i}^{(l)} = \max(0, Y_{i}^{(l - 1)})
\f]
For each element from the input tensor, *ReLU* calculates the corresponding
element in the output tensor with the following formula:
\f[
Y_{i}^{(l)} = \max(0, Y_{i}^{(l - 1)})
\f]
**Inputs**:
* **1**: Multidimensional input tensor. Required.
* **1**: Multidimensional input tensor *x* of any supported numeric type. Required.
**Outputs**:
* **1**: Result of the ReLU function applied to the input tensor *x*. Tensor with shape and type matching the input tensor.
**Example**
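
A minimal, standalone sketch of the element-wise formula above, assuming a flattened `float` buffer (the names and the sample values are illustrative, not taken from the spec):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Input tensor flattened into a 1-D buffer; ReLU acts on each element independently.
    std::vector<float> x = {-2.0f, -0.5f, 0.0f, 1.5f, 3.0f};
    std::vector<float> y(x.size());

    // Y_i = max(0, X_i), exactly the formula from the Mathematical Formulation section.
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        y[i] = std::max(0.0f, x[i]);
    }

    for (float v : y)
    {
        std::printf("%g ", v); // prints: 0 0 0 1.5 3
    }
    std::printf("\n");
    return 0;
}
```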

@@ -33,15 +33,6 @@ namespace ngraph
out[i] = arg[i] > zero ? arg[i] : zero;
}
}
template <typename T>
void relu_backprop(const T* arg, const T* delta_arg, T* out, size_t count)
{
T zero = 0;
for (size_t i = 0; i < count; i++)
{
out[i] = arg[i] > zero ? delta_arg[i] : zero;
}
}
}
}
}
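
The removed `relu_backprop` kernel copied `delta_arg` where the forward input was positive and wrote zero elsewhere. Since `relu(x) > 0` holds exactly when `x > 0`, the same gradient mask can also be recovered from the forward output alone; a minimal sketch of that equivalence, using a hypothetical free function rather than the ngraph reference API:

```cpp
#include <cstddef>

// Hypothetical helper (not part of ngraph): reproduces the behaviour of the
// removed relu_backprop, but keyed off the forward ReLU output instead of the
// original input. relu_out[i] > 0 holds iff the forward input arg[i] > 0,
// so the selected elements are identical.
template <typename T>
void relu_backprop_from_output(const T* relu_out, const T* delta, T* out, std::size_t count)
{
    for (std::size_t i = 0; i < count; i++)
    {
        out[i] = relu_out[i] > T(0) ? delta[i] : T(0);
    }
}
```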