Selu
Versioned name: Selu-1
Category: Arithmetic unary operation
Short description: Selu calculates the SELU activation function (https://arxiv.org/abs/1706.02515) element-wise on a given tensor.
Detailed Description
For each element of the input tensor, the corresponding element of the output tensor is calculated with the following formula: \f[ Selu(x) = \lambda \left\{\begin{array}{ll} \alpha(e^{x} - 1) & \mbox{if } x \le 0 \\ x & \mbox{if } x > 0 \end{array}\right. \f]
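The formula above can be sketched in NumPy as follows. Note that the operation itself takes `alpha` and `lambda` as inputs; the constants below are the values proposed in the SELU paper and are used here only for illustration.

```python
import numpy as np

def selu(x, alpha, lam):
    # Element-wise SELU: lam * (alpha * (exp(x) - 1)) for x <= 0, lam * x for x > 0.
    x = np.asarray(x, dtype=np.float64)
    return lam * np.where(x > 0.0, x, alpha * (np.exp(x) - 1.0))

# Reference constants from Klambauer et al. (2017), for illustration only.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

print(selu([-1.0, 0.0, 2.0], ALPHA, LAMBDA))
```

Both branches agree at `x = 0`, since `alpha * (exp(0) - 1) = 0`, so the function is continuous there.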
Attributes:
Selu operation has no attributes.
Inputs
- 1: A tensor of type T. Required.
- 2: alpha. A 1D tensor with one element of type T. Required.
- 3: lambda. A 1D tensor with one element of type T. Required.
Outputs
- 1: The result of element-wise operation. A tensor of type T.
Types
- T: any supported floating point type.
Examples
Example 1
<layer ... type="Selu">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>1</dim>
</port>
<port id="2">
<dim>1</dim>
</port>
</input>
<output>
<port id="3">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>