* Specify Einsum-7 operation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Finalize specification for Einsum-7 operation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove duplicate example
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Update doc headers with Einsum operation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply comments after the first review: grammar corrections and sentence rephrasing
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply feedback from tech-writers and online review
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Make additional grammar corrections
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Correct documentation: optimize some sentences, links and examples
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Support capital letters in equation and implicit mode
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Review spec of Split operation
* Address review comments
* Changed detailed description
* Added more description for attribute
* Changed T1 and T2 to T and T_AXIS
* Changed range of values in axis input description
* Convolution: Enhance dynamic shape inference of validate and infer types method
* Convolution: Change onnx test with dynamic shapes to float element type
* Convolution: Remove test instances with integer precision
* Convolution: Add backticks to types in spec
* Convolution: Change element type variable for output element type
* GroupConvolution: Add backticks to types in spec
* GroupConvolution: Enhance dynamic shape inference of validate and infer types method
* GroupConvolution: Remove serialization test instances with integer precision
* GroupConvolutionBackpropData: Remove serialization test instances with integer precision
* GroupConvolutionBackpropData: Enhance dynamic shape inference of validate and infer types method
* Convolution: Add helper function to validate convolution parameters in ref impl
* Convolution: Rewrite lambda to capture spatial dims of filters in validate and infer types
* GroupConvolution: Refactor reference implementation
* Remove call to old implementation of convolution using dilations
* Added a method to validate shapes
* GroupConvolutionBackpropData: Add more type_prop unit test and refactor test names
* Convolution: Extended validation of convolution parameters in reference implementation
* GroupConvolution: Extended validation of group convolution parameters in reference implementation
* GroupConvolutionBackpropData: Add helper function to validate convolution backprop parameters in ref impl
* Clean up unnecessary lines
* BinaryConvolution: Use validate helper function from convolution ref impl
* Convolution: Refactor validate and infer types to improve readability
* BinaryConvolution: Refactor validate and infer types to improve readability
* Convolution: Add explicit tensor shape dims for inputs and outputs in spec
* BinaryConvolution: Add explicit tensor shape dims for inputs and outputs in spec
* GroupConvolution: Add explicit tensor shape dims for inputs and outputs in spec
* Add helper function to infer convolution forward output shape
* Convolution: Refactor validate and infer types to use helpers to infer output shape
* BinaryConvolution: Refactor validate and infer types to use helpers to infer output shape
* GroupConvolutionBackpropData: Fix formula to calculate output shape in validation functions
* Remove symbol to export convolution output shape inference function
* GroupConvolution: Add validation checks for input channels dim of data batch and filter shape
* GroupConvolutionBackpropData: clean up type prop tests
* Convolution: Change element type in onnx unit tests with dyn shapes and convolution nodes
* GroupConvolutionBackpropData: Correct layout of filters input
* GroupConvolution: Deduce groups from inputs shape during output shape inference
* Change spec supported types of convolution operations to any numeric type
* Revert "GroupConvolution: Remove serialization test instances with integer precision"
This reverts commit 781c2570d6.
* Revert "GroupConvolutionBackpropData: Remove serialization test instances with integer precision"
This reverts commit 9a6ac23968.
* Revert "Convolution: Remove test instances with integer precision"
This reverts commit 0b07052a62.
* Revert "Convolution: Change element type in onnx unit tests with dyn shapes and convolution nodes"
This reverts commit c9f5944b6b.
* Revert "Convolution: Change onnx test with dynamic shapes to float element type"
This reverts commit 1f4202b010.
* Allow integral types in validate and infer types method for convolution group of operations
* Add i32 precision in single layer tests for convolution group of operations
* BinaryConvolution: Fix shape of input and output tensors in spec
* Address nitpick comments
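As a reference for the output shape inference mentioned in the convolution commits above, the standard forward convolution formula for one spatial dimension can be sketched as follows (the helper name is illustrative, not the actual function added in these commits):

```python
from math import floor

def conv_output_dim(input_size, kernel_size, stride, pad_begin, pad_end, dilation=1):
    # Effective kernel extent after dilation.
    effective_kernel = dilation * (kernel_size - 1) + 1
    # Standard forward convolution output size for one spatial dim.
    return floor((input_size + pad_begin + pad_end - effective_kernel) / stride) + 1

# 224 input, 3x3 kernel, stride 2, padding 1 on both sides -> 112
assert conv_output_dim(224, 3, 2, 1, 1) == 112
```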
* Review spec of Mish operation
* Add minor changes
* Updated reference paper to a newer version
* Fix typo in SoftPlus op
* Minor change in example section
* Fix minor wording issues
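The Mish and SoftPlus definitions referenced in these spec reviews follow the standard formulas, Mish(x) = x * tanh(SoftPlus(x)) with SoftPlus(x) = ln(1 + e^x); a minimal sketch:

```python
import math

def softplus(x):
    # SoftPlus(x) = ln(1 + exp(x)); log1p for numerical accuracy.
    return math.log1p(math.exp(x))

def mish(x):
    # Mish(x) = x * tanh(SoftPlus(x))
    return x * math.tanh(softplus(x))

assert mish(0.0) == 0.0
```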
* Review spec of PReLU operation
* Address review comments
* Correct second input description
* Add note to clarify input channel dimension
* Add additional equivalent formula for op
* Change reference link to abstract
* Add additional examples
* Address review comments related to wording
* Fix IR layer examples
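The equivalent PReLU formula mentioned in these commits is conventionally max(0, x) + slope * min(0, x); a NumPy sketch with a scalar slope (per the spec's note, the slope is applied along the channel dimension, which is elided here for brevity):

```python
import numpy as np

def prelu(x, slope):
    # PReLU(x) = x if x >= 0 else slope * x,
    # equivalently max(0, x) + slope * min(0, x).
    return np.maximum(0, x) + slope * np.minimum(0, x)

x = np.array([-2.0, -1.0, 0.0, 3.0])
out = prelu(x, slope=0.25)
assert np.allclose(out, [-0.5, -0.25, 0.0, 3.0])
```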
* Review spec of Selu operation
* Fix path for Selu op in opset files
* Remove unnecessary line in example
* Address review comments related to wording
* Refactor specification
* Complete detail description section
* Rewrite mathematical formula
* Fix range of values for min and max attributes
* Add note for conversion policy between float and integral type of input tensor
* Address review comments
* Fix typo in max attribute
* Remove redundant examples
* Created opset7.md to add the specification of the operation FFT.
* Started to write the specification of the operation FFT.
* Added link to the specification of the operation FFT.
* Continued to write the specification of the FFT operation.
* Wrote about the inputs and outputs of FFT.
* Started to write examples.
* Added an example without the 'signal_size' input.
* Added more examples.
* Small fixes.
* Small fix.
* Renamed FFT to DFFT.
* Small fix.
* Small fix.
* Started to write the algorithm of FFT.
* Added asserts.
* Started to write the __call__ method of the DFFT calculation class.
* Fixed category.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the DFFT calculation.
* Completed the algorithm of the DFFT calculation.
* Small fix.
* Renamed operation.
* Added examples of 3D input tensors.
* Covered complex number representation.
* Wrote formulas for FFT.
* Described the start point of trimming.
* Small fixes.
* Small fixes.
* Some fixes.
* Added a note.
* Added a description of the calculation of the output shape.
* Added examples with unsorted axes and with (-1) in signal_size.
* Fixed range of axes indices.
* Small fix.
* Small change.
* Added T_SIZE type.
* Added negative axes support.
* Some fixes.
* Some fixes.
* Wrote a draft of the specification of the IFFT operation.
* Small fix.
* Renamed the operation IFFT to DIFFT. Deleted an attribute.
* Renamed operation to IDFT.
* Deleted int8 type.
* Added examples of 3D input tensors.
* Added formulas and text.
* Fixed ie_docs.xml.
* Fixed sign in the IFFT formula.
* Some fixes.
* Added examples with unsorted axes and with (-1) in signal_size.
* Some fixes.
* Small fix.
* Small fixes.
* Added type T_SIZE.
* Deleted redundant sentence.
* Added support for negative axes.
* Some changes.
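The DFT formulas written out in this spec follow the textbook definition, X[k] = sum_n x[n] * exp(-2*pi*i*k*n / N), with IDFT flipping the exponent sign and dividing by N. A naive NumPy sketch (the op itself represents complex values as real/imaginary pairs in the last dimension; a complex dtype stands in here):

```python
import numpy as np

def dft_1d(signal):
    # Naive O(N^2) 1-D DFT over a complex input; matches np.fft.fft.
    n = len(signal)
    k = np.arange(n)
    twiddle = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return twiddle @ signal

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
assert np.allclose(dft_1d(x), np.fft.fft(x))
```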
* Mod operation specification refactoring.
* Add dummy broadcast_rules.md.
* Minor fixes, e.g. capitalized operation names, fixed typos.
* Add comment about division by zero.
* Division by zero update.
Co-authored-by: jdanieck <jozef.daniecki@intel.com>
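Regarding the division-by-zero comment above: assuming Mod uses truncated semantics (the result takes the sign of the dividend, like C's fmod), its contrast with floor mod can be illustrated as follows; division by zero itself is left undefined by the spec wording:

```python
import math

# Truncated mod: result sign follows the dividend.
assert math.fmod(7, 3) == 1.0
assert math.fmod(-7, 3) == -1.0

# Python's % is floor mod: result sign follows the divisor.
assert (-7) % 3 == 2

# Division by zero is undefined for integer inputs.
```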
* Added support for Gelu-6 to the MO
* Adding Gelu-6 to ngraph and python API + some tests
* Fixed typo in the Gelu approximation mode
* Fixed Gelu-6 reference implementation for Tanh mode
* Added transformation to downgrade v6::Gelu to v2::Gelu
* Added specification for the Gelu-6
* Code style fixes
* The Gelu-6 operation specification update
* Fixed compilation issue in reference implementation for Gelu
* Fix compilation issues for some OSs
* Code style fix
* One more cpplint issue fix
* Fixed Gelu6 reference implementation compilation on Windows.
* Code style fix
* Fixed various ngraph unit tests
* Code style check
* Reverted Gelu-2 to be fused op
* Fixed Gelu6 downgrade transformation
* Added unit test for Gelu6Downgrade transformation
* Update copyright year
* Updated copyright year
* Replaced tab characters with 4 spaces in IR reader tests
* Code style fixes
* Added default value for GeluApproximation mode for Gelu-6 op
* Fixed code style for Gelu-6
* Changed order of parameters for the Gelu evaluate to potentially avoid backward compatibility issues with ARM plugin
* Fixed code style
* Introduced opset7. Moved Gelu6 to opset7
* Fixed non-updated transformation
* Fixed opset version in ngraph Python API for Gelu operation
* Fixed typo in the opset number in the documentation
* Reverted some changes related to Gelu6
* Updated MO to produce Gelu7
* Updated unit tests for Gelu
* Updated Gelu7 specification
* Changed gelu reference implementation. Added opset7 to Python packages
* Updated Python API tests for Gelu operation
* Code style fix
* Marked get_approximation_mode function as const
* Added missing "const" qualifier
* Fixed code style issues in tests
* Added extractor for MxNet operation Gelu
* Spelling issues fix
* Updated MxNet supported symbols
* Added NGRAPH_OP_SCOPE for Gelu7 validate_and_infer_types
* Fixed a typo in the comment
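The two Gelu approximation modes referenced in these commits, the exact erf form and the tanh approximation, follow the standard formulas; a minimal sketch:

```python
import math

def gelu_erf(x):
    # Exact Gelu: x * Phi(x), with Phi the Gaussian CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Standard tanh approximation of Gelu.
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# The two modes agree closely for moderate inputs.
assert abs(gelu_erf(1.0) - gelu_tanh(1.0)) < 1e-3
```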
* Update spec for ADD operation.
* Change back quote for attribute name and value.
* Update link for auto_broadcast attribute.
* Move detailed description section, add auto_broadcast attribute to examples.
* Remove github link in numpy attribute description and replace it with local link.
* Add broadcast_rules.md for specific broadcast rules.
* Add new line at the end of broadcast_rules.md, modify font for Add inputs.
* Change link for Broadcast_1.md
* Add description of broadcast in broadcast_rules.md
* Correct output shape description.
* Add bidirectional broadcast description and new examples.
* Add description for auto_broadcast types: None and PDPD.
* Add examples for pdpd and bidirectional broadcasts, add pdpd attribute for Add, modify Broadcast ops to refer to the broadcast_rules file.
* Duplicated 'openvino_docs_ops_broadcast_rules' label change.
* Add example with a scalar for bidirectional broadcast.
* Add new lines for examples.
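The numpy-style and bidirectional broadcast rules documented in broadcast_rules.md can be illustrated with NumPy (pdpd mode, which aligns shapes from a given start axis, is omitted here):

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((3, 1))  # lower rank, size-1 trailing dim

# numpy-style broadcast aligns shapes from the right:
# (2, 3, 4) + (3, 1) -> (2, 3, 4)
out = a + b
assert out.shape == (2, 3, 4)

# A scalar broadcasts bidirectionally against any shape.
assert (a + np.float32(5)).shape == (2, 3, 4)
```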
* Rearrange the spec to match the criteria
* Add Sin to the list of unary operators for unit tests
* Add detailed description for Sin
* Remove LaTeX tags for theta symbol
* Add supported rank for the input tensor
* Add link to Wikipedia
* Add description for input a
* BinaryConvolution specification refactoring.
* Align tensor types to current CPU implementation.
* Remove 1D & 3D cases because the CPU plugin supports only the 2D case.
* Add pad_value to the example.
* Add computation algorithm for mode 'xnor-popcount'.
* Computation formula refactoring.
* Fix typo in the description.
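The 'xnor-popcount' computation mode maps {0, 1} bits to {-1, +1} values, so a binary dot product reduces to an XNOR followed by a population count: dot = 2 * popcount(XNOR) - n. A pure-Python sketch (illustrative only, not the plugin implementation):

```python
def xnor_popcount_dot(a_bits, b_bits):
    # Bits encode +1 (bit = 1) and -1 (bit = 0); matching bits are
    # exactly the set bits of XNOR(a, b).
    n = len(a_bits)
    matches = sum(1 for a, b in zip(a_bits, b_bits) if a == b)
    return 2 * matches - n

# (+1, -1, +1, +1) . (+1, +1, -1, +1) = 1 - 1 - 1 + 1 = 0
assert xnor_popcount_dot([1, 0, 1, 1], [1, 1, 0, 1]) == 0
```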