* Convolution: Enhance dynamic shape inference of validate and infer types method
* Convolution: Change onnx test with dynamic shapes to float element type
* Convolution: Remove test instances with integer precision
* Convolution: Add backticks to types in spec
* Convolution: Change element type variable for output element type
* GroupConvolution: Add backticks to types in spec
* GroupConvolution: Enhance dynamic shape inference of validate and infer types method
* GroupConvolution: Remove serialization test instances with integer precision
* GroupConvolutionBackpropData: Remove serialization test instances with integer precision
* GroupConvolutionBackpropData: Enhance dynamic shape inference of validate and infer types method
* Convolution: Add helper function to validate convolution parameters in ref impl
* Convolution: Rewrite lambda to capture spatial dims of filters in validate and infer types
* GroupConvolution: Refactor reference implementation
* Remove call to old implementation of convolution using dilations
* Added method to validate shapes
* GroupConvolutionBackpropData: Add more type_prop unit test and refactor test names
* Convolution: Extended validation of convolution parameters in reference implementation
* GroupConvolution: Extended validation of group convolution parameters in reference implementation
* GroupConvolutionBackpropData: Add helper function to validate convolution backprop parameters in ref impl
* Clean up unnecessary lines
* BinaryConvolution: Use validate helper function from convolution ref impl
* Convolution: Refactor validate and infer types to improve readability
* BinaryConvolution: Refactor validate and infer types to improve readability
* Convolution: Add explicit tensor shape dims for inputs and outputs in spec
* BinaryConvolution: Add explicit tensor shape dims for inputs and outputs in spec
* GroupConvolution: Add explicit tensor shape dims for inputs and outputs in spec
* Add helper function to infer convolution forward output shape
* Convolution: Refactor validate and infer types to use helpers to infer output shape
* BinaryConvolution: Refactor validate and infer types to use helpers to infer output shape
* GroupConvolutionBackpropData: Fix formula to calculate output shape in validation functions
* Remove symbol to export convolution output shape inference function
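The output-shape inference helper mentioned above computes the standard forward-convolution formula per spatial axis. A minimal Python sketch of that formula (the function name and signature are illustrative, not the actual nGraph helper):

```python
def infer_conv_output_dim(input_dim, filter_dim, stride, dilation,
                          pad_begin, pad_end):
    """Standard forward-convolution output size for one spatial axis."""
    # Effective kernel extent once dilation is applied.
    dilated_filter = dilation * (filter_dim - 1) + 1
    padded_input = input_dim + pad_begin + pad_end
    return (padded_input - dilated_filter) // stride + 1

# 5x5 input, 3x3 kernel, stride 1, no padding -> 3x3 output
assert infer_conv_output_dim(5, 3, 1, 1, 0, 0) == 3
# 7 wide, 3-tap kernel, stride 2, pad 1 on each side -> 4
assert infer_conv_output_dim(7, 3, 2, 1, 1, 1) == 4
```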
* GroupConvolution: Add validation checks for input channels dim of data batch and filter shape
* GroupConvolutionBackpropData: clean up type prop tests
* Convolution: Change element type in onnx unit tests with dyn shapes and convolution nodes
* GroupConvolutionBackpropData: Correct layout of filters input
* GroupConvolution: Deduce groups from inputs shape during output shape inference
* Change spec supported types of convolution operations to any numeric type
* Revert "GroupConvolution: Remove serialization test instances with integer precision"
This reverts commit 781c2570d6.
* Revert "GroupConvolutionBackpropData: Remove serialization test instances with integer precision"
This reverts commit 9a6ac23968.
* Revert "Convolution: Remove test instances with integer precision"
This reverts commit 0b07052a62.
* Revert "Convolution: Change element type in onnx unit tests with dyn shapes and convolution nodes"
This reverts commit c9f5944b6b.
* Revert "Convolution: Change onnx test with dynamic shapes to float element type"
This reverts commit 1f4202b010.
* Allow integral types in validate and infer types method for convolution group of operations
* Add i32 precision in single layer tests for convolution group of operations
* BinaryConvolution: Fix shape of input and output tensors in spec
* Address nitpick comments
* Utils: add make_try_fold, clone_try_fold (templated node creation with an attempt to constant-fold it)
* RTTI for ArithmeticReduction(KeepDims)
* Added ngraph::get_default_order overloads for dynamic shape and rank
* [ Transpose sinking ] Transpose->FQ->Reduce to FQ->Reduce->Transpose
* Style: deleted empty line
* RTTI in Reduction operations
* RTTI for LogicalReductionKeepDims
* Transpose: optimizations moved from algebraic simplification to TransposeSinking
* renamed file
* Fix test
* keep_dims is always initialized
* Apply suggestions from code review
Co-authored-by: Gleb Kazantaev <gleb.nnstu@gmail.com>
Co-authored-by: Gleb Kazantaev <gleb.nnstu@gmail.com>
* Remove FusedOp from Mod operation
* add backend and type_prop tests for mod operator
* Convert Mod to take autobroadcast, matching binary elementwise arithmetic ops
* add type_prop/mod.cpp to CMakeLists.txt
* fix style
* fix style v2
* remove evaluate method and add backend test for negative numbers
* add copyright for type_prop/mod.cpp
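For context on the backend test with negative numbers: Mod is assumed here to follow C-style truncated remainder, where the result takes the sign of the dividend. A minimal sketch of that assumption:

```python
import math

def mod(a, b):
    """C-style (truncated) remainder: the result's sign follows the dividend."""
    return math.fmod(a, b)

assert mod(7.0, 3.0) == 1.0
assert mod(-7.0, 3.0) == -1.0  # Python's own % would give 2 here
```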
* Replace .format with f-strings in cross_check_tool
* Replace .format with f-strings in utils.py
* Replace .format with f-strings in benchmark tool
* Replace remaining .format calls with f-strings in benchmark tool
* Add f-string after update
* Fix some lines
* Fix utils
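The .format-to-f-string conversions above are mechanical; for illustration (the names and message are made up):

```python
name, latency = "benchmark_app", 4.2

# Before: str.format with positional placeholders
msg_old = "Tool {} reported {:.1f} ms".format(name, latency)
# After: equivalent f-string with the same format spec
msg_new = f"Tool {name} reported {latency:.1f} ms"

assert msg_old == msg_new
```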
* Add more span util and test
* apply review suggestions
* add throw in `at` operator of Span
Co-authored-by: Patryk Elszkowski <patryk.elszkowki@intel.com>
* Added reference implementation for Roll operation.
* Small corrections.
* Removed duplicate test disabling.
* Changed implementation to use manual data manipulation.
* Removed unnecessary function.
* Corrected tests; added conversion of axes and shift to int64.
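As a sketch of the Roll semantics being implemented (1-D only; the actual reference implementation handles arbitrary axes and multi-dimensional data):

```python
def roll_1d(data, shift):
    """Cyclically shift a 1-D list by `shift` positions (positive = toward higher indices)."""
    n = len(data)
    if n == 0:
        return list(data)
    shift %= n  # Python's % already maps negative shifts into [0, n)
    return data[-shift:] + data[:-shift] if shift else list(data)

assert roll_1d([1, 2, 3, 4, 5], 2) == [4, 5, 1, 2, 3]
assert roll_1d([1, 2, 3, 4, 5], -1) == [2, 3, 4, 5, 1]
```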
* nGraph shell implementation of Gather-7
* review comments applied
* style_apply
* applied @ilyachur's comments
* style-apply
* applied @popovaan's comments
* changed ieFuncTest for Gather (now created from op version instead of opset); added check for batch_dims
* clang_format_fix and some other corrections
* returned back opset3::Gather in ieFuncTests
* added `constexpr` to `AXIS_NOT_SET_VALUE` as @vgavrilo suggested
* removed AXIS_NOT_SET_VALUE and added proper support when axis is not specified
* clang_format_fix_all
* applied review comments: added support for dynamic axis
* applied review comments, minor corrections in gather_elements
* Review spec of Mish operation
* Add minor changes
* Updated reference paper to a newer version
* Fix typo in SoftPlus op
* Minor change in example section
* Fix minor wording issues
* Review spec of PReLU operation
* Address review comments
* Correct second input description
* Add note to clarify input channel dimension
* Add additional equivalent formula for op
* Change reference link to abstract
* Add additional examples
* Address review comments related to wording
* Fix IR layer examples
* Caching support of multi-device scenario
- IE_CORE: introduce CacheGuard, which creates locks for a specific cache identified by its hash
- Added functional tests for it
Fixes of Thread Sanitizer failures:
- ngraph::Serialize - m_ref[i] can create a new element; cast to 'const' to avoid this
- ngraph::get_opset operations: reworked to use std::call_once instead of a double-checked bool
* Added docs for ie_cache_guard.hpp
* Fix Debian 9 compilation issue
* Fix build for CentOS 6
Added assert to verify that table of locked hashes is empty on destruction
* Fixed review comments
* doc: update README for C samples, add comments
* samples: revert extension library settings for CPU only
* add validated image formats to samples README
* add output to c samples README
* add device check for xml config option
* Review spec of Selu operation
* Fix path for Selu op in opset files
* Remove unnecessary line in example
* Address review comments related to wording