* Review adaptive max pool shape inference
* Review AvgPool and MaxPool
* Review convolution operator
* Review GroupConvolution shape inference
* Review ConvolutionBackpropData operator
* Review GroupConvolutionBackpropData op
* Review BinaryConvolution operator
- add common bases for convolution ops
- refactor convolution ops
* Review DeformableConvolution operator
* Use new convolution shape_infer in GPU
* Fix build and test issues
* Correctly set output spatial shape in default-constructed backprop convolutions
* The convolution shape_infer takes pads as parameters
The external padding can come from the operator or from another class's padding properties; shape_infer should not modify the operator's padding when called from a plugin
* Apply code formatting
* Fix padding validation and update
* Max and Avg pool don't update op properties from plugin shape inference
- use ShapeInferWithPadding for pooling operators
* Remove unused function in shape_inference
* Fix evaluates in MaxPool
* Relax convolution shape infer inputs size check
* Remove unused entryFallbackWithPadding class
* Remove unused dilations variable
* Remove unused resize_attributes from max_pool_base
---------
Co-authored-by: mitruska <katarzyna.mitrus@intel.com>
* Allow the user to set input and output precision for the timetest tool
* Update run_timetest.py with the ip and op options as well
* Use only one getType function
* Add extra line at the end of the file
* Remove unused parameters
* Update comment accordingly
---------
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
* Remove suppression Wno-delete-non-abstract-non-virtual-dtor
* Fixed Allocator warning
* Suppress warning for GPU plugin
* Skip warning for GNA
* Fixed preprocessing
* Added virtual destructor for base plugin class
* Some fix for CPU
* Suppress for CPU
* Fixed any
* Fixed meta
* Disable warning for paddle
* Fixed Allocator tests
* Move suppress to paddle
* Fixed benchmark_app
* add reshape shapeinfer in cpu plugin
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* add squeeze and unsqueeze
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* add precision i8 i64 on test
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix out-of-bounds risk in code
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* test performance of this PR
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix code issue
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Revert "test performance of this PR"
This reverts commit f4f9f002de28d03bc1c55c24067f75b74824904c.
* address reviewer comments
fix throw message
avoid creating an ov::Shape instance
remove i8 test case
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix failing PyTorch layer test
input shape (1,0) with output pattern (-1) is a valid input
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix windows compile issue
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix rebase mistake
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
---------
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* add opextension support
* support opconversion
* fix ambiguous test constructor
* fix CI failure
* add tag to avoid compiler ambiguity
* move tests to layer_tests & remove PaddleTag
* static cast
* use create_ov_node_by_name
---------
Co-authored-by: Luo Cheng <cheng.luo@intel.com>
* Fix failed unit-tests on dGPU
+ modified fully_connected_random_test_i8_3d to avoid ambiguity
+ oneDNN does NOT support i64 type for reorder. Added exception.
+ bugfix in prepare_primitive_fusing regarding the activation-function exception
+ Added exception logic in is_node_for_onednn to select the OCL type for dynamic shapes
Signed-off-by: Min, Byungil <byungil.min@intel.com>
* Reference impl for interpolate-11 init
* ND support init
* Tests clean up
* Add evaluate method for Interpolate-11
* New version tests init
* Type parametrized tests
* Tests duplication clean up and reusage of v4 test cases
* Add clipping to the type bounds
* Style fix
* Add float type tests
* Fix default ports values
* Commented code clean up
* Add passing cube_coeff param
* Tests clean up
* Add separate namespace
* Adjust variable names
* Adjust function name
* Use vectors instead of raw ptrs
* update func to static inline
* Adjust types
* Add Interpolate-11 to template plugin evaluates map
* Revert interpolate-11 core evaluate support
* Use const ref to filter
* Use static cast
* Update link
* Enable MapAllocator in IR Frontend
* Fix `ov_infer_request_ppp` test
With `mmap()`-ing of IR, the .bin file can't be deleted until it is unmapped.
This also revealed a leak in the test
* Add comment to Win `CreateFile()` regarding
FILE_SHARE_DELETE
* Unmap .bin file before IR files deletion
Wait for ov::Model deletion to trigger .bin file unmapping before deleting the IR files
* ClangFormat
* Add `use_map_allocator` switch in FE
When the FE is used directly (e.g. via MO), `mmap()` is OFF; when the FE is used via Core, `mmap()` is ON.
* Use shape inference with padding instead of the fallback for DeformableConvolution from opset1
* Update convertPadding function to be a template