* Remove the -Wno-delete-non-abstract-non-virtual-dtor suppression
* Fixed Allocator warning
* Suppress the warning for the GPU plugin
* Skip the warning for GNA
* Fixed preprocessing
* Added a virtual destructor to the base plugin class
* Additional fix for CPU
* Suppress the warning for CPU
* Fixed Any
* Fixed meta
* Disable the warning for Paddle
* Fixed Allocator tests
* Move the suppression to Paddle
* Fixed benchmark_app
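The destructor change above can be illustrated with a minimal sketch (class names here are hypothetical, not the actual plugin interface): deleting a derived object through a base-class pointer whose destructor is non-virtual is exactly what -Wdelete-non-abstract-non-virtual-dtor warns about, and a virtual destructor in the base class resolves it.

```cpp
#include <memory>

// Hypothetical base class, not the real OpenVINO plugin interface.
struct PluginBase {
    virtual void infer() {}
    virtual ~PluginBase() = default;  // virtual dtor: deleting via PluginBase* is well defined
};

struct CpuPlugin : PluginBase {
    void infer() override {}
};

int main() {
    std::unique_ptr<PluginBase> plugin = std::make_unique<CpuPlugin>();
    plugin->infer();
}  // without the virtual destructor, this delete through PluginBase* would trigger the warning
```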
* Add Reshape shape inference in the CPU plugin
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Add Squeeze and Unsqueeze shape inference
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Add i8 and i64 precision test cases
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Fix potential out-of-bounds access
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* test performance of this PR
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix code issue
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Revert "test performance of this PR"
This reverts commit f4f9f002de28d03bc1c55c24067f75b74824904c.
* Address reviewer comments:
fix the throw message
do not create an ov::Shape instance
remove the i8 test case
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Fix failed PyTorch layer test:
input shape (1, 0) with output pattern (-1) is a valid input
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Fix Windows compile issue
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* Fix rebase mistake
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
---------
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
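A minimal sketch of the -1 handling described above, assuming a simplified Reshape pattern resolution (illustrative only, not the CPU plugin's actual shape_infer; the special-zero semantics are omitted). An input shape such as {1, 0} with pattern {-1} resolves to {0}.

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Illustrative only: resolve a Reshape output pattern containing at most one -1.
std::vector<int64_t> resolve_reshape_pattern(const std::vector<int64_t>& input_shape,
                                             const std::vector<int64_t>& pattern) {
    int64_t input_product = 1;
    for (auto d : input_shape)
        input_product *= d;

    std::vector<int64_t> out(pattern);
    int64_t known_product = 1;
    int64_t minus_one_pos = -1;
    for (size_t i = 0; i < out.size(); ++i) {
        if (out[i] == -1) {
            if (minus_one_pos != -1)
                throw std::runtime_error("Only one -1 is allowed in the Reshape pattern");
            minus_one_pos = static_cast<int64_t>(i);
        } else {
            known_product *= out[i];
        }
    }
    if (minus_one_pos != -1) {
        // An input with a zero dimension (e.g. {1, 0}) and pattern {-1} is valid: -1 resolves to 0.
        out[minus_one_pos] = (known_product == 0) ? 0 : input_product / known_product;
    }
    return out;
}
```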
* Add OpExtension support
* Support OpConversion
* Fix ambiguous test constructor
* Fix CI failure
* Add a tag to avoid compiler ambiguity
* Move tests to layer_tests and remove PaddleTag
* Use static_cast
* Use create_ov_node_by_name
---------
Co-authored-by: Luo Cheng <cheng.luo@intel.com>
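For context on the OpExtension support, here is a minimal sketch of registering a custom operation so a frontend can convert it; the Identity op below is purely illustrative and not the Paddle operator handled in these commits.

```cpp
#include <memory>

#include <openvino/core/op_extension.hpp>
#include <openvino/op/op.hpp>
#include <openvino/runtime/core.hpp>

// Hypothetical custom operation; real extensions implement full validation.
class Identity : public ov::op::Op {
public:
    OPENVINO_OP("Identity");

    Identity() = default;
    explicit Identity(const ov::Output<ov::Node>& arg) : ov::op::Op({arg}) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        // Output keeps the input element type and shape.
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override {
        return std::make_shared<Identity>(new_args.at(0));
    }
};

int main() {
    ov::Core core;
    // Registers the op for the core and for the frontends it loads.
    core.add_extension(ov::OpExtension<Identity>());
}
```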
* Fix failed unit-tests on dGPU
+ Modified fully_connected_random_test_i8_3d to avoid ambiguity
+ oneDNN does NOT support the i64 type for reorder; added an exception
+ Fixed a bug in prepare_primitive_fusing regarding the activation-function exception
+ Added exception logic in is_node_for_onednn to select the OCL type for dynamic shapes
Signed-off-by: Min, Byungil <byungil.min@intel.com>
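The oneDNN/i64 note can be sketched as a hypothetical predicate (the names and enums below are invented for illustration; the real logic lives in the GPU plugin's is_node_for_onednn and is considerably more involved).

```cpp
// Hypothetical sketch only: reorders with i64 data, and dynamic-shape nodes in this
// simplified model, fall back to the OCL implementation because oneDNN reorder
// does not support i64.
enum class impl_type { ocl, onednn };
enum class data_type { f32, f16, i8, i64 };

impl_type choose_reorder_impl(data_type dt, bool is_dynamic) {
    if (dt == data_type::i64)  // oneDNN reorder has no i64 support
        return impl_type::ocl;
    if (is_dynamic)            // dynamic shapes handled by the OCL path here
        return impl_type::ocl;
    return impl_type::onednn;
}
```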
* Initial reference implementation for Interpolate-11
* Initial N-D support
* Tests clean up
* Add evaluate method for Interpolate-11
* New version tests init
* Type parametrized tests
* Test duplication clean-up and reuse of v4 test cases
* Add clipping to the type bounds
* Style fix
* Add float type tests
* Fix default port values
* Commented code clean up
* Pass the cube_coeff parameter
* Tests clean up
* Add separate namespace
* Adjust variable names
* Adjust function name
* Use vectors instead of raw ptrs
* Make the function static inline
* Adjust types
* Add Interpolate-11 to template plugin evaluates map
* Revert interpolate-11 core evaluate support
* Use const ref to filter
* Use static cast
* Update link
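The "clipping to the type bounds" step can be sketched as a small helper that rounds and saturates an intermediate interpolation result to the output element type (illustrative only, not the template plugin's reference code; intended for 8/16/32-bit element types).

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <type_traits>

// Illustrative helper: round and saturate an intermediate interpolation result
// to the bounds of the output element type T. Suitable for 8/16/32-bit types;
// 64-bit integers would need extra care because of double rounding.
// e.g. round_and_clip<uint8_t>(300.7) == 255
template <typename T>
T round_and_clip(double value) {
    if (std::is_integral<T>::value)
        value = std::nearbyint(value);
    const double lo = static_cast<double>(std::numeric_limits<T>::lowest());
    const double hi = static_cast<double>(std::numeric_limits<T>::max());
    return static_cast<T>(std::min(std::max(value, lo), hi));
}
```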
* Enable MapAllocator in IR Frontend
* Fix `ov_infer_request_ppp` test
With `mmap()`ing of the IR, the .bin file can't be deleted until it is unmapped.
This also revealed a leak in the test.
* Add a comment to the Windows `CreateFile()` call regarding FILE_SHARE_DELETE
* Unmap the .bin file before deleting the IR files
Wait for the ov::Model to be destroyed, which triggers unmapping of the .bin file, before the IR files are deleted
* ClangFormat
* Add a `use_map_allocator` switch in the FE
When the FE is used directly (e.g. via MO), `mmap()` is OFF; when the FE is used via Core, `mmap()` is ON.
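A rough POSIX sketch of the memory-mapping idea (this is not the IR FE's MapAllocator; on Windows the analogous mapping created via `CreateFile()` is what blocks deletion of the .bin file unless FILE_SHARE_DELETE is used, which is why the tests above must unmap first).

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#include <cstddef>
#include <stdexcept>

// Illustrative POSIX-only sketch of mapping a .bin weights file read-only.
struct MappedFile {
    void* data = nullptr;
    size_t size = 0;
    int fd = -1;

    explicit MappedFile(const char* path) {
        fd = ::open(path, O_RDONLY);
        if (fd < 0)
            throw std::runtime_error("cannot open file");
        struct stat st{};
        if (::fstat(fd, &st) != 0)
            throw std::runtime_error("cannot stat file");
        size = static_cast<size_t>(st.st_size);
        data = ::mmap(nullptr, size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED)
            throw std::runtime_error("mmap failed");
    }

    // Until this destructor runs (i.e. until the object holding the weights is
    // destroyed), the mapping keeps the file in use; on Windows the equivalent
    // mapping blocks deleting the .bin file unless FILE_SHARE_DELETE is set.
    ~MappedFile() {
        if (data && data != MAP_FAILED)
            ::munmap(data, size);
        if (fd >= 0)
            ::close(fd);
    }
};
```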
* Review adaptive max pool shape inference
* Review AvgPool and MaxPool
* Review convolution operator
* Review GroupConvolution shape inference
* Review ConvolutionBackpropData operator
* Review GroupConvolutionBackpropData op
* Review BinaryConvolution operator
- add common bases for convolution ops
- refactor convolution ops
* Review DeformableConvolution operator
* Use new convolution shape_infer in GPU
* Fix build and test issues
* Correctly set the output spatial shape
in default-constructed backprop convolutions
* The convolution shape_infer takes pads as parameters
The external padding can come from the operator or from other classes' padding properties; shape_infer should not modify the operator's padding when called from a plugin
* Apply code formatting
* Fix padding validation and update
* Use shape inference with padding instead of the fallback for DeformableConvolution from opset1
* Make the convertPadding function a template
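The spatial-dimension rule these shape_infer reviews rely on can be written out for a single axis; a sketch with the pads passed in explicitly, as the notes above require, so the operator's own padding is left untouched.

```cpp
#include <cstdint>

// Illustrative formula for one spatial dimension of a forward convolution.
// e.g. input=224, kernel=3, stride=2, dilation=1, pad_begin=pad_end=1 -> 112
int64_t conv_output_dim(int64_t input,
                        int64_t kernel,
                        int64_t stride,
                        int64_t dilation,
                        int64_t pad_begin,
                        int64_t pad_end) {
    const int64_t dilated_kernel = dilation * (kernel - 1) + 1;
    return (input + pad_begin + pad_end - dilated_kernel) / stride + 1;
}
```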
* Update kernel_ids using hash values
* Change set to unordered_map for kernels_code
* Replace unique_id with hash value
* Remove hash_val params
* Remove redundant code (#16262)
- Remove unique_id in program_node
- Remove gen_kernel_id
- Remove set_kernels_source
- Remove remove_kernels
- Remove kernel_idx in kernels_cache
* Use kernel_impl_params instead of kernel_id
* Divide the batch when entry_points are duplicated
* Roll back removing unique_id
* Fix get_kernel failure issue (#102467)
- Modify the hash function of custom_gpu_primitive and generic_layer
- Add operator== to generic_layer for the _kernels map in kernels_cache
- Fix invalid kernel_impl_params related to a unique_ptr life-cycle issue
* Improve kernels_cache (#102467)
* Move add_kernels_source step to build_implementations
* Replace the kernels_code key with kernel_impl_params
* Return kernel vector in get_kernels
* Rename the function to get_kernels (#102467)
* Fix functions related to graph serialization (#102467)
* Fix failure to run dynamic model (#102467)
* Add unit test
* Code review follow-up
- Add const to input params
- Add missing code to check kernel duplication in kernels_cache
* Add const to input params (#102467)
* [GPU] Update hash and operator== for generic_layer and custom_gpu_primitive (#102467)
* [GPU] override get_kernels_source in generic_layer and custom_gpu_primitive (#102467)
* [GPU] Fix onednn build error (#102467)
* [GPU] Fix Lin build error (#102467)
* [GPU] kernels_cache::get_kernels returns a vector of clones of cldnn::kernel (#102467)
* Updated serialization logic for the improved kernels cache (#16262)
* primitive key kernel cache for serialization
* Kernel serialization with a hash of the kernel binaries
* fix kernel cache init function for deserialization
* Removed unnecessary code
* [GPU] Update comment and fix test failure (#16262)
* [GPU] Fix custom_gpu_primitive unit test failures (#16262)
* [GPU] Improved kernels cache serialization (#16262)
* removed hash in serialization logic
* Update to avoid creating a new kernels_cache for serialization
* code refactoring in serialization logic
* [GPU] Follow-up code review (#16262)
* [GPU] Modify lock (#16262)
* [GPU] Fix custom_gpu_primitive unit test failure (#16262)
---------
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
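The hashing and equality work above boils down to making the key type usable in an unordered_map; a generic sketch follows, where KernelKey is an invented stand-in for kernel_impl_params, not the plugin's real type.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Invented stand-in for kernel_impl_params: an unordered_map key needs both
// a hash and operator==.
struct KernelKey {
    std::string entry_point;
    std::string build_options;

    bool operator==(const KernelKey& other) const {
        return entry_point == other.entry_point && build_options == other.build_options;
    }
};

namespace std {
template <>
struct hash<KernelKey> {
    size_t operator()(const KernelKey& k) const noexcept {
        const size_t h1 = std::hash<std::string>{}(k.entry_point);
        const size_t h2 = std::hash<std::string>{}(k.build_options);
        return h1 ^ (h2 + 0x9e3779b9 + (h1 << 6) + (h1 >> 2));  // simple hash combine
    }
};
}  // namespace std

int main() {
    // Cache keyed by the hashed parameters instead of a string kernel_id.
    std::unordered_map<KernelKey, int /* compiled kernel handle */> cache;
    cache[{"fully_connected_gpu", "-DUNIT=half"}] = 42;
    return cache.count({"fully_connected_gpu", "-DUNIT=half"}) ? 0 : 1;
}
```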
* Review ROIPooling class
- check interval shape and label propagation
- add template shape_infer
- add shape infer into cpu plugin
- add test with StaticShape
* Use get_output_roi instead of get_output_size
* Add missing includes
* Review PSROIPooling operator
- review interval and label propagation
- add template shape_infer implementation
- add shape_infer to cpu plugin
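The ROIPooling rule checked by the template shape_infer can be summarized with an illustrative helper (static shapes only): the output is [num_rois, channels, pooled_h, pooled_w], where num_rois comes from the ROIs input [num_rois, 5] and channels from the feature map [N, C, H, W].

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: ROIPooling output shape from a [N, C, H, W] feature map
// and a [num_rois, 5] ROIs input.
std::vector<int64_t> roi_pooling_output_shape(const std::vector<int64_t>& feat_shape,
                                              const std::vector<int64_t>& rois_shape,
                                              int64_t pooled_h,
                                              int64_t pooled_w) {
    const int64_t num_rois = rois_shape[0];
    const int64_t channels = feat_shape[1];
    return {num_rois, channels, pooled_h, pooled_w};
}
```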
* Add snippets dependency
* Removed the dependency again
- added an INTEL_CPU condition on snippets configuration -> no dependency when configured without CPU
* Conditionally disable snippets_ngraph_functions if inference_engine_snippets is not configured
---------
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Move all openvino_conversion routines into utils. Avoid using Squeeze without an axis,
which can create a dynamic output rank.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
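A minimal sketch of the Squeeze note above, assuming the standard OpenVINO C++ API: with a partially dynamic input, an axis-less Squeeze yields a dynamic output rank, while an explicit axes constant keeps the rank static.

```cpp
#include <memory>

#include <openvino/op/constant.hpp>
#include <openvino/op/parameter.hpp>
#include <openvino/op/squeeze.hpp>

// With dynamic input dimensions, an axis-less Squeeze cannot know how many
// dimensions will be removed, so its output rank becomes dynamic. Passing an
// explicit axes constant keeps the rank static.
int main() {
    auto data = std::make_shared<ov::op::v0::Parameter>(
        ov::element::f32, ov::PartialShape{1, ov::Dimension::dynamic(), 64});

    auto axes = ov::op::v0::Constant::create(ov::element::i64, ov::Shape{1}, {0});
    auto squeeze = std::make_shared<ov::op::v0::Squeeze>(data, axes);

    // Output rank is static (2) here; without `axes`, it would be dynamic.
    return squeeze->get_output_partial_shape(0).rank().is_static() ? 0 : 1;
}
```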