* Update template plugin main documentation pages
* Update plugin documentation
* Add more documentation for method
* Register new doxygen groups
* Updated group
* Added ie group
* Fixed comments
* Reuse new implementation inside the old one
* Try to fix titles
* Fix class fields level
* [GPU] Enabled ComparisonLayerTest in single layer tests.
It seems these tests were previously disabled because of some failures. I cannot see any errors now, so I enabled all of them.
* [GPU] Run clang format for comparison single layer tests.
* [GPU] Added handling of f16 type to IsInfLayerTest.
* [GPU] Added single-layer tests for IsFinite and IsNaN operations.
* [GPU] Added single-layer test for IsInf operation.
* [GPU] Implemented IsFinite, IsInf, and IsNaN operations as activation functions.
Note, however, that the activation kernel currently supports only an output data type identical to the input data type, so an additional reorder is needed to convert these ops to the correct output data type. Also worth noting: activation functions are fused into the reorder kernel, but this fusion does not work for these ops yet, because the reorder activation call performs a hard conversion of the input data to the output data type before the activation. It is unclear why that conversion was added, but it breaks the fusion, so the activation fusion either needs to be fixed or disabled for these ops.
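For reference, a minimal host-side sketch of the element-wise semantics these ops compute, assuming f32 input and u8 (boolean) output; the detect_positive/detect_negative handling for IsInf follows the op specification, and the helper names below are illustrative, not the plugin's kernel code:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// IsFinite: 1 where the value is neither Inf nor NaN.
std::vector<uint8_t> is_finite(const std::vector<float>& in) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = std::isfinite(in[i]) ? 1 : 0;
    return out;
}

// IsInf: 1 where the value is +/-Inf, filtered by the detect_* attributes.
std::vector<uint8_t> is_inf(const std::vector<float>& in,
                            bool detect_positive = true,
                            bool detect_negative = true) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        const bool pos = std::isinf(in[i]) && in[i] > 0.0f && detect_positive;
        const bool neg = std::isinf(in[i]) && in[i] < 0.0f && detect_negative;
        out[i] = (pos || neg) ? 1 : 0;
    }
    return out;
}

// IsNaN: 1 where the value is NaN.
std::vector<uint8_t> is_nan(const std::vector<float>& in) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = std::isnan(in[i]) ? 1 : 0;
    return out;
}
```

The boolean (u8) output type differing from the f32/f16 input type is exactly why the extra reorder mentioned above is required.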
* Revert "[GPU] Implemented IsFinite, IsInf, and IsNaN operations as activation functions."
This reverts commit 3f9ffe617ecddce6dbbcdeab9584a7ddeb6d1845.
* [GPU] Implemented IsFinite, IsInf, and IsNaN operations as eltwise op.
* [GPU] Changed CLDNN_ERROR_MESSAGE to OPENVINO_ASSERT in check_inputs_count method.
* [GPU] Minor fix for dynamic bert-base-uncased-qqp
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Fix to check the full tensor only for static shapes when creating oneDNN gemm
Signed-off-by: Andrew Park <andrew.park@intel.com>
---------
Signed-off-by: Andrew Park <andrew.park@intel.com>
- Previously, PR15386 changed the memory allocation of primitives used as shape-infer dependencies to host memory, for better shape-inference performance.
- However, this causes a cache coherence issue on dGPU.
- Reverting this change so that the memory is allocated on the device.
* [TF FE] Support EmptyTensorList and TensorListPushBack operations
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Rename a script to generate the test model
* Correct the test model generation script
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* flush fp32 subnormals to zero in IR
* style fix in test_offline_api.py
* simplified the call of FlushFP32SubnormalsToZero: it is now called from offline_transformations.cpp
* reverted offline_transformations.py
* use fpclassify (see the sketch after this block)
* style-fix
* Update src/common/transformations/tests/common_optimizations/flush_fp32_subnormals_to_zero_test.cpp
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
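A minimal sketch of what the subnormal-flushing transformation does per f32 constant buffer, using std::fpclassify as in the commits above (the function name and in-place traversal are illustrative, not the actual pass code):

```cpp
#include <cmath>
#include <cstddef>

// Replace every subnormal (denormal) f32 value with a signed zero.
void flush_fp32_subnormals_to_zero(float* data, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        if (std::fpclassify(data[i]) == FP_SUBNORMAL) {
            // Keep the sign so -tiny becomes -0.0f rather than +0.0f.
            data[i] = std::signbit(data[i]) ? -0.0f : 0.0f;
        }
    }
}
```

Flushing subnormals in the IR avoids slow denormal arithmetic at inference time while changing the stored weights only negligibly.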
* initial version of implementation
* styles applied
* fixed And registration
* add more unit tests
* fixed And in legacy opset
* addressed review remarks
* refactor of version name range
* [dGPU] Enable stable diffusion
+ Prevent fusing swish into oneDNN reorder.
+ Make concat explicit if the batch size is greater than 1 and the siblings are oneDNN implementations.
* Small CoreImpl refactoring
* Removed cache_dir handling from CPU plugin
* clang-format
* Fixed python tests
* Fix
* Fixed bugs in HETERO case
* Fixed clang-format and warnings in auto plugin
* Added import_export as capability for TEMPLATE plugin
* Commented out the exception throw from loaded_from_cache
* Fixed clang-format for template plugin
This is a corner case because body graph nodes have named output ports.
This allows supporting a custom RetinaNet model.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* remove ov::device::thermal
ov::device::thermal was only supported on myriad
* additional cleanup
* remove myriad from AUTO, MULTI, and HETERO
+ remove mentions of listing myriad devices
* two final fixes
* Update ov_auto.py
---------
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>