* Fix "C++ exception with description write lock_type thrown in the test body"
Use get_output_values_to_float()
* fusings_gpu/gemm_2in_act_scale_quantize_eltwise_i8.basic/2
* fusings_gpu/gemm_2in_act_scale_eltwise.basic/2
* Remove WA test code of [GPU][DG2] Fix fusings_gpu/gemm_2in_scale.basic/7 #15353
* Non-full-tensor post-ops are now broadcast
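The broadcasting of a non-full-tensor post-op onto a full tensor can be illustrated with NumPy semantics (an analogy only, not the actual GPU plugin or oneDNN post-op code):

```python
import numpy as np

# A full tensor of shape (N, C, H, W) combined with a per-channel post-op
# operand of shape (1, C, 1, 1): the smaller operand is broadcast across
# the remaining dimensions before being applied element-wise.
x = np.ones((2, 3, 4, 4), dtype=np.float32)
scale = np.arange(1, 4, dtype=np.float32).reshape(1, 3, 1, 1)
y = x * scale  # broadcasts scale to (2, 3, 4, 4)
```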
* Added some new tensor API
* Added tests on constructors
* Small changes
* Fixed tensor tests
* Fixed tests
* Added parametrized tests
* Extend tests and delete copy_to from remote tensor
* [GNA] Create ngraph implementation for relu_torch_pot model for further tests. Create legacy pass fusing FC-Eltwise-Const layers pattern into single FC layer with biases
* [GNA] Fix review comments, applied proper code style to changed code
* Add test for negative axes; preliminary solution to fix incorrect results
* Normalize axes in operation NormalizeL2
* Add test for negative axes
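Axis normalization, as the NormalizeL2 fix above applies it, maps each possibly-negative axis into [0, rank) before the reduction. A NumPy sketch (not the operation's reference implementation; the eps handling shown is an assumption):

```python
import numpy as np

def normalize_axes(axes, rank):
    """Map possibly-negative axes into the range [0, rank)."""
    normalized = []
    for axis in axes:
        if axis < -rank or axis >= rank:
            raise ValueError(f"axis {axis} is out of range for rank {rank}")
        normalized.append(axis + rank if axis < 0 else axis)
    return normalized

def normalize_l2(data, axes, eps=1e-9):
    """L2-normalize `data` over `axes`, accepting negative axis indices."""
    axes = tuple(normalize_axes(axes, data.ndim))
    norm = np.sqrt(np.sum(data * data, axis=axes, keepdims=True))
    return data / np.maximum(norm, eps)
```

With this normalization, axis -1 on a rank-3 tensor reduces over axis 2, matching the positive-axis behavior.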
* Add EOF
* Update ov::hint::performance_hint UNDEFINED value from empty string to "UNDEFINED".
* Update benchmark Python version.
* Update.
* Update.
* Update.
* Update the description of the hint setting in the benchmark app README and help message.
* Fix remote blob creation to use original shape
* Revert "Fix remote blob creation to use original shape"
This reverts commit 35c674aa97.
* Fix cldnn tensor: reinterpret the adjusted blob with the actual input layout
* gpu model caching unit tests
* added serialization unit tests
* added save and load for quantize primitive_inst
* reduced the range of inputs for Gemm tests
* updated the copyright year
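Save/load support for a primitive such as quantize typically round-trips its parameters through a binary stream, which is what the serialization unit tests above exercise. A minimal sketch of the pattern (the field set and class name here are hypothetical, not the actual primitive_inst layout):

```python
import io
import struct

class QuantizeParams:
    """Hypothetical subset of fields a quantize primitive might persist."""

    def __init__(self, levels, in_lo, in_hi):
        self.levels = levels
        self.in_lo = in_lo
        self.in_hi = in_hi

    def save(self, stream):
        # Fixed little-endian layout: int32 levels, two float32 bounds.
        stream.write(struct.pack("<iff", self.levels, self.in_lo, self.in_hi))

    @classmethod
    def load(cls, stream):
        levels, in_lo, in_hi = struct.unpack("<iff", stream.read(12))
        return cls(levels, in_lo, in_hi)
```

A cache test then asserts that save followed by load reproduces the original parameters exactly.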
* [Common][FE] Implement reverse infer for Transpose
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp
* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp
* Update src/common/transformations/src/transformations/common_optimizations/reverse_shape_and_type_infer.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
* Add one more test with constant order and known output
* Fix reverse infer for the case of known order and output shape
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
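Reverse shape inference for Transpose with a constant order, as the commits above implement, amounts to scattering the known output dimensions back through the permutation: since output[i] == input[order[i]], each output dimension determines one input dimension. A minimal sketch with hypothetical names:

```python
def transpose_reverse_infer(output_shape, order):
    """Recover the input shape of a Transpose from its output shape
    and a constant permutation order."""
    if len(output_shape) != len(order):
        raise ValueError("order and output shape ranks must match")
    input_shape = [None] * len(order)
    for i, axis in enumerate(order):
        # Forward rule: output_shape[i] = input_shape[order[i]]
        input_shape[axis] = output_shape[i]
    return input_shape
```

For example, an output of shape [2, 3, 4] produced with order [2, 0, 1] implies an input of shape [3, 4, 2].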
* enable --compress_to_fp16 by default in MO
* corrected docs, added a warning if the user didn't specify --compress_to_fp16 explicitly
* fix failing MO unit-tests
* do not wipe out data_type if user defined it explicitly by cli argument
* updated warning message and docs
* corrected phrasing
* corrected phrasing in FP16_Compression.md
* set compress_to_fp16=False for convert tests
* leftover: set compress_to_fp16=False for convert tests
* minor correction
* print info message in main.py, some minor changes
* typos fix
* fix loss of information about whether arguments were set by the user or came from defaults
* returned back default values instead of None
* more selective corrections in test_mo_convert_pytorch.py; added tests for cases when compression is enabled, disabled, or left at the default
* fix test_mo_convert_pytorch.py
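A common way to preserve whether a CLI argument was set by the user or came from a default, which the fixes above require for the --compress_to_fp16 warning, is a sentinel default value. A sketch only; MO's actual argument handling differs, and the helper names here are hypothetical:

```python
import argparse

_SENTINEL = object()  # marks "not provided on the command line"

def build_parser():
    parser = argparse.ArgumentParser()
    # `type` only runs when the user passes a value, so the sentinel
    # default survives untouched when the flag is absent.
    parser.add_argument("--compress_to_fp16", default=_SENTINEL,
                        type=lambda s: s.lower() in ("true", "1"))
    return parser

def resolve_compress_to_fp16(args):
    """Return (effective_value, user_set). Defaults to True when unset."""
    user_set = args.compress_to_fp16 is not _SENTINEL
    value = args.compress_to_fp16 if user_set else True
    return value, user_set
```

The caller can then emit an informational message only when `user_set` is False, without wiping out a value the user chose explicitly.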
* optimize TensorIterator DynamicBuffer by preallocating a large chunk of intermediate buffer.
code clean.
review update: always copy in transfer, as avoiding it is not worthwhile.
review update: store mem_holder_buffer as dnnl::memory instead of a shared_ptr to it.
review update: reuse mem_holder_buffer even if the shape changes.
review update: apply a growth factor.
review update: bug fix.
* fix code style
* review update: rewrite the dynamic buffer using the cpu Memory class, instead of dnnl::memory
* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
* review update: minor fix
---------
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
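The preallocation strategy from the DynamicBuffer commits above can be sketched as follows. This is a simplified illustration: the actual change is C++ built on the CPU plugin's Memory class, and the growth factor shown is an assumption, not the value used in the plugin:

```python
class DynamicBuffer:
    """Grow-only scratch buffer: reuse the allocation when the requested
    size fits, otherwise grow by at least a constant factor to amortize
    reallocation cost across iterations."""

    GROWTH_FACTOR = 1.5  # hypothetical; the plugin's factor may differ

    def __init__(self):
        self._capacity = 0
        self._buf = bytearray()

    def ensure(self, nbytes):
        if nbytes <= self._capacity:
            # Reuse the existing allocation even if the shape changed.
            return
        new_cap = max(nbytes, int(self._capacity * self.GROWTH_FACTOR))
        self._buf = bytearray(new_cap)
        self._capacity = new_cap
```

With geometric growth, a loop whose per-iteration output grows slowly triggers only O(log n) reallocations instead of one per iteration.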
* Use new evaluate method in template plugin
* Add tensor at the end of each iteration
* Remove class TemporaryOverrideOutputs
* Set shape of tensor after evaluate
* Revert "Remove class TemporaryOverrideOutputs"
This reverts commit e345ba9188.
* Update tensors when evaluate passed
* Copy data Tensor when HostTensor was initialized
* Set shape to output tensor in TemporaryOverrideOutputs
* Fix code style
* Add test
* Remove unused code
* Create reshape with scalar when shape is empty
* Reshape, special_zero = true
* Revert "Create reshape with scalar when shape is empty"
This reverts commit 0f901f419a.
* Use Shape with size zero and value max_int for dynamic tensors
* Restore Shape{0} for dynamic tensors
* Revert "Restore Shape{0} for dynamic tensors"
This reverts commit cb2d0e58eb.
* Temporary remove the test
* Use shape{0} for dynamic tensors
* Revert "Use shape{0} for dynamic tensors"
This reverts commit 08460a486b.
* Use Shape{0} for dynamic tensors
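The Shape{0} workaround the commits above converge on can be sketched as a sentinel check: one reserved shape value stands in for "shape not yet known". Hypothetical helper names; the real utilities are C++ and, per the later commits, live in the dev API:

```python
# Hypothetical sketch: Shape{0} (a rank-1 shape holding a single zero
# dimension) marks a tensor whose real shape is still dynamic. This is a
# workaround marker, so genuinely empty rank-1 tensors must be kept out
# of the paths that test for it.
DYNAMIC_MARKER = (0,)

def make_dynamic_shape():
    """Produce the special-case shape used to represent a dynamic shape."""
    return DYNAMIC_MARKER

def is_dynamic_shape(shape):
    """Check whether a shape is the dynamic-shape sentinel."""
    return tuple(shape) == DYNAMIC_MARKER
```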
* Use new evaluate in template plugin
- Add tensor conversion between ov::Tensor <-> HostTensor
- Add shape utils to create a special-case shape that marks a dynamic shape
- Utils are in the dev API to avoid duplicates
* Move WA for set shape into the ov::tensor.
* Remove dynamic shape from or_tensor helper
* Mark tensor conversion utils as deprecated
- move shape util to core internal only
- update transpose test to not use deprecated functions
* Add missing deprecate suppression macro
---------
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>