* Update ov::hint::performance_hint UNDEFINED value from empty string to "UNDEFINED".
* Update benchmark Python version.
* Update.
* Update the description of hint setting in the benchmark app README and help message.
* Fix remote blob creation to use original shape
* Revert "Fix remote blob creation to use original shape"
This reverts commit 35c674aa97.
* Fix cldnn tensor adjusted blob to be reinterpreted with actual input layout
* gpu model caching unit tests
* added serialization unit tests
* added save and load for quantize primitive_inst
* reduced the range of inputs for Gemm tests
* updated the copyright year
* [Common][FE] Implement reverse infer for Transpose
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp
* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp
* Update src/common/transformations/src/transformations/common_optimizations/reverse_shape_and_type_infer.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
* Add one more test with constant order and known output
* Fix reverse infer for the case of known order and output shape
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
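The reverse-infer idea above can be sketched as follows. This is an illustrative Python model (names hypothetical), not the actual C++ transformation: when the transpose order and the output shape are both known, the input shape is recovered by applying the inverse permutation.

```python
# Sketch of reverse shape inference for Transpose (illustrative only).
# Forward: output_shape[i] = input_shape[order[i]], so the input shape is
# recovered by inverting the permutation and indexing the output shape.
def reverse_infer_transpose(output_shape, order):
    inverse = [0] * len(order)
    for i, axis in enumerate(order):
        inverse[axis] = i  # position where `axis` appears in `order`
    return [output_shape[inverse[axis]] for axis in range(len(order))]
```

For example, with order `[0, 2, 3, 1]` (NCHW -> NHWC) and output shape `[1, 224, 224, 3]`, the recovered input shape is `[1, 3, 224, 224]`.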
* enable --compress_to_fp16 by default in MO
* corrected docs, added warning if user didn't specify --compress_to_fp16 explicitly
* fix failing MO unit-tests
* do not wipe out data_type if user defined it explicitly by cli argument
* updated warning message and docs
* corrected phrasing
* corrected phrasing in FP16_Compression.md
* set compress_to_fp16=False for convert tests
* leftover: set compress_to_fp16=False for convert tests
* minor correction
* print info message in main.py, some minor changes
* fix typos
* fix losing information about whether arguments were set by the user or came from defaults
* restored default values instead of None
* more selective corrections of test_mo_convert_pytorch.py; added tests for cases when compression is enabled/disabled or left at the default
* fix test_mo_convert_pytorch.py
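The "arguments set by user vs. got from defaults" fix above boils down to a sentinel-default pattern. A minimal sketch, with hypothetical names (not the actual MO code): a unique sentinel lets the tool tell an explicit `compress_to_fp16=True` apart from the new implicit default, so it can warn only when the user did not specify the flag.

```python
# Illustrative sentinel-default pattern: distinguish "user passed the flag"
# from "value came from defaults" while still defaulting to enabled.
_UNSET = object()  # unique sentinel, never equal to a real user value

def resolve_compress_to_fp16(value=_UNSET):
    user_specified = value is not _UNSET
    resolved = True if not user_specified else value  # enabled by default
    return resolved, user_specified
```

With this shape, downstream code can emit the informational message only when `user_specified` is false, without wiping out an explicitly provided value.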
* optimize TensorIterator DynamicBuffer by preallocating a large chunk of the intermediate buffer.
code cleanup.
review update: always copy in transfer, as avoiding the copy is not worthwhile.
review update: store mem_holder_buffer as a dnnl::memory instead of a shared_ptr to it.
review update: reuse mem_buffer_holder even if the shape changes.
review update: growth factor.
review update: bug fix.
* fix code style
* review update: rewrite the dynamic buffer using the cpu Memory class, instead of dnnl::memory
* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
* review update: minor fix
---------
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
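The preallocation-with-growth-factor idea from the DynamicBuffer commits above can be sketched as follows. This is a hedged illustration of the general technique (the real code operates on CPU plugin `Memory` objects, not Python bytearrays): grow capacity geometrically so the buffer is reused across iterations instead of being reallocated every time the shape grows.

```python
# Minimal sketch of geometric buffer growth (names hypothetical).
class DynamicBuffer:
    GROWTH_FACTOR = 2  # grow capacity geometrically to amortize reallocation

    def __init__(self):
        self.capacity = 0
        self.data = bytearray()

    def reserve(self, required):
        if required <= self.capacity:
            return  # reuse the existing chunk, even if the shape changed
        self.capacity = max(required, self.capacity * self.GROWTH_FACTOR)
        new_data = bytearray(self.capacity)
        new_data[: len(self.data)] = self.data  # preserve existing contents
        self.data = new_data
```

Geometric growth keeps the total copy cost linear in the final size, which is why a growth factor was preferred over resizing to the exact requirement on every iteration.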
* Use new evaluate method in template plugin
* Add tensor at the end of each iteration
* Remove class TemporaryOverrideOutputs
* Set shape of tensor after evaluate
* Revert "Remove class TemporaryOverrideOutputs"
This reverts commit e345ba9188.
* Update tensors when evaluate passed
* Copy data Tensor when HostTensor was initialized
* Set shape to output tensor in TemporaryOverrideOutputs
* Fix code style
* Add test
* Remove unused code
* Create reshape with scalar when shape is empty
* Reshape, special_zero = true
* Revert "Create reshape with scalar when shape is empty"
This reverts commit 0f901f419a.
* Use Shape with size zero and value max_int for dynamic tensors
* Restore Shape{0} for dynamic tensors
* Revert "Restore Shape{0} for dynamic tensors"
This reverts commit cb2d0e58eb.
* Temporary remove the test
* Use shape{0} for dynamic tensors
* Revert "Use shape{0} for dynamic tensors"
This reverts commit 08460a486b.
* Use Shape{0} for dynamic tensors
* Use new evaluate in template plugin
- Add tensor conversion between ov::Tensor <-> HostTensor
- Add shape utils to create special case shape to be dynamic shape
- Utils are in dev API to remove duplicates
* Move WA for set shape into the ov::tensor.
* Remove dynamic shape from or_tensor helper
* Mark tensor conversion utils as deprecated
- move shape util as core internal only
- update transpose test to not use deprecated functions
* Add missing deprecate suppression macro
---------
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
* Add CC support for ir reader
Change-Id: I3e1c02222800be090a4307bff8c231ad28b23ff7
* Fix clang issue
Change-Id: Idaf7bc5632bd558cfb7b0ecd8891435e5ba5c6ca
It turned out that NormalizeL2 is absent from the tf.raw_ops API
and is always represented in decomposed form.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Adds base class and first test for tflite_layer tests
* adds layer tests for unary ops
* adds functionality to get tensors from ops
* 1. adds functionality to use custom funcs for input generation
2. removed UNIQUE op from testing ops
* adds functionality to use custom dtypes
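The layer-test commits above describe a base class with pluggable input generation and dtypes. A rough sketch of that structure, with hypothetical names (the real base class lives in the layer-test framework and drives TFLite model building and comparison):

```python
# Hypothetical sketch of a TFLite layer-test base class: subclasses can
# override the dtype or the input-generation function per operation.
import numpy as np

class TFLiteLayerTest:
    input_dtype = np.float32  # subclasses may override for custom dtypes

    def generate_inputs(self, shape):
        # default generator; ops with special ranges can override this
        rng = np.random.default_rng(0)
        return rng.uniform(-1.0, 1.0, shape).astype(self.input_dtype)
```

A unary-op test would then only need to declare the op and, where necessary, swap in a custom generator (for example, strictly positive inputs for LOG).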
* Cast operation support
* Enhanced tfl layer tests
* Cast operation support
* Transpose Sinking: fix dummy case
* Supported 3 more ops: L2_NORMALIZATION, ARG_MAX, ARG_MIN
* Support scalar shapes
* Supported 1 more op: TRANSPOSE_CONV
* Supported 2 more ops: COMPLEX_ABS, RFFT2D (in combination)
* (DE)QUANTIZE as Identity. Questionable
* Trigger tfl layer tests in .ci
* Apply suggestions from code review
* empty constant support
* Commit as-is. Debug prints inside
* Not ready yet
* Style
* Comments resolved
* Style
* Dynamic shape support
* Style
---------
Co-authored-by: rnugmano <ruslan.nugmanov@intel.com>
Co-authored-by: missjane <estepyreva@gmail.com>