* [TF FE] Support MaxPoolWithArgmax operation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add ticket number for TS crash
* Correct error message
* Skip crashing tests
* Set additional tensor name for MaxPool
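The MaxPoolWithArgmax operation returns both the pooled maxima and the flattened input index of each maximum. A minimal pure-Python sketch of those semantics (simplified: single channel, VALID padding, stride equal to the window size; the real TF op also folds channels and optionally batch into the index):

```python
# Hedged sketch of MaxPoolWithArgmax-style semantics on a 2D,
# single-channel input. argmax is reported as a flattened index
# (y * width + x) into the input plane, as in TF; padding and
# strides are simplified for illustration.
def max_pool_with_argmax(x, k):
    h, w = len(x), len(x[0])
    out, idx = [], []
    for oy in range(0, h - k + 1, k):
        row_v, row_i = [], []
        for ox in range(0, w - k + 1, k):
            best_v, best_i = None, None
            for dy in range(k):
                for dx in range(k):
                    y, xx = oy + dy, ox + dx
                    if best_v is None or x[y][xx] > best_v:
                        best_v, best_i = x[y][xx], y * w + xx
            row_v.append(best_v)
            row_i.append(best_i)
        out.append(row_v)
        idx.append(row_i)
    return out, idx
```

For example, pooling `[[1, 2], [3, 4]]` with a 2x2 window yields the value `4` and flattened index `3`.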
* [HETERO] Add ConstantFolding in compile model to avoid unexpected dynamism after model split. Add a new property that reports the number of subgraphs
* Remove check for dynamic subgraph
* Removed legacy API from core_impl
* Revert old extension in old API
* Fixed unit tests
* Wrap new extensions in old API
* Wrap extensions in all legacy API
* Fixed legacy exceptions
* Fixed ONNX tests
* Try to fix LTO
The decompression attribute (present in models with FP16 precision)
prevents the weights from being constant-folded. Constant folding of the
weights is required by CompressQuantizeWeights to compress them to a
low-precision format.
Ticket: CVS-117310
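The two steps the note describes can be sketched in pure Python: first fold the decompression Convert into materialized FP32 constants, then compress those constants to int8 with a scale and zero point. This is an illustrative sketch of the idea, not the OpenVINO implementation (FP16 storage is simulated with plain floats):

```python
# Hedged sketch: constant-fold a decompression Convert, then compress
# the folded weights to int8. Names and the affine int8 scheme are
# illustrative, not OpenVINO's actual CompressQuantizeWeights code.
def fold_convert(weights_fp16):
    # constant folding: materialize the FP32 values up front
    return [float(w) for w in weights_fp16]

def compress_to_int8(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0
    zero_point = round(-lo / scale)
    return [round(w / scale) + zero_point for w in weights], scale, zero_point

def decompress(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]
```

For `[-1.0, 0.0, 2.0]` this round-trips exactly through the int8 values `[0, 85, 255]`.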
* Shared ShapeOf transformation via SharedOpOptimization. Allow EliminateGatherUnsqueeze pattern to have intermediate binary operation. Changes covered with transformation tests
* Move Reshape up through binary op to be optimized by NopElimination transformations
* Optimize shared Convert operations
* SharedOpOptimization: give higher priority for nodes of later versions to become root nodes
* Deleted old transformations
* Added revalidation for modified nodes
* Added binary op revalidation to the EliminateGatherUnsqueeze in order to have correct shapes for other matchers in the same graph rewrite
* Made PrepareShapeOpsForEliminationAroundBE independent of the upper node
* Introduces GroupedSliceToVSplitOptimization optimization
* Preserve output names during GroupedSliceToVSplitOptimization
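The idea behind GroupedSliceToVSplitOptimization is that a group of adjacent slices over the same axis produces exactly what a single variadic split would, so N Slice nodes can collapse into one VariadicSplit node. A minimal sketch of that equivalence (illustrative, on a plain list rather than graph nodes):

```python
# Hedged illustration: adjacent slices over one axis are equivalent
# to a single variadic split, which is what lets one VariadicSplit
# replace a group of Slice ops.
def variadic_split(seq, split_lengths):
    out, start = [], 0
    for n in split_lengths:
        out.append(seq[start:start + n])
        start += n
    return out

data = list(range(10))
# three adjacent slices...
grouped_slices = [data[0:3], data[3:7], data[7:10]]
# ...match one variadic split with lengths [3, 4, 3]
assert variadic_split(data, [3, 4, 3]) == grouped_slices
```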
* Revert "Made PrepareShapeOpsForEliminationAroundBE independable of the upper node"
This reverts commit 96785b24c9.
* Addressed review comments
* SharedOpOptimization: removes Base classes from the rules
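SharedOpOptimization deduplicates identical operations: when several nodes have the same type and the same inputs (e.g. multiple ShapeOf nodes reading one tensor), a single node is kept and its duplicates' consumers are rewired to it. A hedged hash-consing sketch of that idea over a flat node list (data layout and names are illustrative, not the transformation's real API):

```python
# Hedged sketch of shared-op elimination: nodes with the same op type
# and the same (already-canonicalized) inputs merge into one node;
# later consumers are rewired to the surviving node.
def dedupe_ops(nodes):
    # nodes: list of (name, op_type, input_names), in topological order
    seen = {}       # (op_type, inputs) -> canonical node name
    rewire = {}     # removed name -> canonical name
    kept = []
    for name, op, inputs in nodes:
        inputs = tuple(rewire.get(i, i) for i in inputs)
        key = (op, inputs)
        if key in seen:
            rewire[name] = seen[key]   # duplicate: drop and redirect
        else:
            seen[key] = name
            kept.append((name, op, inputs))
    return kept, rewire
```

Two ShapeOf nodes on the same parameter collapse to one, and a downstream Gather is rewired to the survivor.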
* Advanced `mmap` test for IR FE
* Move memory functions to `CommonTestUtils`
* CppLint + ClangFormat
* Refactor IR FE `mmap` tests
1) Remove the `compile_model` stage as it is not required, which also
removes the "CPU" dependency
2) Run test in separate process to make RAM values more stable
3) Update RAM reference as "binsize / 2"
* Skip test on Apple platform
* Remove `getRssFileInKB()` as unused
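Point 2 above, running the test in a separate process, makes the RAM reading stable because allocations made by the test runner itself no longer contribute to the measured value. A hedged sketch of that pattern (assumes a Unix system; on Linux `ru_maxrss` is reported in KiB):

```python
# Hedged sketch: measure peak RSS of a workload in a child process so
# the parent test runner's own allocations don't skew the reading.
# Assumes Unix; resource.ru_maxrss is KiB on Linux (bytes on macOS).
import multiprocessing
import resource

def _measure(target, queue):
    target()
    queue.put(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

def peak_rss_kb(target):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_measure, args=(target, queue))
    proc.start()
    rss = queue.get()
    proc.join()
    return rss

def workload():
    buf = bytearray(50 * 1024 * 1024)  # ~50 MiB allocation
    buf[-1] = 1                        # touch it so pages are resident
```

`peak_rss_kb(workload)` should report at least ~50 MiB plus interpreter overhead, regardless of how much memory the parent process has already used.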
+ Added is_padded_spatial to program_node
+ Added reorder to remove padded input in spatial axis for mvn
+ The case applies only to blocked formats supported by the optimized MVN kernel
Signed-off-by: Min, Byungil <byungil.min@intel.com>
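The reorder above matters because MVN computes mean and variance over the spatial data: if padded values remain in the buffer, they leak into the statistics. A pure-Python illustration of the effect (simplified to a 1D value list; not the clDNN kernel):

```python
# Hedged illustration: pad values left in the buffer distort MVN's
# mean/variance; stripping the padding first (what the added reorder
# does) restores the correct statistics.
def mvn_stats(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

data = [2.0, 4.0, 6.0, 8.0]
padded = data + [0.0, 0.0]          # spatial padding left in the buffer

assert mvn_stats(data) == (5.0, 5.0)
assert mvn_stats(padded) != mvn_stats(data)
```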
* updated to enqueue only fc for async build
* updated use_async_compilation(), make_task_executor_config() and disabled gemm_onednn.impl_replacement_with_cldnn
* added _num_async_build_threads
* added gemm to the async compilation targets
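The pattern described by these commits, building only selected kernel types (FC, then also gemm) asynchronously on a small dedicated thread pool, can be sketched as follows. Names like `ASYNC_TARGETS` and `build_kernel` are illustrative stand-ins, not the GPU plugin's real API:

```python
# Hedged sketch: compile only whitelisted kernel types asynchronously
# on a pool sized by _num_async_build_threads; everything else builds
# synchronously in submission order.
from concurrent.futures import ThreadPoolExecutor

ASYNC_TARGETS = {"fully_connected", "gemm"}   # fc first, gemm added later
_num_async_build_threads = 2

def build_kernel(kind):
    return f"{kind}-binary"   # stand-in for a real kernel compilation

def compile_kernels(kinds):
    results = {}
    with ThreadPoolExecutor(max_workers=_num_async_build_threads) as pool:
        futures = {k: pool.submit(build_kernel, k)
                   for k in kinds if k in ASYNC_TARGETS}
        # non-target kernels are built synchronously, in order
        for k in kinds:
            if k not in futures:
                results[k] = build_kernel(k)
        for k, f in futures.items():
            results[k] = f.result()
    return results
```

Limiting the async path to a whitelist keeps the thread pool from being flooded by cheap kernels while the expensive FC/gemm builds overlap with other work.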