* Change fused_names algo -> cut subgraphs
* Added extractor name to serialization dir + meta_info
* Uncomment log
* Add this_op_cnt to model_info, fix in_info for model
* Replace cloned node a second time to replace the input node
* Fix small problem
* Small fixes
* Switch off repeat extractor
* remove model serialization
* fused_names
* Change default device in fused_names extractor
* fused_names
* Small speed-up
* Move replacement of const by param to cache (sketched after this list)
* Move alignment of in_info to extractorManager
* Sort models by size (check memory fragmentation)
* Fix problem with opset12
* Update manager.cpp
* Serialize cache in case of long
* Add test
* Update graph_cache.cpp
* Update graph_cache.cpp
* Graph cache size
* Test other approach
* Remove extra
* Fix issue with replace
* Try with 1 GB limitation
* To merge
* Revert
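A minimal sketch of the const-to-param replacement mentioned above, assuming the public OpenVINO C++ graph API; `replace_const_with_param` is an illustrative name, not the actual cache helper:

```cpp
#include <openvino/core/graph_util.hpp>
#include <openvino/op/constant.hpp>
#include <openvino/op/parameter.hpp>

// Swap a weight Constant for a Parameter of the same type/shape so the cached
// subgraph stays small and the value can be fed as an input instead of being
// stored in the serialized IR.
std::shared_ptr<ov::op::v0::Parameter>
replace_const_with_param(const std::shared_ptr<ov::op::v0::Constant>& constant) {
    auto param = std::make_shared<ov::op::v0::Parameter>(constant->get_element_type(),
                                                         constant->get_shape());
    param->set_friendly_name(constant->get_friendly_name());
    ov::replace_node(constant, param);
    // Note: the new Parameter still has to be registered in the model,
    // e.g. via ov::Model::add_parameters({param}).
    return param;
}
```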
* Change `VPUX`/`VPU` occurrences to `NPU`
* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU
* Rename `MYRIAD plugin`
* Rename vpu_patterns to npu_patterns in tools/pot
* Rename vpu.json to npu.json in tools/pot
* Rename restrict_for_vpu to restrict_for_npu in tools/pot
* Change keembayOptimalBatchNum to npuOptimalBatchNum
---------
Co-authored-by: Dan <mircea-aurelian.dan@intel.com>
* Fix issue with kwargs in signature
* Update src/bindings/python/src/openvino/frontend/pytorch/ts_decoder.py
* Fix problem with some ops in detectron2
* Use debug name for extra input signature
* [TF FE] Support MaxPoolWithArgmax operation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add ticket number for TS crash
* Correct error message
* Skip crashing tests
* Set additional tensor name for MaxPool
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* [HETERO] Add ConstantFolding in compile_model to avoid unexpected dynamism after model split. Add a new property that reports the number of subgraphs (see the sketch below)
* Remove check for dynamic subgraph
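A minimal sketch of applying ConstantFolding to a model via the public `ov::pass` API; where exactly this runs inside the HETERO compile_model pipeline may differ:

```cpp
#include <openvino/core/model.hpp>
#include <openvino/pass/constant_folding.hpp>
#include <openvino/pass/manager.hpp>

// Fold statically computable subexpressions before the model is split,
// so subgraph boundaries do not expose unexpected dynamic shapes.
void fold_constants(const std::shared_ptr<ov::Model>& model) {
    ov::pass::Manager manager;
    manager.register_pass<ov::pass::ConstantFolding>();
    manager.run_passes(model);
}
```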
* Removed legacy API from core_impl
* Revert old extension in old API
* Fixed unit tests
* Wrap new extensions in old API
* Wrap extensions in all legacy API
* Fixed legacy exceptions
* Fixed ONNX tests
* Try to fix LTO
Decompression attribute (present in models with FP16 precision)
prevents the weights from being constant-folded. Weights constant folding is
required by CompressQuantizeWeights to compress the weights to a
low-precision format.
Ticket: CVS-117310
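A hedged sketch of the idea, assuming the `transformations/rt_info/decompression.hpp` helpers; the actual pass wiring in the PR may differ:

```cpp
#include <openvino/core/model.hpp>
#include <openvino/op/convert.hpp>
#include "transformations/rt_info/decompression.hpp"

// Strip the Decompression attribute from Convert nodes so ConstantFolding is
// allowed to fold the FP16 weights, which CompressQuantizeWeights requires.
void unmark_decompression_converts(const std::shared_ptr<ov::Model>& model) {
    for (const auto& node : model->get_ops()) {
        if (ov::as_type_ptr<ov::op::v0::Convert>(node) && ov::is_decompression(node))
            ov::unmark_as_decompression(node);
    }
}
```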
* Shared ShapeOf transformation via SharedOpOptimization. Allow the EliminateGatherUnsqueeze pattern to have an intermediate binary operation. Changes are covered with transformation tests (illustrated after this list)
* Move Reshape up through binary op to be optimized by NopElimination transformations
* Optimizes shared Convert
* SharedOpOptimization: give higher priority for nodes of later versions to become root nodes
* Deleted old transformations
* Added revalidation for modified nodes
* Added binary op revalidation to EliminateGatherUnsqueeze so that other matchers in the same graph rewrite see correct shapes
* Made PrepareShapeOpsForEliminationAroundBE independent of the upper node
* Introduces GroupedSliceToVSplitOptimization
* Preserve output names during GroupedSliceToVSplitOptimization
* Revert "Made PrepareShapeOpsForEliminationAroundBE independable of the upper node"
This reverts commit 96785b24c9.
* Comments are addressed
* SharedOpOptimization: removes Base classes from the rules
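For illustration, a model with the duplicated-ShapeOf pattern that SharedOpOptimization targets; after the pass a single ShapeOf is expected to feed both consumers (sketch only, not the pass itself):

```cpp
#include <openvino/core/model.hpp>
#include <openvino/op/add.hpp>
#include <openvino/op/parameter.hpp>
#include <openvino/op/shape_of.hpp>

// Two identical ShapeOf nodes read the same input; they are candidates for
// being shared (deduplicated) by SharedOpOptimization.
std::shared_ptr<ov::Model> make_duplicated_shapeof_model() {
    auto input = std::make_shared<ov::op::v0::Parameter>(ov::element::f32,
                                                         ov::PartialShape{-1, -1});
    auto shape_a = std::make_shared<ov::op::v3::ShapeOf>(input);
    auto shape_b = std::make_shared<ov::op::v3::ShapeOf>(input);
    auto sum = std::make_shared<ov::op::v1::Add>(shape_a, shape_b);
    return std::make_shared<ov::Model>(ov::OutputVector{sum}, ov::ParameterVector{input});
}
```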
* Advanced `mmap` test for IR FE
* Move memory functions to `CommonTestUtils`
* CppLint + ClangFormat
* Refactor IR FE `mmap` tests
1) Remove the `compile_model` stage as it is not required, which also
removes the "CPU" dependency
2) Run the test in a separate process to make RAM measurements more stable
3) Update the RAM reference to "binsize / 2"
(see the `mmap` sketch after this list)
* Skip test on Apple platform
* Remove `getRssFileInKB()` as unused
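A minimal sketch of the scenario the test covers, assuming the public `ov::enable_mmap` core property; paths are placeholders:

```cpp
#include <openvino/runtime/core.hpp>

int main() {
    ov::Core core;
    // Map the .bin weights instead of copying them into RAM; the refactored
    // test expects process RSS growth to stay below roughly binsize / 2.
    core.set_property(ov::enable_mmap(true));
    auto model = core.read_model("model.xml", "model.bin");
    return model ? 0 : 1;
}
```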
+ Added is_padded_spatial to program_node
+ Added a reorder to remove input padding on spatial axes for mvn
+ The case applies only to blocked formats handled by the optimized mvn kernel (sketched below)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
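A hedged, self-contained sketch of the rule described above; the struct and function names are illustrative, not the actual GPU-plugin API:

```cpp
// Hypothetical view of the relevant node state (not the GPU-plugin types).
struct NodeInfo {
    bool padded_spatial;   // some spatial axis of the input carries padding
    bool blocked_format;   // input uses a blocked layout
    bool opt_mvn_kernel;   // the optimized mvn kernel is selected
};

// MVN reduces over the whole spatial extent, so padded spatial axes in a
// blocked layout must be stripped by a reorder before the kernel runs.
bool needs_unpadded_reorder(const NodeInfo& n) {
    return n.padded_spatial && n.blocked_format && n.opt_mvn_kernel;
}
```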