* [GPU] Optimize out permute in the permute-gemm (oneDNN) pattern.
The permute can be optimized out when its input and output layouts are compatible and the gemm uses the oneDNN implementation (see the sketch below).
Signed-off-by: hyunback <hyunback.kim@intel.com>
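As an illustration only (not the actual GPU plugin code), a minimal sketch of the compatibility check, using a hypothetical helper permute_is_memory_noop: a permute whose order only relocates axes of extent 1 leaves the element order in memory unchanged, so the node can be dropped and the oneDNN gemm can read the producer's buffer directly.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper (illustrative, not the plugin code): a permute can be
// optimized out when applying its axis order does not change the element order
// in memory, i.e. every axis that actually moves has extent 1.
bool permute_is_memory_noop(const std::vector<size_t>& shape,
                            const std::vector<size_t>& order) {
    size_t expected = 0;  // next original axis a non-unit axis is allowed to map to
    for (size_t axis : order) {
        if (shape[axis] == 1)
            continue;  // unit axes may move freely
        // Skip unit axes in the original order; the relative order of the
        // remaining (non-unit) axes must be preserved.
        while (expected < shape.size() && expected != axis && shape[expected] == 1)
            ++expected;
        if (axis != expected)
            return false;
        ++expected;
    }
    return true;
}
```

In the real pattern the check would also have to confirm that the oneDNN gemm accepts the producer's layout; the sketch covers only the layout-equivalence half.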
* [DOCS] Change downloads directory link (#17846)
* installation link
* fix path
* change notebooks links (#17857)
* fix apt and yum links (#17877)
* [DOCS] Fix list and links to POT (#17887)
* change link to POT
* change header label
* fix typo
For some reason my MSVC gives the following error:
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.35.32215\include\utility(176,5): error C4996: 'ngraph::SlicePlan::SlicePlan': The nGraph API is deprecated and will be removed in the 2024.0 release. For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html [C:\Users\vzlobin\r\openvino\build\src\common\transformations\inference_engine_transformations_obj.vcxproj]
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.35.32215\include\xmemory(680,47): message : see reference to function 'std::pair<std::shared_ptr<ov::op::v1::StridedSlice>,ngraph::SlicePlan>::pair(std::pair<std::shared_ptr<ov::op::v1::StridedSlice>,ngraph::SlicePlan> &&)' [C:\Users\vzlobin\r\openvino\build\src\common\transformations\inference_engine_transformations_obj.vcxproj]
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.35.32215\include\utility(175,5): error C4996: 'ngraph::SlicePlan::SlicePlan': The nGraph API is deprecated and will be removed in the 2024.0 release. For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html [C:\Users\vzlobin\r\openvino\build\src\common\transformations\inference_engine_transformations_obj.vcxproj]
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.35.32215\include\xmemory(680,47): message : see reference to function 'std::pair<std::shared_ptr<ov::op::v1::StridedSlice>,ngraph::SlicePlan>::pair(const std::pair<std::shared_ptr<ov::op::v1::StridedSlice>,ngraph::SlicePlan> &)' [C:\Users\vzlobin\r\openvino\build\src\common\transformations\inference_engine_transformations_obj.vcxproj]
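For context on why the diagnostics point into `<utility>` and `<xmemory>`: ngraph::SlicePlan is marked deprecated, and its constructors get referenced when std::pair's defaulted copy/move constructors are instantiated during vector growth, so MSVC attributes C4996 to the standard headers; with warnings treated as errors this stops the build. A minimal self-contained repro of the pattern (not OpenVINO code, the type name is made up):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Stand-in for a deprecated nGraph type; the class-level attribute mirrors the
// effect of the nGraph deprecation macro.
struct [[deprecated("The nGraph API is deprecated")]] SlicePlanLike {
    int begin = 0;
};

int main() {
    // MSVC reports C4996 at this use of the deprecated type, and again where
    // std::pair's defaulted copy/move constructors reference it inside
    // <utility>/<xmemory> while the vector grows. With /WX these become errors.
    std::vector<std::pair<std::shared_ptr<int>, SlicePlanLike>> plans;
    plans.emplace_back(std::make_shared<int>(1), SlicePlanLike{});
    return 0;
}
```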
* add set_value op
* Support for tensor input
* fix shape error
* refactor for dynamic shape
* update process of target_value_shape and add comments
* support arbitrary steps
* fix
* fix ends_node
* fix and add test cases
* fix error when slice operation return maximum number in int32
* remove redundant function call
* update for minus step
* add constraints for minus inputs
---------
Co-authored-by: mei, yang <yang.mei@intel.com>
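A reference sketch of the assignment semantics the changes above implement, reduced to one axis (a hypothetical helper, not the frontend conversion itself): set_value behaves like x[start:end:step] = values, including negative steps. Normalization of negative indices and the INT32_MAX "slice to the end" sentinel, which several of the fixes address, is assumed to have happened already.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Reference semantics only: write `values` into x[start:end:step] along a flat
// 1-D buffer. start/end are assumed to be already clamped to valid indices.
void strided_assign(std::vector<float>& x, int64_t start, int64_t end, int64_t step,
                    const std::vector<float>& values) {
    std::size_t v = 0;
    if (step > 0) {
        for (int64_t i = start; i < end && v < values.size(); i += step)
            x[static_cast<std::size_t>(i)] = values[v++];
    } else if (step < 0) {
        // Negative step walks backwards, e.g. x[8:2:-2] touches indices 8, 6, 4.
        for (int64_t i = start; i > end && v < values.size(); i += step)
            x[static_cast<std::size_t>(i)] = values[v++];
    }
}
```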
* enable CPU map for ARM Linux based on frequency information
* fix code style issue
* fix code style issue
* remove 'streams = 1' WA for ARM linux
* update for typo and comments
* update for comments
* keep WA
* keep WA of streams = 1 for ARM Linux
* update num_streams WA for ARM Linux in test case
* update for comments
* update for comments
* update for comments
* update for comments
* update for merge conflict
* update and add test case for MTL
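A rough sketch of the frequency-based grouping idea on ARM Linux (illustrative only, not the plugin's parser): core type is not reported the way hybrid x86 exposes it, so cores can be bucketed by cpuinfo_max_freq from sysfs and the highest-frequency bucket treated as the performance cluster.

```cpp
#include <fstream>
#include <map>
#include <string>
#include <vector>

// Illustrative helper: group core ids by their maximum frequency (kHz) as read
// from sysfs. On a big.LITTLE SoC this yields one bucket per cluster, e.g.
// {1800000: [0,1,2,3], 2400000: [4,5,6,7]}.
std::map<long, std::vector<int>> group_cores_by_max_freq(int num_cpus) {
    std::map<long, std::vector<int>> groups;
    for (int cpu = 0; cpu < num_cpus; ++cpu) {
        std::ifstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                        "/cpufreq/cpuinfo_max_freq");
        long khz = 0;
        if (f >> khz)
            groups[khz].push_back(cpu);
    }
    return groups;
}
```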
* Initial impl for runtime buffer fusing
Passing unittest with static kernel
* pass unittest with dynamic impl
* Refactor allocate_output
* Separate header of buffer fusing
* Refactored buffer fusing :: matcher/optimize
* More cleanup
* Fix crash in dolly
* Reset can_be_optimized of primitive_inst when it is no longer optimizable
* Fix empty tensor : Primitive with empty data should be skipped
* Fix issue in dynamic padding : Static kernel should not contain dynamic padding dims
Fix missing reset of update_shape_done_by_other flag
* Do not add an empty kernel to the cache for an optimized-out inst
* Fix corner case error in buffer fusing
- Shapes of some preds may not change, but update_impl is still needed because 1) paddings have changed and 2) the output memory needs to be updated
- optimizable impl should not be added to the cache
* Allow reorder & permute_ref to be optimized out as concat predecessors
* Some more fixes:
runtime buffer fusing is available only when all preds and the concat are dynamic
runtime buffer fusing is executed only if the node is dynamic
* Fix the allocate_output parameters passed by get_estimated_device_mem_usage to match the new signature
* Fixed error in cascaded concat
* Need to reinterpret even though the size is the same
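A conceptual sketch of the runtime decision described above (not the cldnn implementation; the struct and flags are made up for illustration): after shape inference, the concat can be turned into a no-op only if it and all of its predecessors are dynamic and each predecessor can write straight into its slice of the shared output buffer.

```cpp
#include <vector>

// Illustrative model of the runtime check; field names are hypothetical.
struct PredInfo {
    bool is_dynamic;                  // predecessor node is on the dynamic path
    bool layout_matches_concat_axis;  // its output can be viewed as a padded slice
    bool has_other_users;             // someone else also consumes its output
};

bool can_fuse_concat_buffers(const std::vector<PredInfo>& preds, bool concat_is_dynamic) {
    if (!concat_is_dynamic)
        return false;  // runtime buffer fusing only applies to dynamic concat
    for (const auto& p : preds) {
        if (!p.is_dynamic || !p.layout_matches_concat_axis || p.has_other_users)
            return false;
    }
    // Predecessors write directly into the shared concat buffer and the concat
    // itself is marked can_be_optimized; otherwise it falls back to a real copy.
    return true;
}
```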
* Requirements for the HW plugin to integrate with AUTO
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Update property requirements and wording
1. Added purpose for each required property
2. Removed autobatching properties
3. Updated wording
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Add one BA test and update purpose for model_name
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Add request to ov::compilation_num_threads
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Add link to integration with AUTO
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Wording with API 2.0
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
* Try to fix the link
* Remove ":doc:"
* Add postfix "__" for external link
* Apply suggestions from code review
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* bash command and multiple devices description update
Signed-off-by: Peter Chen <peter.chen@intel.com>
---------
Signed-off-by: Peter Chen <peter.chen@intel.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
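To illustrate the kind of property support the document asks plugins to provide (a sketch using standard ov:: properties; "model.xml" is a placeholder path), AUTO-style code probes devices and may pass ov::compilation_num_threads through compile_model:

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Probe each device the way a meta-plugin would: name and capabilities.
    for (const auto& device : core.get_available_devices()) {
        const auto full_name = core.get_property(device, ov::device::full_name);
        const auto caps = core.get_property(device, ov::device::capabilities);
        std::cout << device << " (" << full_name << "): "
                  << caps.size() << " capabilities\n";
    }
    // A plugin that honors ov::compilation_num_threads lets AUTO bound the CPU
    // usage of model compilation; ov::model_name is read back from the result.
    auto model = core.read_model("model.xml");  // placeholder model path
    auto compiled = core.compile_model(model, "AUTO", ov::compilation_num_threads(4));
    std::cout << compiled.get_property(ov::model_name) << "\n";
    return 0;
}
```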
* Introduce PadBase
* Update ov scope name
* Introduce Pad-12
* Common type_prop Pad tests
* Init Pad-12 ref tests
* Add Pad reference tests
* attr and op check tests
* Move eval and clone inputs from PadBase
* Init opset12
* Headers clean up
* Update shape_inference map for CPU
* Update Pad evaluates to use ov::TensorVector
* Update shape infer map with Pads
* Fix namespace
* Update op check test
* Add common Pad shape_inference tests
* Reuse PadBase shape_infer
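A minimal construction sketch for the new opset (assuming the Pad-12 constructor mirrors Pad-1's signature with an explicit pad value): the behavioral addition in Pad-12 is that negative pads_begin/pads_end values are allowed and trim the corresponding edges instead of padding them.

```cpp
#include <memory>
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset12.hpp>

// Sketch only: build a tiny model with a Pad-12 node. The negative values in the
// last dimension trim one element from each side instead of padding.
std::shared_ptr<ov::Model> make_pad12_model() {
    using namespace ov::opset12;
    auto data = std::make_shared<Parameter>(ov::element::f32, ov::Shape{1, 3, 32, 32});
    auto pads_begin = Constant::create(ov::element::i64, ov::Shape{4}, {0, 0, 2, -1});
    auto pads_end   = Constant::create(ov::element::i64, ov::Shape{4}, {0, 0, 2, -1});
    auto pad_value  = Constant::create(ov::element::f32, ov::Shape{}, {0.0f});
    auto pad = std::make_shared<Pad>(data, pads_begin, pads_end, pad_value,
                                     ov::op::PadMode::CONSTANT);
    return std::make_shared<ov::Model>(ov::OutputVector{pad}, ov::ParameterVector{data});
}
```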