Commit Graph

13391 Commits

Irina Efode
ebf1874eee
[GHA] Check OpImplCheck conformance for template plugin (to make sure that all functions are checked by conformance) (#20712)
* [CONFORMANCE] Fix Template OpImplCheck on Win

* Update windows.yml

* check win

* Update windows.yml

* remove extra changes
2023-11-10 12:39:58 +04:00
Ilya Lavrenov
d4dd169ca3
Added more smart CI conditions (#20999) 2023-11-09 23:40:26 +04:00
Alina Kladieva
51a17ba642
Workaround failing MO unit when Python API unit are skipped (#20997) 2023-11-09 18:07:57 +01:00
Tatiana Savina
8034d1795f
change structure (#20988) 2023-11-09 15:03:01 +01:00
Sungeun Kim
b1705e8bd3
[GPU] clean up for extend pad/stride/dilation (#20828)
* clean up for extend pad/stride/dilation
2023-11-09 20:48:34 +09:00
Sungeun Kim
9cc4c25e48
[GPU] print datashape of input for benchmark_app (#20943)
* print datashape of input for benchmark_app
2023-11-09 20:46:50 +09:00
Zhang Yi
09a45bceae
[CPU][MLAS]Apply lower bound protection for K stride (#20873) 2023-11-09 15:14:27 +04:00
Aleksandr Voron
3c88a9cf58
[CPU] [ARM] Enable MatMul SLT tests on ARM (#20923) 2023-11-09 15:12:22 +04:00
Alina Kladieva
fa22836cfb
Fix no match files change case (#20981) 2023-11-09 12:07:47 +01:00
Ilya Lavrenov
c851d643b3
Fixed smart CI (#20980) 2023-11-09 11:28:32 +01:00
Pawel Raasz
d6852598ce
Fix Ubuntu20 build error on relu operator (#20965) 2023-11-09 07:42:06 +00:00
Alina Kladieva
000966660c
Smart CI POC (#19825)
* Try using a custom action directly from repo

* Run smart CI under ubuntu-latest

* Set output + add a sample step

* Update linux.yml

* Add components.yml

* Add some conditions

* Just to check if reference to "needs" work in job context

* Update linux.yml

* More example cases

* Dummy change to CPU

* Fix typo

* Fix SAMPLES_AFFECTED variable

* Use more correct dependents key

* Fighting with messy GHA conditions

* No brackets and no double quotes in conditions

* Revert "Dummy change to CPU"

This reverts commit 4eae09e5b5.

* Use refactored action

* Move action implementation to openvino repo

* Extend components.yml config

* Update labeler.yml

* Dummy change to TF FE

* Fix indentation

* Add missing needs

* Add missing records

* Allow missing records for components in validation

* install_openvino_dependencies as a separate step for Python_Unit_Tests

* Improve config validation

* Revert "Dummy change to TF FE"

This reverts commit 01190864d1.

* Dummy change to model hub tests

* Update CPU component config

* Dummy change to Python API

* Dummy change to Python API

* Revert "Dummy change to Python API"

This reverts commit 3fce0bb3fb.

* Dummy change to Python API

* Simplify conditions. Cover "no components changed" case

* Update components.yml

* Update .gitignore

* Revert "Dummy change to Python API"

This reverts commit e57ea9852c.

* Fix dependencies scopes

* Add simple unit tests for smart ci functionality

* Revert "Dummy change to model hub tests"

This reverts commit c3d6837e22.

* Use ghapi module with permissive license

* Cover install_build_dependencies.sh script by labeler

* More labels

* Use ghapi. Apply review comments

* Enable dot files to be matched by labeler

* Warning instead of error in artifacts upload where smart ci is enabled

* Fix master merge

* Fix condition for TF FE common tests

* Fix condition for Pytorch FE tests

* Remove condition for pytorch model tests

* Allow any label as a component

* Refactor tests log handling

* Allow any defined label as a component

* Rearrange config structure. Fill the config with actual data

* Run full scope on changes to non-matching files

* Add missing conditions

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-11-09 11:38:58 +04:00
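The Smart CI commit above maps changed components (derived from PR labels) to the dependent validation scopes that must run, via a components.yml config. A minimal sketch of that idea follows; the schema and names are hypothetical illustrations, not the real config format or action code:

```python
# Hypothetical sketch of the Smart CI idea: a components config declares which
# validation scopes depend on each component, and the changed components are
# expanded into the full scope that must run. Illustrative names only.
COMPONENTS = {
    "CPU": {"revalidate": ["samples", "ONNX_FE"], "build": ["TF_FE"]},
    "TF_FE": {"revalidate": [], "build": []},
    "Python_API": {"revalidate": ["samples"], "build": []},
}

def affected_scope(changed_components):
    """Return the set of components/scopes to run for the changed components."""
    scope = set(changed_components)
    for component in changed_components:
        config = COMPONENTS.get(component, {})
        scope.update(config.get("revalidate", []))  # scopes needing revalidation
        scope.update(config.get("build", []))       # scopes needing a rebuild
    return scope
```

A component with no dependents (for example `TF_FE` above) triggers only its own validation, which is the test-time saving the PR is after.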
Tomasz Jankowski
6e073b1165
[core] Migrate SoftSign operator to new API (#20958)
* Align code style

* Use Evaluate in place of switch case

* Use std::transform in place of for loop
2023-11-09 11:10:37 +04:00
Oleg Pipikin
4bde741de4
Refactor StaticShapeLoopLayerTest (#20963) 2023-11-09 11:08:18 +04:00
Vitaliy Urusovskij
bcb38796ce
Ngraph helpers/builders cleaning (#20819)
* Delete `getNodeSharedPtr()`

* Remove `makeRoll` ng::builder

* Delete `makeSelect` ng::builder

* Delete `makeDepthToSpace` ng::builder

* Remove `CompareFunctions` and `getConstData` from ng::helpers

* Return `makeSelect` for compatibility with NPU

* Port `QuantizationGranularity`, `MemoryTransformation`

* Restore ng::helpers::QuantGranularity for BWD CMP
2023-11-09 10:51:00 +04:00
Vladimir Paramuzov
8f406067d1
[GPU] Remove binary convolution primitive and all related code (#20889) 2023-11-09 09:54:46 +04:00
Mingyu Kim
319a6584a2
[GPU] Decompose test combination to reduce test time (#20968) 2023-11-09 13:10:08 +09:00
Sergey Lyalin
854158612f
Scaled dot product attention (#20492)
* Added experimental ScaledDotProductAttention operation in opset12. Supported in PT FE for aten::scaled_dot_product_attention translation. Decomposed in the common optimizations as functional reference.

* Better ScaledDotProductAttention

- Moved decomposition to the decomposing transformation
- Implemented more ctors for the op
- Renamed is_causal to causal
- Shape/type inference native code instead of using decomposition
- Moved the op from opset12 to opset13
- Added Python wrapper for ScaledDotProductAttention

* Fix test that counts ops in the opsets

* Update src/core/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* Update src/core/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* Move ScaledDotProductAttentionDecomposition from fusions to decompositions.

* Remove not used legacy shape inference in ScaledDotProductAttention

* Better namespace usage

* Register all nodes in ScaledDotProductDecomposition for correct tracking of nodes and running next matcher passes on all new nodes.

* Don't use register_new_node_

* ScaledDotProductAttention specification (with an extra scale argument)

* Code style fix

* Scale input implementation for ScaledDotProductAttention

* Handle attention_mask=0 case in the op spec

* Better description of scale input

* N->M in scale description

* Code style fix, remove debug print.

* Apply suggestions from code review

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>

* Fix for case when is_causal is not passed

* Extended description of ScaledDotProduct op

* Better description in py op wrapper

* Basic shape propagation tests for ScaledDotProductAttention

* Added ScaledDotProductAttention to toc.

* Add op impl check

---------

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
2023-11-08 20:17:13 +01:00
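The functional reference decomposition described in the commit above, softmax(QK^T * scale + mask) V with optional causal masking and a default scale of 1/sqrt(head size), can be sketched in NumPy. This is an illustrative reference only, not the OpenVINO implementation:

```python
import numpy as np

def scaled_dot_product_attention(query, key, value, mask=None, causal=False, scale=None):
    """Illustrative functional reference for scaled dot product attention.

    query: (..., L, E), key: (..., S, E), value: (..., S, Ev).
    """
    L, S = query.shape[-2], key.shape[-2]
    if scale is None:
        # Default scale is 1/sqrt(head size).
        scale = 1.0 / np.sqrt(query.shape[-1])
    logits = (query @ np.swapaxes(key, -1, -2)) * scale  # (..., L, S)
    if causal:
        # Mask out positions that would attend to the future.
        future = np.triu(np.ones((L, S), dtype=bool), k=1)
        logits = np.where(future, -np.inf, logits)
    elif mask is not None:
        logits = logits + mask  # additive attention mask
    # Numerically stable softmax over the key dimension S.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ value
```

With `causal=True`, the first query position can only attend to the first key, so its output equals the first value row.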
Alina Kladieva
f627172e5a
Add separate label for docs snippets (#20966) 2023-11-08 22:19:35 +04:00
Andrey Kashchikhin
24cd7283e3
make cache space showing optional (#20962) 2023-11-08 17:28:08 +00:00
Mikhail Ryzhov
9616c8f510
corrected timeouts (#20954) 2023-11-08 17:45:53 +01:00
Vladislav Golubev
c2d09b9a15
FuseU4WeightsAndZeroPoint tests: avoid std::vector<std::int8_t> usage (#20918) 2023-11-08 19:54:23 +04:00
Sofya Balandina
25d94bd98b
[conformance] Skip empty test cache error (#20924) 2023-11-08 15:50:49 +00:00
Ilya Lavrenov
68e6484ecb
Fixed version detection without git (#20951) 2023-11-08 14:30:15 +01:00
Maciej Smyk
fdaa4b5d03
[DOCS] Small fixes in articles for master (#20947)
* Fixes

* Update deployment_intro.md

* Update docs/articles_en/openvino_workflow/deployment_intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
2023-11-08 13:40:52 +01:00
Oleksii Khovan
d07f272054
[GPU] Fix cum_sum_partial_sum implementation for dimensions >= BLOCK_SIZE (#20855)
- fix cum_sum_partial_sum kernel;
- add unit test and func test for big shapes;
- add test to compare Partial vs Ref performance;
- change kernels' priorities according to performance measurements;
- move common profiling helpers to test_utils.

Ticket: CVS-123590
2023-11-08 11:26:51 +00:00
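The partial-sum approach behind the fixed kernel — scan each block independently, then add the carry accumulated from preceding blocks — can be sketched serially as below. `BLOCK_SIZE` and the Python loop are illustrative stand-ins for the parallel OpenCL kernel:

```python
import numpy as np

BLOCK_SIZE = 8  # hypothetical block size; the real kernel's constant differs

def cum_sum_partial(x):
    """Block-wise cumulative sum over a 1-D array.

    Each block is scanned on its own, then the running carry from all
    preceding blocks is added -- the step that must be correct for inputs
    whose scanned dimension is >= BLOCK_SIZE.
    """
    out = np.empty_like(x)
    carry = x.dtype.type(0)
    for start in range(0, len(x), BLOCK_SIZE):
        stop = min(start + BLOCK_SIZE, len(x))
        out[start:stop] = np.cumsum(x[start:stop]) + carry  # partial sums + carry
        carry = out[stop - 1]  # carry forwarded to the next block
    return out
```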
Oleg Pipikin
588e96bc37
Refactor MemoryLayerTest (#20914)
* Refactor MemoryLayerTest

* Apply comments
2023-11-08 14:43:53 +04:00
Oleg Pipikin
ace986cac0
Refactor GenerateProposalsLayerTest, GridSampleLayerTest (#20772)
* Refactor GenerateProposalsLayerTest

* Refactor GridSampleLayerTest

* Fix

* Apply comments

* Apply comments
2023-11-08 14:06:28 +04:00
Pawel Raasz
b8eea7bf84
[core]Migrate Multiply operator to new API (#20853)
* Migrate Multiply operator to new API

* Add comment explain use of custom multiply

* Update custom multiply comment

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>

---------

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
2023-11-08 13:53:20 +04:00
Pawel Raasz
6210deba49
[core]Migrate FakeQuantize operator to new API (#20895)
* Migrate FakeQuantize operator to new API

* Minor refactor in FakeQuantize reference
re-use existing functions in `get_inner_stride`
2023-11-08 13:52:21 +04:00
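For context, the FakeQuantize reference being migrated implements the op's defining formula: values outside [input_low, input_high] clamp to the output range bounds, and values inside are quantized to `levels` evenly spaced steps. A minimal NumPy sketch with scalar ranges only (the real reference also broadcasts per-channel ranges):

```python
import numpy as np

def fake_quantize(x, input_low, input_high, output_low, output_high, levels):
    """Scalar-range FakeQuantize reference following the op's defining formula."""
    # Quantize values inside the input range to `levels` evenly spaced steps.
    q = np.round((x - input_low) / (input_high - input_low) * (levels - 1))
    q = q / (levels - 1) * (output_high - output_low) + output_low
    # Clamp values outside the input range to the output range bounds.
    return np.where(x <= input_low, output_low,
                    np.where(x > input_high, output_high, q))
```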
Andrei Gorbachev
87cef53088
[GPU] Refactor (#20938)
* maxmin

* mvn

* normalize_l2 and fix mvn

* prior_box_clustered

* prior_box

* pad

* roi_align

* scatter_update

* select

* shape_of

* shuffle_channels

* space_to_batch

* space_to_depth

* split

* squeeze_unsqueeze

* tile

* transpose

* variadic_split

* scatter_nd_update
2023-11-08 13:42:44 +04:00
Liu
9e7243d67c
fix typo (#20906)
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-11-08 13:10:11 +04:00
Ilya Lavrenov
d6cc3d7058
Disable warnings about API 1.0 in GNA, Python API 1.0 (#20933) 2023-11-08 12:45:22 +04:00
Alexander Kozlov
0f260c2ccd
[DOC]: Added INT4 weight compression description (#20812)
* Added INT4 information into weight compression doc

* Added GPTQ info. Fixed comments

* Fixed list

* Fixed issues. Updated Gen.AI doc

* Applied comments

* Added additional info about GPTQ support

* Fixed typos

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/optimization_guide/nncf/code/weight_compression_openvino.py

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Applied changes

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added table with results

* One more comment

---------

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-11-08 10:17:57 +04:00
Paul Youngsoo Ahn
c42a88a190
Support dynamic tensor_iterator (#20869)
* [GPU] Support dynamic tensoriterator with -1 num_iteration
- remove redundant code

* [GPU] Refactoring methods for pre_process / post_process for body_network

* Add unit test for dynamic tensoriterator wo trip_count_id

* Follow-up code review
* Set inner network in loading of model cache
* Fix legacy loop unit tests
2023-11-07 15:11:08 -08:00
Roman Kazantsev
c6ca7865fb
[TF FE] Fix conversion of TF1 OD models out-of-the-box (#20916)
* [TF FE] Fix conversion of TF1 OD models out-of-the-box

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add test While with nested If operation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update tests/layer_tests/tensorflow_tests/test_tf_While.py

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-07 17:44:09 +00:00
Sebastian Golebiewski
ac1fb7b955
Fixing OS list in System Requirements for YUM (#20934) 2023-11-07 17:18:18 +01:00
River Li
a0849edca1
[CPU] migrate cpu plugin api 2.0 (#18124)
* [CPU] CPU plugin migrates to plugin API 2.0

* Fix legacy config/metric issue

* Fix some issue of ov_cpu_func_tests

1. set_tensors_impl segment fault
2. ov::loaded_from_cache unsupported issue

* Resolve some comments

1. ov::loaded_from_cache issue
2. throw_if_cancelled issue
3. import_model issue
4. set_tensor_impl issue
5. batched_inference issue

* Fix dynamic shape inference issue

* Fix build error

* keep original model info in infer_request

* Fix minor error

* cache internal tensors for input/output precision change

* Disable import model test cases with precision changes

* fix precision issue

* Fix issue for import model

* Fix InferRequestCancellationTests exception issue

* Skip InferRequestIOBBlobTest.*secondCallGetInputDoNotReAllocateData because the new plugin API has different behavior

* Fix graph name issue

* Fix ROI issues

* Fix Transpose shape issue

* Skip vie::Version test due to change to ov::Version

* Solve input port name changes issue

* Solve preprocess layout issue

* Fix minor issue

* tidy up code

* Fix conflict after rebase

* Fix Windows build warning

* Add aux tensors for precision change issue

* Fix import/export model issue

* WA single layer name changed by preprocess

* Revert "WA single layer name changed by preprocess"

This reverts commit bc8fcdd43c.

* Skip some legacy tests now that plugin API 2.0 is enabled

1. skip some Python legacy tests where plugin API 2.0 behaves differently
2. skip some smoke tests because output port names changed

* Fix 2 build warnings

* Skip some AUTO plugin tests

* Fix property issue caused by AUTO plugin

* Skip PSROIPooling issues

* Follow header files reference policy

* Split out transformation fixing for nop_elimination

* Fix AUTO plugin mismatch issue for get_tensor function

* Fix aux tensor shape issue

* Fix tensor shape issue

* WA python sync inference sample's segmentfault issue

* Fix reshape issue for dynamic inference

* Fixed incorrect tensor name in e2e test

Fixed issue: e2e ONNX_Customized_Cascade_Rcnn_api_2_True_batch_1_device_CPU_precision_FP325den8cnk

* Fix python segmentfault issue of plugin api 2.0

* Fix python segmentfault issue of plugin api 2.0

* Revert "Fix python segmentfault issue of plugin api 2.0"

This reverts commit 6f502e5d86.

* Fix onnx_duplicated_output_name due to empty tensor

Co-authored-by: Bell, Song <bell.song@intel.com>

* Remove redundant code

* Remove python segment fault WA

* Keep rt_info to fix test failure in case of legacy public api

* Fix output port names missing issue

* Adress some reviewers' comments

* Restore OnnxBackendNodeModelTest::test_maxpool_with_argmax_2d_precomputed_pads_cpu after fixing has been merged

* Resolve tensor sharing issue when output ports have the same name

In some cases a model has 2 or more input/output ports with the same name; they also have the same
precision and partial_shape. Compiled_model will share the same ov::Descriptor::Tensor pointer
and ov::Tensor between such ports.
Solving the Python segment fault issue required creating separated input/output ports, which also
need to handle this shared-tensor case; this patch does that.

* Better method to find shared tensor desc

* rename with snake_case style

* Remove ngraph header files

* Keep external_ptr naming

* Add OPENVINO_SUPPRESS_DEPRECATED for some legacy code

* Use port's tensor_ptr to replace creating new tensor_ptr

* Resolve some reviewer comments

* Implement ov::IInferRequestInternalWrapper::GetPreProcess to recover Python GetPreProcess tests

* Remove unnecessary header files reference

* Assert the risk of precision change and reorder at the same time

* Modify legacy python test to fit plugin api 2.0 behavior

* Recover smoke_Transpose(2|4|5|6)D/TransposeLayerTest.CompareWithRefs due to fixing is merged

* Fix typo issue

* Address reviewer's comments

* Disable precision conversion

* Fix error when CpuBlockedMemoryDesc

* Remove precision mismatch WA

* WA precision issue for query_model

* Solve precision mismatch between compiled model and graph

* Fixed failure of query_model

* Rebase to new plugin api update

* Recover the test cases of precision mismatch

* Try to fix name changing for graph model

* Remove test code

* Remove fp64

* Rebase to new plugin api update

* Update for some failure cases

* Fix bert_benchmark failure issue

* Avoid segment fault in arm acl

The legacy public API + CPU plugin API adds a Convert op via preprocessing by default for unsupported precisions,
but ACLConvertExecutor cannot support more than 6 dimensions, so these tests segfault:

smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=I8_trgDev=CPU
smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=U8_trgDev=CPU

* Remove precision change from preprocess to avoid ACL unsupport convert dim > 6

* ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert

* Revert "ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert"

This reverts commit fd7a8b35af.

* Revert "Remove precision change from preprocess to avoid ACL unsupport convert dim > 6"

This reverts commit 3c2d9a5f17.

* Debug

* Debug incorrect precision checking issue

* Debug Eltwise FP64 unsupported issue

* Add logs for precision

* debug log

* Update for new dependent PRs merged

* Fix failure caused by preprocess

Fix the failures below, caused by ops not being found by name:
     smoke_LPT/ReduceMaxTransformation.CompareWithRefImpl/f32_[1,3,10,10]_CPU_f32__256*

* Fix build error

* Fix failure caused by missing code during rebase

* Add debug

* Fix precision unsupport issue

* U16/I16/U64 precision support

* Resolve the issue of f64 reorder

Fix below issue:
Cannot create reorder primitive: unsupported reorder case

* Fix convert multiple child edge issue

* Solve ROI tensor failure issues

* Temporarily disable num_nodes comparison

* Only change convert precision for fp64

* Put convert precision change before reorder to avoid confusion

* Add debug log for transformation

* Fix rebase confilict

* Fix clang issue

* Temporarily disable test_infer_mixed_values python test of bf16

* Solve issue of smoke_ConvertCPULayerTest_BOOL_Dynamic_inputPRC=BF16 choose FP32 primType rather than BP16 primType

* Fix issue of pytorch_tests/test_outer.py

There are 2 output ports with the same port name; they should share the same tensor.

* Fix arm cannot find Eltwise executor issue

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
will report the error below:
	 [ GENERAL_ERROR ] Supported Eltwise executor is not found
The convert precision needs to be changed to avoid this problem.

* Fix memory overwritten issue

* Temporarily skip arm fp16 SetBlobTest

* Fix compile error after rebase

* Restore smoke_IsOp test now that the fixing PR is merged

* Fix float to bf16 issue in avx2 isa

* solve onnx test xfail issue

* Skip test cases where the ARM Eltwise executor does not support FP16

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=BOTH_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL

      [ GENERAL_ERROR ] Supported Eltwise executor is not found

* [CPU] improve reorder to support any precision

* Implement ReorderExecutor

* Fix build error

* Not cache executor due to its primitive has been cached

* Keep at most one Convert

Insert at most one Convert if needed; if the reorder still cannot be done, throw an exception rather than insert a second Convert.
For example, the reorders below will not be supported:
   FP64<->I64/U64/U32
   U32<->I64/U64
   U32<->I16/U16
   FP64<->FP64
   BIN<->BIN

* Only do conversion if layout is same

* update for only convert case

* Update for reviewer comments

* update for failure cases

* Address reviewer comments

* Update rebase issue

* minor update

* Solve unsupported precision issue in transformation rather than init_edge

* Remove unnecessary convert in init_edge

* Minor changes

* Update Reorder::reorderData

* Solve issue of only conversion without reorder

* Address reviewer comments

* Address reviewer comments

* Keep exception for unsupported precision

* update

* Revert reorder executor implement

* Solve float->bool issue on transformation pipeline

* Solve I64 is not supported issues

* Solve reviewer's comments

* Fixed dynamic top_k node issue

* Skip nhwc and nChw16c test cases for ConvertLayer

* Update for reviewers' comments

* Fix some failures

* Update for several failure cases

* Update for apiConformanceTests failures

* Fix incorrect node name after import model

* update

* update comments

* Solve issue of smoke_MatMul_NoTranspose and smoke_MatMul_BothTranspose

* Fixed AlignMatMulInputRanks scalar issue

* Address reviewers' comments, remove redundant path in graph.cpp

* Remove test_div_uint8_cpu from xfail_issue_58676

* Solve invalid number of nodes for smoke_Snippets_BroadcastSelect

* ConstantResultSubgraphTest of u16/i16/u32/i64/u64

* restore smoke_SetBlobCPU BOOL tests for arm

* [CPU] Fix ARM precision issue

ARM64 ACL prefers fp16 over fp32, while API 2.0 requires that input/output precisions do not change,
so fp32 input triggers insertion of a Convert node from fp32 to fp16.

* Solve some ARM64 failures

* Fix arm64 InferRequestVariableStateTest tests out of memory issue

ARM64 forces fp16 precision, so state memory can be fp16, and the memcpy into state_memory
cannot use float's element_size, or it will go out of the memory bounds.

* Skip 2 arm64 tests caused by forcing fp16 precision

* Revert "Fix arm64 InferRequestVariableStateTest tests out of memory issue"

This reverts commit 3e12bd48c2.

* Fix python test_get_profiling_info failure issue

---------

Co-authored-by: Bell, Song <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-11-07 15:25:05 +01:00
Sebastian Golebiewski
3f7989a817
[DOCS] Fixing link in Get Started article (#20881)
* Updating Get Started section

Addressing JIRA ticket: 124289

* Update get_started.md
2023-11-07 14:37:24 +01:00
Vitaliy Urusovskij
e3d7dffa83
Remove legacy API from FEs (except ONNX) (#20849)
* Remove `ngraph` from PT FE and FE tests utils

* Remove `ngraph` from Paddle FE

* Remove `InferenceEngine` from some ONNX FE test

* Port `generate_embedding.py` to API2.0

* CLangFormat

* Fix comments
2023-11-07 14:01:39 +01:00
Pawel Raasz
95aef4bf51
[core]Migrate Exp operator to new API (#20893)
* Migrate Exp operator to new API

* Add missing includes
2023-11-07 13:43:13 +01:00
Pawel Raasz
e82283cf85
[core]Migrate Mish operator to new API (#20892)
* Migrate Mish operator to new API

* Remove `visit_attributes` is same as base class

* Refactor Mish reference implementation

* Add cast, as the function is generic
- Mish calculation is floating-point but the return type can be integral.
2023-11-07 13:05:39 +01:00
Tomasz Jankowski
e8b6e17429
[core] Migrate Softplus operator to new API (#20900)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-07 12:19:34 +01:00
Alina Kladieva
f17f17acc7
Use custom labeler to label changes not matching any pattern (#20888)
Needed for Smart CI (https://github.com/openvinotoolkit/openvino/pull/19825)
2023-11-07 12:16:32 +01:00
Pawel Raasz
368e6bfb8a
Fix constant folding in MulMulMulFusion (#20803)
* Fix constant folding in MulMulMulFusion
by adding f64 precision in Multiply to perform evaluate for constant folding

* Do not transform if the input has an unsupported type
2023-11-07 11:57:29 +01:00
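The interplay fixed above comes down to: after MulMulMulFusion merges a chain of Multiplies, the constant subexpression must still be evaluable for constant folding. A toy illustration of folding the constants out of a Multiply chain, with a hypothetical expression encoding that is not the transformation API:

```python
# Toy constant folding for a fused Multiply chain: (x * c1) * c2 -> x * (c1 * c2).
# Expression encoding and names are hypothetical, not the OpenVINO pass API.
def fold_mul_chain(expr):
    """expr is 'x', a float constant, or a nested tuple ('mul', lhs, rhs)."""
    constants = []

    def walk(e):
        # Returns True if the subexpression contains the variable 'x'.
        if e == 'x':
            return True
        if isinstance(e, tuple) and e[0] == 'mul':
            left, right = walk(e[1]), walk(e[2])  # walk both sides
            return left or right
        constants.append(e)  # a constant leaf
        return False

    has_x = walk(expr)
    folded = 1.0
    for c in constants:
        folded *= c  # evaluate the constant subexpression (the folding step)
    return ('mul', 'x', folded) if has_x else folded
```

Folding only succeeds when the constants' precision supports evaluation, which is the f64 gap the commit closes.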
Anastasia Kuporosova
4a0098b26a
[PyOV] ngraph linter check update (#20870)
* [PyOV] ngraph linter check update

* Update src/bindings/python/requirements_test.txt

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-11-07 10:42:46 +00:00
Aleksandr Voron
681331d3d7
[CPU] Increase RandomUniform test mean and variance thresholds 2023-11-07 14:37:37 +04:00
Aleksandr Voron
c3948ca799
[nGraph Transformations] NMS convert precisions - change sequence of checks (#20795) 2023-11-07 14:34:12 +04:00
Pawel Raasz
cb53ee5db7
[core]Migrate ReLU operator to new API (#20874)
* Migrate ReLU operator to new API

* Optimize ReLU reference implementation

* Correct define const value in ReLU
2023-11-07 11:29:43 +01:00
Ilya Lavrenov
c0381ab08d
Updated labeler config (#20913) 2023-11-07 14:05:03 +04:00