Commit Graph

13177 Commits

Author SHA1 Message Date
Vitaliy Urusovskij
bcb38796ce
Ngraph helpers/builders cleaning (#20819)
* Delete `getNodeSharedPtr()`

* Remove `makeRoll` ng::builder

* Delete `makeSelect` ng::builder

* Delete `makeDepthToSpace` ng::builder

* Remove `CompareFunctions` and `getConstData` from ng::helpers

* Return `makeSelect` for compatibility with NPU

* Port `QuantizationGranularity`, `MemoryTransformation`

* Restore ng::helpers::QuantGranularity for BWD CMP
2023-11-09 10:51:00 +04:00
Vladimir Paramuzov
8f406067d1
[GPU] Remove binary convolution primitive and all related code (#20889) 2023-11-09 09:54:46 +04:00
Mingyu Kim
319a6584a2
[GPU] Decompose test combination to reduce test time (#20968) 2023-11-09 13:10:08 +09:00
Sergey Lyalin
854158612f
Scaled dot product attention (#20492)
* Added experimental ScaledDotProductAttention operation in opset12. Supported in PT FE for aten::scaled_dot_product_attention translation. Decomposed in the common optimizations as functional reference.

* Better ScaledDotProductAttention

- Moved decomposition to the decomposing transformation
- Implemented more ctors for the op
- Renamed is_causal to causal
- Shape/type inference native code instead of using decomposition
- Moved the op from opset12 to opset13
- Added Python wrapper for ScaledDotProductAttention

* Fix test that counts ops in the opsets

* Update src/core/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* Update src/core/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* Move ScaledDotProductAttentionDecomposition from fusions to decompositions.

* Remove not used legacy shape inference in ScaledDotProductAttention

* Better namespace usage

* Register all nodes in ScaledDotProductDecomposition for correct tracking of nodes and running subsequent matcher passes on all new nodes.

* Don't use register_new_node_

* ScaledDotProductAttention specification (with an extra scale argument)

* Code style fix

* Scale input implementation for ScaledDotProductAttention

* Handle attention_mask=0 case in the op spec

* Better description of scale input

* N->M in scale description

* Code style fix, remove debug print.

* Apply suggestions from code review

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>

* Fix for case when is_causal is not passed

* Extended description of ScaledDotProduct op

* Better description in py op wrapper

* Basic shape propagation tests for ScaledDotProductAttention

* Added ScaledDotProductAttention to toc.

* Add op impl check

---------

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
2023-11-08 20:17:13 +01:00
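The ScaledDotProductAttention semantics described in this commit (default 1/sqrt(head size) scaling, optional causal masking, additive attention mask, optional scale input) can be sketched as a NumPy reference. This is an illustration of the formula only, not the OpenVINO decomposition code; the function name and argument names are assumptions:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None, causal=False, scale=None):
    """Illustrative reference of ScaledDotProductAttention semantics."""
    d = q.shape[-1]
    if scale is None:
        scale = 1.0 / np.sqrt(d)          # default scale: 1/sqrt(head size)
    scores = (q @ k.swapaxes(-1, -2)) * scale
    L, S = q.shape[-2], k.shape[-2]
    if causal:
        # lower-triangular mask: query position i attends only to keys j <= i
        scores = np.where(np.tril(np.ones((L, S), dtype=bool)), scores, -np.inf)
    elif mask is not None:
        scores = scores + mask            # additive attention mask
    # numerically stable softmax over the key dimension
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v
```

With all-zero queries and keys the attention weights are uniform, so each output row is the mean of the value rows; with `causal=True` the first row attends only to the first value.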
Alina Kladieva
f627172e5a
Add separate label for docs snippets (#20966) 2023-11-08 22:19:35 +04:00
Andrey Kashchikhin
24cd7283e3
make cache space showing optional (#20962) 2023-11-08 17:28:08 +00:00
Mikhail Ryzhov
9616c8f510
corrected timeouts (#20954) 2023-11-08 17:45:53 +01:00
Vladislav Golubev
c2d09b9a15
FuseU4WeightsAndZeroPoint tests: avoid std::vector<std::int8_t> usage (#20918) 2023-11-08 19:54:23 +04:00
Sofya Balandina
25d94bd98b
[conformance] Skip empty test cache error (#20924) 2023-11-08 15:50:49 +00:00
Ilya Lavrenov
68e6484ecb
Fixed version detection without git (#20951) 2023-11-08 14:30:15 +01:00
Maciej Smyk
fdaa4b5d03
[DOCS] Small fixes in articles for master (#20947)
* Fixes

* Update deployment_intro.md

* Update docs/articles_en/openvino_workflow/deployment_intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
2023-11-08 13:40:52 +01:00
Oleksii Khovan
d07f272054
[GPU] Fix cum_sum_partial_sum implementation for dimensions >= BLOCK_SIZE (#20855)
- fix cum_sum_partial_sum kernel;
- add unit test and func test for big shapes;
- add test to compare Partial vs Ref performance;
- change kernels' priorities according to performance measurements;
- move common profiling helpers to test_utils.

Ticket: CVS-123590
2023-11-08 11:26:51 +00:00
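The partial-sum scheme this fix touches can be sketched block-wise: an inclusive scan inside each block, then adding the running total of preceding blocks. This is a sequential Python illustration with a hypothetical `BLOCK_SIZE`; the real OpenCL kernel runs these passes in parallel on the GPU:

```python
BLOCK_SIZE = 8  # hypothetical; the real kernel uses a device-specific block size

def cumsum_partial_sum(data):
    """Block-wise cumulative sum: local scan per block, then add block offsets."""
    n = len(data)
    out = [0.0] * n
    block_totals = []
    # Pass 1: inclusive scan inside each block, remembering each block's total
    for start in range(0, n, BLOCK_SIZE):
        acc = 0.0
        for i in range(start, min(start + BLOCK_SIZE, n)):
            acc += data[i]
            out[i] = acc
        block_totals.append(acc)
    # Pass 2: add the sum of all preceding blocks to each element
    offset = 0.0
    for b, start in enumerate(range(0, n, BLOCK_SIZE)):
        for i in range(start, min(start + BLOCK_SIZE, n)):
            out[i] += offset
        offset += block_totals[b]
    return out
```

The bug class fixed here lives in pass 2: when a dimension reaches or exceeds the block size, the per-block offsets must still be accumulated correctly across blocks.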
Oleg Pipikin
588e96bc37
Refactor MemoryLayerTest (#20914)
* Refactor MemoryLayerTest

* Apply comments
2023-11-08 14:43:53 +04:00
Oleg Pipikin
ace986cac0
Refactor GenerateProposalsLayerTest, GridSampleLayerTest (#20772)
* Refactor GenerateProposalsLayerTest

* Refactor GridSampleLayerTest

* Fix

* Apply comments

* Apply comments
2023-11-08 14:06:28 +04:00
Pawel Raasz
b8eea7bf84
[core]Migrate Multiply operator to new API (#20853)
* Migrate Multiply operator to new API

* Add comment explain use of custom multiply

* Update custom multiply comment

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>

---------

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
2023-11-08 13:53:20 +04:00
Pawel Raasz
6210deba49
[core]Migrate FakeQuantize operator to new API (#20895)
* Migrate FakeQuantize operator to new API

* Minor refactor in FakeQuantize reference
re-use existing functions in `get_inner_stride`
2023-11-08 13:52:21 +04:00
Andrei Gorbachev
87cef53088
[GPU] Refactor (#20938)
* maxmin

* mvn

* normalize_l2 and fix mvn

* prior_box_clustered

* prior_box

* pad

* roi_align

* scatter_update

* select

* shape_of

* shuffle_channels

* space_to_batch

* space_to_depth

* split

* squeeze_unsqueeze

* tile

* transpose

* variadic_split

* scatter_nd_update
2023-11-08 13:42:44 +04:00
Liu
9e7243d67c
fix typo (#20906)
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-11-08 13:10:11 +04:00
Ilya Lavrenov
d6cc3d7058
Disable warnings about API 1.0 in GNA, Python API 1.0 (#20933) 2023-11-08 12:45:22 +04:00
Alexander Kozlov
0f260c2ccd
[DOC]: Added INT4 weight compression description (#20812)
* Added INT4 information into weight compression doc

* Added GPTQ info. Fixed comments

* Fixed list

* Fixed issues. Updated Gen.AI doc

* Applied comments

* Added additional infor about GPTQ support

* Fixed typos

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/optimization_guide/nncf/code/weight_compression_openvino.py

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Applied changes

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added table with results

* One more comment

---------

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-11-08 10:17:57 +04:00
Paul Youngsoo Ahn
c42a88a190
Support dynamic tensor_iterator (#20869)
* [GPU] Support dynamic TensorIterator with -1 num_iteration
- remove redundant code

* [GPU] Refactoring methods for pre_process / post_process for body_network

* Add unit test for dynamic TensorIterator without trip_count_id

* Follow-up code review
* Set inner network in loading of model cache
* Fix legacy loop unit tests
2023-11-07 15:11:08 -08:00
Roman Kazantsev
c6ca7865fb
[TF FE] Fix conversion of TF1 OD models out-of-the-box (#20916)
* [TF FE] Fix conversion of TF1 OD models out-of-the-box

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add test While with nested If operation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update tests/layer_tests/tensorflow_tests/test_tf_While.py

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-07 17:44:09 +00:00
Sebastian Golebiewski
ac1fb7b955
Fixing OS list in System Requirements for YUM (#20934) 2023-11-07 17:18:18 +01:00
River Li
a0849edca1
[CPU] migrate cpu plugin api 2.0 (#18124)
* [CPU] CPU plugin migrates to plugin API 2.0

* Fix legacy config/metric issue

* Fix some issue of ov_cpu_func_tests

1. set_tensors_impl segment fault
2. ov::loaded_from_cache unsupported issue

* Resolve some comments

1. ov::loaded_from_cache issue
2. throw_if_cancelled issue
3. import_model issue
4. set_tensor_impl issue
5. batched_inference issue

* Fix dynamic shape inference issue

* Fix build error

* keep original model info in infer_request

* Fix minor error

* cache internal tensors for input/output precision change

* Disable import model test cases with precision changes

* fix precision issue

* Fix issue for import model

* Fix InferRequestCancellationTests exception issue

* Skip InferRequestIOBBlobTest.*secondCallGetInputDoNotReAllocateData because the new plugin API has different behavior

* Fix graph name issue

* Fix ROI issues

* Fix Transpose shape issue

* Skip vie::Version test due to change to ov::Version

* Solve input port name changes issue

* Solve preprocess layout issue

* Fix minor issue

* tidy up code

* Fix conflict after rebase

* Fix Windows build warning

* Add aux tensors for precision change issue

* Fix import/export model issue

* WA single layer name changed by preprocess

* Revert "WA single layer name changed by preprocess"

This reverts commit bc8fcdd43c.

* Skip some legacy tests because plugin API 2.0 is enabled

1. skip some legacy Python tests due to behavior differences in plugin API 2.0
2. skip some smoke tests because output port names were changed

* Fix 2 build warnings

* Skip some AUTO plugin tests

* Fix property issue caused by AUTO plugin

* Skip PSROIPooling issues

* Follow header files reference policy

* Split out transformation fixing for nop_elimination

* Fix AUTO plugin mismatch issue for get_tensor function

* Fix aux tensor shape issue

* Fix tensor shape issue

* WA python sync inference sample's segmentfault issue

* Fix reshape issue for dynamic inference

* Fixed incorrect tensor name in e2e test

Fixed issue: e2e ONNX_Customized_Cascade_Rcnn_api_2_True_batch_1_device_CPU_precision_FP325den8cnk

* Fix python segmentfault issue of plugin api 2.0

* Fix python segmentfault issue of plugin api 2.0

* Revert "Fix python segmentfault issue of plugin api 2.0"

This reverts commit 6f502e5d86.

* Fix onnx_duplicated_output_name due to empty tensor

Co-authored-by: Bell, Song <bell.song@intel.com>

* Remove redundant code

* Remove python segment fault WA

* Keep rt_info to fix test failure in case of legacy public api

* Fix output port names missing issue

* Address some reviewers' comments

* Restore OnnxBackendNodeModelTest::test_maxpool_with_argmax_2d_precomputed_pads_cpu after fixing has been merged

* Resolve tensor sharing issue when output ports have the same name

In some cases a model has 2 or more input/output ports with the same name; they also have the same
precision and partial_shape. Compiled_model will share the same ov::Descriptor::Tensor pointer
and ov::Tensor between such ports.
Solving the Python segment fault issue required creating separated input/output ports, which also
need to handle this shared-tensor case; this patch does that.


* Better method to find shared tensor desc

* rename with snake_case style

* Remove ngraph header files

* Keep external_ptr naming

* Add OPENVINO_SUPPRESS_DEPRECATED for some legacy code

* Use port's tensor_ptr to replace creating new tensor_ptr

* Resolve some reviewer comments

* Implement ov::IInferRequestInternalWrapper::GetPreProcess to recover python GetPreProcess tests

* Remove unnecessary header files reference

* Assert the risk of precision change and reorder at the same time

* Modify legacy python test to fit plugin api 2.0 behavior

* Recover smoke_Transpose(2|4|5|6)D/TransposeLayerTest.CompareWithRefs due to fixing is merged

* Fix typo issue

* Address reviewer's comments

* Disable precision conversion

* Fix error when CpuBlockedMemoryDesc

* Remove precision mismatch WA

* WA precision issue for query_model

* Solve precision mismatch between compiled model and graph

* Fixed failure of query_model

* Rebase to new plugin api update

* Recover the test cases of precision mismatch

* Try to fix name changing for graph model

* Remove test code

* Remove fp64

* Rebase to new plugin api update

* Update for some failure cases

* Fix bert_benchmark failure issue

* Avoid segment fault in arm acl

The legacy public API + CPU plugin API adds a convert op via preprocess by default for unsupported precisions,
but ACLConvertExecutor cannot support dimension > 6, so these tests segfault because the dimension exceeds 6

smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=I8_trgDev=CPU
smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=U8_trgDev=CPU

* Remove precision change from preprocess to avoid ACL unsupport convert dim > 6

* ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert

* Revert "ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert"

This reverts commit fd7a8b35af.

* Revert "Remove precision change from preprocess to avoid ACL unsupport convert dim > 6"

This reverts commit 3c2d9a5f17.

* Debug

* Debug incorrect precision checking issue

* Debug Eltwise FP64 unsupported issue

* Add logs for precision

* debug log

* Update for new dependent PRs merged

* Fix failure caused by preprocess

Fix below failures due to cannot find ops by name
     smoke_LPT/ReduceMaxTransformation.CompareWithRefImpl/f32_[1,3,10,10]_CPU_f32__256*

* Fix build error

* Fix failure caused by missing code during rebase

* Add debug

* Fix unsupported precision issue

* U16/I16/U64 precision support

* Resolve the issue of f64 reorder

Fix below issue:
Cannot create reorder primitive: unsupported reorder case

* Fix convert multiple child edge issue

* Solve ROI tensor failure issues

* Temporarily disable num_nodes comparison

* Only change convert precision for fp64

* Put convert precision change before reorder to avoid confusion

* Add debug log for transformation

* Fix rebase conflict

* Fix clang issue

* Temporarily disable test_infer_mixed_values python test of bf16

* Solve issue of smoke_ConvertCPULayerTest_BOOL_Dynamic_inputPRC=BF16 choosing FP32 primType rather than BF16 primType

* Fix issue of pytorch_tests/test_outer.py

There are 2 output ports with the same port name; they should share the same tensor.

* Fix ARM issue where an Eltwise executor cannot be found

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
reports the error below:
	 [ GENERAL_ERROR ] Supported Eltwise executor is not found
The convert precision needs to be changed to avoid this problem.

* Fix memory overwritten issue

* Temporarily skip arm fp16 SetBlobTest

* Fix compile error after rebase

* Restore smoke_IsOp test due to fixing pr merged

* Fix float to bf16 issue in avx2 isa

* solve onnx test xfail issue

* Skip test cases that ARM Eltwise executor FP16 is not supported

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=BOTH_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL

      [ GENERAL_ERROR ] Supported Eltwise executor is not found

* [CPU] improve reorder to support any precision

* Implement ReorderExecutor

* Fix build error

* Do not cache the executor because its primitive has already been cached

* Insert one convert at most

At most one convert is inserted if needed; if the reorder still cannot be done, an exception is thrown rather than inserting a second convert.
For example, the reorders below will not be supported:
   FP64<->I64/U64/U32
   U32<->I64/U64
   U32<->I16/U16
   FP64<->FP64
   BIN<->BIN

* Only do conversion if the layout is the same

* update for only convert case

* Update for reviewer comments

* update for failure cases

* Address reviewer comments

* Update rebase issue

* minor update

* Solve unsupported precision issue in transformation rather than init_edge

* Remove unnecessary convert in init_edge

* Minor changes

* Update Reorder::reorderData

* Solve issue when only conversion is needed without reorder

* Address reviewer comments

* Address reviewer comments

* Keep exception for unsupported precision

* update

* Revert reorder executor implement

* Solve float->bool issue on transformation pipeline

* Solve I64 is not supported issues

* Solve reviewer's comments

* Fixed dynamic top_k node issue

* Skip nhwc and nChw16c test cases for ConvertLayer

* Update for reviewers' comments

* Fix some failures

* Update for several failure cases

* Update for apiConformanceTests failures

* Fix incorrect node name after import model

* update

* update comments

* Solve issue of smoke_MatMul_NoTranspose and smoke_MatMul_BothTranspose

* Fixed AlignMatMulInputRanks scalar issue

* Address reviewers' comments, remove redundant path in graph.cpp

* Remove test_div_uint8_cpu from xfail_issue_58676

* Solve invalid number of nodes for smoke_Snippets_BroadcastSelect

* ConstantResultSubgraphTest of u16/i16/u32/i64/u64

* restore smoke_SetBlobCPU BOOL tests for arm

* [CPU] Fix ARM precision issue

ARM64 ACL prefers fp16 over fp32, while API 2.0 requires that input/output precisions do not change,
so an fp32 input triggers insertion of a convert node from fp32 to fp16.

* Solve some ARM64 failures

* Fix arm64 InferRequestVariableStateTest tests out of memory issue

ARM64 forces fp16 precision, so state memory can be fp16; the memcpy to state_memory
cannot assume a float (4-byte) element size, or the copy runs out of the memory bounds.

* Skip 2 arm64 tests caused by forcing fp16 precision

* Revert "Fix arm64 InferRequestVariableStateTest tests out of memory issue"

This reverts commit 3e12bd48c2.

* Fix python test_get_profiling_info failure issue

---------

Co-authored-by: Bell, Song <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-11-07 15:25:05 +01:00
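The fp16 state-memory note above (the copy must not assume a 4-byte float element size when ARM64 forces fp16) can be sketched in NumPy. This is a minimal illustration of the pitfall; `copy_state` and the buffer are hypothetical, not OpenVINO code:

```python
import numpy as np

def copy_state(src: np.ndarray, dst_buffer: bytearray) -> None:
    """Copy a state tensor into a raw buffer using its true element size.

    Hard-coding a 4-byte (float32) element size would overrun the
    destination when the state was allocated as fp16 (2 bytes/element).
    """
    nbytes = src.size * src.dtype.itemsize   # dtype-aware byte count
    assert len(dst_buffer) >= nbytes, "destination too small"
    dst_buffer[:nbytes] = src.tobytes()

state = np.arange(8, dtype=np.float16)       # fp16 state: 8 * 2 = 16 bytes
buf = bytearray(state.size * state.dtype.itemsize)
copy_state(state, buf)
```

Sizing the destination as `count * sizeof(float)` here would allocate 32 bytes for a 16-byte payload in one direction, and overrun a 16-byte buffer in the other; `dtype.itemsize` keeps both sides consistent.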
Sebastian Golebiewski
3f7989a817
[DOCS] Fixing link in Get Started article (#20881)
* Updating Get Started section

Addressing JIRA ticket: 124289

* Update get_started.md
2023-11-07 14:37:24 +01:00
Vitaliy Urusovskij
e3d7dffa83
Remove legacy API from FEs (except ONNX) (#20849)
* Remove `ngraph` from PT FE and FE tests utils

* Remove `ngraph` from Paddle FE

* Remove `InferenceEngine` from some ONNX FE test

* Port `generate_embedding.py` to API2.0

* CLangFormat

* Fix comments
2023-11-07 14:01:39 +01:00
Pawel Raasz
95aef4bf51
[core]Migrate Exp operator to new API (#20893)
* Migrate Exp operator to new API

* Add missing includes
2023-11-07 13:43:13 +01:00
Pawel Raasz
e82283cf85
[core]Migrate Mish operator to new API (#20892)
* Migrate Mish operator to new API

* Remove `visit_attributes` as it is the same as the base class

* Refactor Mish reference implementation

* Add cast as the function is generic
- mish calculation is floating-point but the return type can be integral.
2023-11-07 13:05:39 +01:00
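The note above (floating-point calculation, possibly integral return type) can be illustrated with the mish formula, mish(x) = x * tanh(softplus(x)). This is a sketch of the math with hypothetical function names, not the OpenVINO reference implementation:

```python
import math

def mish(x: float) -> float:
    """mish(x) = x * tanh(softplus(x)), computed in floating point."""
    softplus = math.log1p(math.exp(x))  # softplus(x) = ln(1 + e^x)
    return x * math.tanh(softplus)

def mish_int(x: int) -> int:
    # For integral element types the floating-point result is cast back
    # to the integral type, as the commit note describes.
    return int(mish(float(x)))
```

The generic reference computes in float regardless of element type; only the final store casts back.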
Tomasz Jankowski
e8b6e17429
[core] Migrate Softplus operator to new API (#20900)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-07 12:19:34 +01:00
Alina Kladieva
f17f17acc7
Use custom labeler to label changes not matching any pattern (#20888)
Needed for Smart CI (https://github.com/openvinotoolkit/openvino/pull/19825)
2023-11-07 12:16:32 +01:00
Pawel Raasz
368e6bfb8a
Fix constant folding in MulMulMulFusion (#20803)
* Fix constant folding in MulMulMulFusion
by adding f64 precision in Multiply to perform evaluate for const folding

* Do not transform if the input has an unsupported type
2023-11-07 11:57:29 +01:00
Anastasia Kuporosova
4a0098b26a
[PyOV] ngraph linter check update (#20870)
* [PyOV] ngraph linter check update

* Update src/bindings/python/requirements_test.txt

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-11-07 10:42:46 +00:00
Aleksandr Voron
681331d3d7
[CPU] Increase RandomUniform test mean and variance thresholds 2023-11-07 14:37:37 +04:00
Aleksandr Voron
c3948ca799
[nGraph Transformations] NMS convert precisions - change sequence of checks (#20795) 2023-11-07 14:34:12 +04:00
Pawel Raasz
cb53ee5db7
[core]Migrate ReLU operator to new API (#20874)
* Migrate ReLU operator to new API

* Optimize ReLU reference implementation

* Correct define const value in ReLU
2023-11-07 11:29:43 +01:00
Ilya Lavrenov
c0381ab08d
Updated labeler config (#20913) 2023-11-07 14:05:03 +04:00
Maxim Vafin
cdd342ea49
[PT FE] Add ALIKED to model tests (#20899)
* Add ALIKED to model tests

* Update tests/model_hub_tests/torch_tests/test_aliked.py

* Update tests/model_hub_tests/torch_tests/test_aliked.py
2023-11-07 09:34:26 +01:00
Pawel Raasz
8f30470199
[core]Migrate MatMul operator to new API (#20857)
* Migrate MatMul operator to new API

* Correct get shapes references
2023-11-07 09:12:37 +01:00
Maxim Vafin
e976e7b90c
[PT FE] Add tests for Speech-Transformer (#20847)
* Add tests for Speech-Transformer

* Update tests/model_hub_tests/torch_tests/test_speech-transformer.py

* Update tests/model_hub_tests/torch_tests/test_speech-transformer.py
2023-11-07 08:32:22 +01:00
Pawel Raasz
dcdf6750a7
[core]Migrate Sign operator to new API (#20875)
* Migrate Sign operator to new API

* Optimize Sign reference implementation

* Fix code style
2023-11-07 08:31:04 +01:00
Tomasz Jankowski
a304f03852
[core] Migrate Softmax operator to new API (#20894)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-07 08:26:58 +01:00
Roman Kazantsev
494a9cf9a9
[TF FE] Refine tests for complex tensors support (#20905)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-07 10:29:45 +04:00
Tomasz Jankowski
5cd9659033
[core] Migrate HSwish operator to new API (#20854)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-06 20:56:52 +01:00
Anastasiia Pnevskaia
64c21fd6f9
[MO] Fixed MO fallback unit test. (#20868)
* Fixed MO unit test to import paddle conditionally.

* Replace generate with pytest.mark.parametrize.
2023-11-06 22:13:53 +04:00
Karol Blaszczak
3036a3d249
[DOCS] improving the "conversion" section v2 (#20887)
adjustments to conversion and workflow
2023-11-06 18:17:44 +01:00
Roman Kazantsev
d0eb27bd3b
[TF FE] Support Complex Tensors (#20860)
* [TF FE] Support complex tensors

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Align output type for Real and Imag operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update decoding complex types

* Add support for ComplexAbs, FFT and IFFT operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct axes based on a number of inner-most dimensions

* Add layer tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update supported ops documentation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add a comment for ComplexTypeMark

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-06 16:57:05 +04:00
Pawel Raasz
1083b3b58c
[core]Migrate Erf operator to new API (#20867)
* Migrate Erf operator to new API

* Remove `visit_attributes` as it is the same as the base class

* Optimize reference implementation for size
2023-11-06 13:46:03 +01:00
Mikhail Ryzhov
28279013af
aligned lin build timeouts (#20885) 2023-11-06 14:07:24 +04:00
Sebastian Golebiewski
a554611644
Updating notebooks (#20865) 2023-11-06 10:08:58 +01:00
Pawel Raasz
7d74dac3ee
[core]Migrate GridSample operator to new API (#20852)
* Migrate GridSample to new API

* Refactor GridSample to reduce binary size
- use a function pointer instead of std::function (simpler, less code size)
- use RoundingGuard instead of manually setting/restoring the rounding mode
- move interpolate selection outside the main data processing loop
2023-11-06 06:31:02 +01:00
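The "move interpolate selection outside the main data processing loop" point above can be sketched as follows. The 1-D interpolation helpers and names are hypothetical; the real code is C++ and uses raw function pointers for smaller binary size:

```python
def interp_nearest(data, t):
    # clamp to the last valid index
    return data[min(int(round(t)), len(data) - 1)]

def interp_linear(data, t):
    # 1-D stand-in for the real bilinear mode
    i = min(int(t), len(data) - 2)
    frac = t - i
    return data[i] * (1.0 - frac) + data[i + 1] * frac

INTERPOLATORS = {
    "nearest": interp_nearest,
    "bilinear": interp_linear,
}

def sample(data, coords, mode):
    # Select the interpolation function once, before the hot loop,
    # instead of re-dispatching on `mode` for every sample.
    interp = INTERPOLATORS[mode]
    return [interp(data, t) for t in coords]
```

Hoisting the dispatch out of the loop removes a per-element branch (or virtual/std::function call in C++), which is both faster and, with plain function pointers, smaller in code size.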