Andrei Kochin
663bf04208
Revert Bitwise ops in PyTorch FE ( #20813 )
* revert bitwise op for PT FE
* revert coverity fixes
2023-11-02 13:08:55 +00:00
Roman Kazantsev
38b6092120
[TF FE] Switch off TF1 While support totally ( #20774 )
* [TF FE] Switch off TF1 While support totally
This is a complete switch-off due to a GPU limitation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Need additional fallback in Enter to avoid shapes problem
* Disable tests with While op
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Disable layer test for TF1 While
* Remove extra spaces
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
2023-10-31 12:46:36 +04:00
rsato10
53820c0cf2
[TF FE]Support Inv operation for TensorFlow models ( #20720 )
* [TF FE]Support Inv operation for TensorFlow models
* added test tests/layer_tests/tensorflow_tests/test_tf_Inv.py and src/frontends/tensorflow_common/src/op/inv.cpp
* Update tests/layer_tests/tensorflow_tests/test_tf_Inv.py
* Update tests/layer_tests/tensorflow_tests/test_tf_Inv.py
* Update tests/layer_tests/tensorflow_tests/test_tf_Inv.py
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com >
2023-10-31 11:39:16 +04:00
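The Inv entry above maps TensorFlow's elementwise reciprocal. A minimal pure-Python sketch of the semantics the translator has to reproduce (the function name is illustrative, not the frontend's code):

```python
# Illustrative sketch: TF's Inv computes the elementwise reciprocal, 1 / x.
def inv(values):
    return [1.0 / v for v in values]

print(inv([2.0, 0.5, 4.0]))  # [0.5, 2.0, 0.25]
```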
Roman Kazantsev
fc4fe07a0e
[TF FE] Fix CTCLoss translator ( #20775 )
* Fix CTCLoss translator
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Expand layer tests for CTCLoss
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
2023-10-31 08:51:18 +04:00
Mateusz Mikolajczyk
ce8ac6f478
[Opset13][TF FE] Enable tensorflow bitwise operators ( #20340 )
* Add opset-13 bitwise ops
* Fix issue in BinaryOps test
---------
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
2023-10-30 23:18:28 +00:00
Roman Kazantsev
a4c47bf6ab
[TF FE] Fix body graph injection, CumSum and SparseFillEmptyRows ( #20680 )
* [TF FE] Fix body graph injection, CumSum and SparseFillEmptyRows
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Do not handle non-parameters in body
* Update layer test to cover default parameter and attribute values
* Fix layer tests
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
2023-10-30 18:40:09 +04:00
Mateusz Mikolajczyk
fdb22c8610
[Opset13][PT FE] Update torch bitwise operators ( #20339 )
* Add opset-13 bitwise implementation
* Improvements in test
* Add transformation BitwiseOps->LogicalOps for bool
* Improve existing tests to better test dtypes
* Disable transformations for supported bitwise ops
* Improve bitwise test inputs
* Update src/common/transformations/src/transformations/op_conversions/convert_bitwise_to_logical_bool.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
* Update src/common/transformations/src/transformations/op_conversions/convert_bitwise_to_logical_bool.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
* Update src/common/transformations/src/transformations/op_conversions/convert_bitwise_to_logical_bool.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
* Update src/common/transformations/src/transformations/op_conversions/convert_bitwise_to_logical_bool.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
* Update to REGISTER_PASS
---------
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com >
2023-10-30 13:11:14 +00:00
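The BitwiseOps->LogicalOps transformation mentioned above relies on the fact that, on boolean operands, the bitwise operators coincide with their logical counterparts. A minimal Python sketch of that equivalence (names are illustrative, not the transformation's code):

```python
# For boolean operands, bitwise AND/OR/XOR agree with logical AND/OR/XOR,
# which is what makes a BitwiseOps -> LogicalOps conversion safe for bool.
def bitwise_equals_logical(a: bool, b: bool) -> bool:
    return (
        (a & b) == (a and b)
        and (a | b) == (a or b)
        and (a ^ b) == (a != b)
    )

assert all(bitwise_equals_logical(a, b) for a in (False, True) for b in (False, True))
```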
Maxim Vafin
7b9db3d81b
[PT FE] Add torch.int16 dtype support ( #20735 )
* Add torch.int16 dtype support
* Add test
2023-10-30 10:12:55 +01:00
Ekaterina Aidova
53c9a0f3d4
update pytorch layer tests for torch 2.1 compatibility ( #20264 )
* update pytorch layer tests for torch 2.1 compatibility
2023-10-30 08:30:01 +04:00
Siddhant Chauhan
ae15f35f07
[PT FE] Add aten::is_nonzero ( #20589 )
* Add is_nonzero operator and test
* fix
* Update is_nonzero.cpp
* Update is_nonzero.cpp
* requested changes
* Update is_nonzero.cpp
* Update is_nonzero.cpp
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-27 15:04:57 +04:00
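For reference, `torch.is_nonzero` is only defined for single-element tensors; a hypothetical pure-Python model of the contract the aten::is_nonzero translator targets (list-based stand-in for a tensor, not the frontend's actual code):

```python
# Sketch: a single-element tensor maps to the truth value of its element;
# PyTorch raises for tensors with more than one element.
def is_nonzero(values):
    if len(values) != 1:
        raise RuntimeError("bool value of Tensor with more than one element is ambiguous")
    return bool(values[0] != 0)

print(is_nonzero([3]))    # True
print(is_nonzero([0.0]))  # False
```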
Maxim Vafin
b06a0010ea
[PT FE] Disable failing pytorch layer test ( #20719 )
* [PT FE] Disable test
* Update tests/layer_tests/pytorch_tests/test_convnd.py
2023-10-27 07:54:30 +04:00
Maxim Vafin
5b8433ffbe
[PT FE] Fix issue with adding Result to mutated tensor ( #20690 )
* [PT FE] Fix issue with adding Result to mutated tensor
* Add test
2023-10-26 22:25:28 +02:00
Karan Jakhar
26632d1cd9
[PT FE] Add aten::__xor__ ( #20662 )
* Add __xor__
* Add xor tests
* add more xfail tests
* Update src/frontends/pytorch/src/op_table.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Update src/frontends/pytorch/src/op_table.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* fix code style
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-26 18:28:47 +04:00
Mikhail Ryzhov
4078bd9c19
[GHA] Speed up PyTorch Layer unit tests ( #20613 )
* test
* fixed tests
* typo
* fixed tests
* rest of the tests
* fixed rsub test
* tmp fix
* Revert "tmp fix"
This reverts commit b8bf1e9492e13497895da488612c9a137ef840bc.
* fixed test params
* reset thirdparty/pugixml
* Revert "fixed rsub test"
This reverts commit 9b6be34b8666936e8124b6622fcc5185b640de92.
* fixed typo
* fixed test data
* reset test_rsub
* removed unused param
* reverted runner
* simplified call
* fixed random
* changed logical to auto mode
* Revert "fixed random"
This reverts commit 8a4f20b24641144f823a7e1f1ff92038634acf32.
* fixed test_all
* replaced random_sample with randn
* fixed rebase issue
* reverted logical splitting
* Update tests/layer_tests/pytorch_tests/test_repeat_interleave.py
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Update tests/layer_tests/pytorch_tests/test_all.py
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Apply suggestions from code review
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* fixed merge conflict
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-26 13:10:51 +04:00
Siddhant Chauhan
bc463e886b
[PT FE] Add aten::log10 ( #20621 )
* Add log10 operator and test
* fix
* Update test_log.py
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-25 14:14:22 +04:00
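aten::log10 is the elementwise base-10 logarithm; a minimal stdlib sketch of the semantics (the helper name is illustrative):

```python
import math

# Sketch: aten::log10 maps elementwise to the base-10 logarithm.
def log10_list(values):
    return [math.log10(v) for v in values]

print(log10_list([1.0, 10.0, 100.0]))  # [0.0, 1.0, 2.0]
```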
Maxim Vafin
8d0381b0fe
[PT FE] Implement custom op for types alignment ( #20431 )
* [PT FE] Implement custom op for types alignment
* Fix code style
* Fix inplace ops
* Fix layer tests
* Remove no longer needed change
* Fix ovc tests
* Fix fe tests
2023-10-23 22:54:08 +02:00
Roman Kazantsev
009ef5657c
[TF FE] Provide full support of TF1 Control flow and TensorArray* ops ( #20270 )
* [TF FE] Provide full support of TF1 Control flow and TensorArray ops
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Add missed header for TensorArrayV3 op
* Temporarily disable GRU cell fusion
* Update src/common/transformations/src/transformations/common_optimizations/moc_transformations.cpp
* Fix a case when element_shape for TensorArrayV3
* Fix translator for TensorArrayCloseV3
* Update summarize graph with TensorArrayCloseV3
* Add layer tests for TensorArrayScatterV3, Close, Size, Array
* Fix output shape for Merge node
* Remove unused variable
* Fix translator for TensorArrayConcatV3
* Fix translator for TensorArrayConcatV3
* Add layer tests for TensorArrayWriteV3, Gather, and Concat
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Add translator for GatherTree
* Fix TF FE unit-test for GatherTree
* Fix GatherTree translator
* Fix GatherTree translator to handle 1d end_token
* Fix undeclared parameter issue
* Fix GatherTree unit-test
* Add TensorArrayV3Replacer transformation
* Temporarily disable dangling transformation
* Recover RemoveMultiSubGraphOpDanglingParamsResults transformation
* Recover GRUCellFusion transformation
* Simplify check for GRUCellFusion transformation
* Use proper name for unit-tests
* Simplify translator for TensorArrayWriteV3
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Fix RemoveMultiSubgraphOpDanglingParamsResults transformation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Additional fix for remove_multi_subgraph_op_dangling_params
* Make static TI run a dynamic subgraph
* Dedicated SL test
* Change condition to respect stat shapes
* Adjust test to cover the code path properly
* Recover fallback for still failing case GNMT
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com >
2023-10-23 22:50:26 +02:00
Andrey Kashchikhin
b67cff7cd5
[CI] [GHA] Introduce macOS ARM64 as a matrix parameter in the macOS pipeline ( #20363 )
* add m1 mac pipelines as a matrix parameter
* Update mac.yml
disable java_api because of macos arm64 - Java is not available on macOS arm64 runners
* Update mac.yml
added always condition for all tests
* Update mac.yml
* Update mac.yml
* Update mac.yml
* Update setup.py
temp commit
* Update tools/openvino_dev/setup.py
* use matrix for var
* add mxnet to extras only for x86_64
* skip failing tests
* use xfail for Python tests; add missing filter for transformations tests
* skip CPU func tests on x86_64 mac; skip some tests from CPU func tests on arm mac
* Update mac.yml
* skip tests on mac arm
* skip tests on darwin; apply review
* add more skips for python and c++ tests
* skip tf tests
* skip more tf tests; skip more Python UT stages
* rm alwayses, rm triggers, add nightly trigger
---------
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com >
2023-10-23 15:06:22 +04:00
Mateusz Mikolajczyk
891f79ac84
[PT FE] Add aten::as_strided ( #19482 )
* Add aten::as_strided
* rm commented code
* Update src/frontends/pytorch/src/op/as_strided.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Update src/frontends/pytorch/src/op/as_strided.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Fix CI error
* Fix CI issues
* mark_node for remaining constants
* Add test reproducing issue
* Use strides from torchscript
* Add led model to test suite
* Add sugested changes
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-20 14:24:10 +04:00
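aten::as_strided reinterprets a flat buffer through explicit strides and an offset. A hypothetical 2-D-only model of that addressing rule (names and the list-based buffer are illustrative, not the frontend's implementation):

```python
# Sketch of as_strided over a flat buffer, 2-D case only:
# output[i][j] = flat[offset + i * strides[0] + j * strides[1]].
def as_strided_2d(flat, size, strides, offset=0):
    rows, cols = size
    return [
        [flat[offset + i * strides[0] + j * strides[1]] for j in range(cols)]
        for i in range(rows)
    ]

flat = list(range(9))  # a 3x3 tensor stored row-major
view = as_strided_2d(flat, (2, 2), (3, 1), offset=1)
print(view)  # [[1, 2], [4, 5]]
```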
rsato10
9edbcb1d4d
[TF FE] Support ToBool operation ( #20511 )
* [TF FE][TF Hub] Support ToBool operations
* [TF FE][TF Hub] Support ToBool operations
* fix Select operation to support ToBool operations for TF Hub models
* added false and true const for tobool operations
* added reduction axes
* Apply suggestions from code review
* Update tests/layer_tests/tensorflow_tests/test_tf_ToBool.py
* Update tests/layer_tests/tensorflow_tests/test_tf_ToBool.py
* Update tests/layer_tests/tensorflow_tests/test_tf_ToBool.py
* Update src/frontends/tensorflow_common/src/op/tobool.cpp
* added second zero constant
* added correct types in src/frontends/tensorflow_common/src/op/tobool.cpp
* added includes in src/frontends/tensorflow_common/src/op/tobool.cpp
* Update src/frontends/tensorflow_common/src/op/tobool.cpp
* remove select and not_equal src/frontends/tensorflow_common/src/op/tobool.cpp
* Apply suggestions from code review
* Update src/frontends/tensorflow_common/src/op/tobool.cpp
* Apply suggestions from code review
* Update src/frontends/tensorflow_common/src/op/tobool.cpp
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com >
2023-10-20 14:22:30 +04:00
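As commonly described for TF, ToBool maps a rank-0 input to "element != 0" and a higher-rank input to "tensor is non-empty". A hedged pure-Python sketch of that rule (the list-based stand-in for a tensor is illustrative):

```python
# Sketch of ToBool: scalars test against zero, higher-rank tensors test
# whether they contain any elements at all.
def to_bool(x):
    if not isinstance(x, list):  # treat a non-list as a rank-0 scalar
        return x != 0
    return len(x) > 0            # rank > 0: non-empty?

print(to_bool(0))       # False
print(to_bool(2.5))     # True
print(to_bool([]))      # False
print(to_bool([0, 0]))  # True
```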
Siddhant Chauhan
ec2ae003aa
[TF FE][TF Hub] Support TruncateDiv operation ( #20615 )
* [TF FE][TF Hub] Support TruncateDiv operation
* [TF FE][TF Hub] Support TruncateDiv operation
* Update src/frontends/tensorflow_common/src/op/truncate_div.cpp
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com >
2023-10-20 01:58:58 +04:00
Siddhant Chauhan
070678fc19
[TF FE][TF Hub] Support TruncateMod operation ( #20468 )
* [TF FE][TF Hub] Support TruncateMod operation
* Update truncate_mod.cpp
* fix
2023-10-19 22:40:38 +04:00
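The TruncateDiv and TruncateMod entries above both implement C-style truncation toward zero, which differs from Python's floor-based `//` and `%` for mixed-sign operands. A minimal stdlib sketch of the distinction (helper names are illustrative):

```python
import math

# Sketch: TruncateDiv rounds the quotient toward zero (C semantics), and
# TruncateMod is the matching remainder, carrying the sign of the dividend.
def truncate_div(x, y):
    return math.trunc(x / y)

def truncate_mod(x, y):
    return x - truncate_div(x, y) * y

print(-7 // 2, -7 % 2)                           # -4 1  (Python floor semantics)
print(truncate_div(-7, 2), truncate_mod(-7, 2))  # -3 -1 (truncate semantics)
```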
Mustafa Cavus
3d5fe8d446
LLM and SD additional ops ( #20435 )
* TorchFX: New ops added (baddbbmm, leaky_relu_)
* TorchFX: Initial scaled_dot_product_flash_attention
* Code formatting: scaled_dot_product_attention translation
* TorchFX unit test enabled for SDPA
* Typo fix in comment line
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-19 21:21:28 +04:00
Ekaterina Aidova
222fbb1aec
[PT FE]: support aten::fill_diagonal_, aten::fill ( #20395 )
* [PT FE]: support aten::fill_diagonal_, aten::fill
* remove xfail
* Update src/frontends/pytorch/src/op/full.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
* Update tests/model_hub_tests/torch_tests/test_hf_transformers.py
---------
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com >
2023-10-18 10:58:54 +02:00
Siddhant Chauhan
a30e25c725
[TF FE][TF Hub] Support BatchMatMulV3 operation ( #20528 )
* [TF FE][TF Hub] Support BatchMatMulV3 operation
* Update src/frontends/tensorflow_common/src/op/matmul.cpp
* Update src/frontends/tensorflow_common/src/op/matmul.cpp
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com >
2023-10-18 09:49:33 +04:00
Siddhant Chauhan
07a29f80b4
[TF FE][TF Hub] Support Xlog1py operation ( #20500 )
* [TF FE][TF Hub] Support Xlog1py operation
* Update test_tf_Xlog1py.py
2023-10-17 11:36:13 +04:00
Siddhant Chauhan
a5b5623ece
[TF FE][TF Hub] Support Xlogy operation ( #20467 )
* [TF FE][TF Hub] Support Xlogy operation
* fix
* fix
* fix
* fix
* Update tests/layer_tests/tensorflow_tests/test_tf_Xlogy.py
* Update tests/layer_tests/tensorflow_tests/test_tf_Xlogy.py
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com >
2023-10-16 15:52:30 +04:00
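The Xlog1py and Xlogy entries above share one contract: both return exactly 0 when x == 0, even where log(y) alone would be -inf or NaN, which is why they exist as dedicated ops rather than a plain multiply-by-log composition. A stdlib sketch (helper names are illustrative):

```python
import math

# Sketch: Xlogy is x * log(y) and Xlog1py is x * log1p(y),
# both defined to return 0 whenever x == 0.
def xlogy(x, y):
    return 0.0 if x == 0 else x * math.log(y)

def xlog1py(x, y):
    return 0.0 if x == 0 else x * math.log1p(y)

print(xlogy(0.0, 0.0))     # 0.0 (a plain 0 * log(0) would be NaN)
print(xlogy(2.0, math.e))  # ~2.0
print(xlog1py(3.0, 0.0))   # 0.0 (log1p(0) == 0)
```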
Andrey Kashchikhin
2e33019e68
[CI] [GHA] Align pipelines ( #20388 )
* add ONNX layer tests
* add parallelism for pytest; add missing stages to win; align execution steps
* add missing shell option
* add missing dep, update marker
* align stages and execution commands; rm parallelization
* enable trigger
* add missing export; rm not-applicable stage from mac and win pipeline
* add missing requirement, rm not-applicable stage
* add missing test parameters
* try to pin onnxruntime version; skip mxnet on mac; correct vars in win
* rm constraint
* skip on win
* use xfail
* remove always(), rm trigger for mac
* return push trigger for mac
2023-10-13 23:48:15 +04:00
Ekaterina Aidova
9bedafb560
[PT FE]: support aten::erf and aten::adaptive_avg_pool1d ( #20350 )
* [PT FE]: support aten::erf and aten::adaptive_avg_pool1d
* align adaptive avg pools for different sizes
* refactor adaptive max pool
2023-10-11 17:33:32 +04:00
Roman Kazantsev
0bb6450398
[TF FE] Support TF 2.14 and add OnesLike translator ( #20385 )
* [TF FE] Support TF 2.14 and add OnesLike translator
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Update tests constraints
* Update open_model_zoo
* Adopt TF Lite test to 2.14 TF
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Support TF Lite layer tests for different TF versions
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
2023-10-11 15:24:32 +04:00
Ilya Lavrenov
35308ce34d
Use np.float32 instead of np.float ( #20377 )
2023-10-11 03:29:56 +04:00
Ilya Lavrenov
b3ead62631
Fixed numpy deprecation error ( #20375 )
2023-10-11 00:38:23 +04:00
Ekaterina Aidova
0dcde7f7bc
[PT FE]: support aten::pixel_unshuffle ( #20325 )
2023-10-10 15:18:35 +04:00
Ekaterina Aidova
a5b6606132
[PT FE]: support aten::amax, aten::amin, aten::clip, aten::clamp_ ( #20338 )
2023-10-10 11:05:10 +00:00
Andrey Kashchikhin
1454e77bbf
[CI] [GHA] Introduce GHA macOS Pipeline ( #20212 )
* start transferring
* start with samples
* start with initial two stages
* change name
* skip pytorch tests; rm unused comments
* rm setupvars sourcing; make test steps similar to those in linux pipeline
* add missing options and setupvars sourcing
* add skips for mac
* install wheels directly
* add deployment target
* add skips for pytorch layer tests; experiment with samples
* do not exclude files for archives; set rpath
* apply comments; rm unnecessary stages
* Update mac.yml
fixed MO Python API tests
* Update .github/workflows/mac.yml
* Update openvino.cmake
add LC_RPATH to libopenvino.dylib
* Update src/cmake/openvino.cmake
* Update CMakeLists.txt
reverted changes in samples build
* Update openvino.cmake
removed rpath changes
* add setupvars
* disable pr trigger
---------
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com >
2023-10-10 14:43:09 +04:00
Ekaterina Aidova
67a62186ee
support aten::channel_shuffle ( #20240 )
* support aten::channel_shuffle
* remove getting rank
2023-10-10 08:16:26 +02:00
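aten::channel_shuffle views the C channels as (groups, C // groups), transposes, and flattens back. A hypothetical pure-Python model over a list of channel indices (not the frontend's code):

```python
# Sketch: channel_shuffle permutes channels by viewing them as
# (groups, C // groups), transposing, and flattening.
def channel_shuffle(channels, groups):
    c = len(channels)
    assert c % groups == 0
    per_group = c // groups
    return [channels[g * per_group + i] for i in range(per_group) for g in range(groups)]

print(channel_shuffle([0, 1, 2, 3, 4, 5], groups=2))  # [0, 3, 1, 4, 2, 5]
```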
Maxim Vafin
feaf05cc5f
[PT FE] Support aten::max_poolnd_with_indices ( #20322 )
2023-10-10 08:00:02 +02:00
Ekaterina Aidova
bb2c2fab6c
[PT FE]: support aten::log1p, fixes for where and linalg_norm ( #20167 )
* [PT FE]: support aten::log1p, fixes for where and linalg_norm
* clarify norm behaviour
2023-10-06 08:26:12 +00:00
Maxim Vafin
35e72251e9
[PT FE] Add support for aten::numpy_T and aten::feature_dropout ( #20136 )
* Add support for aten::numpy_t and aten::feature_dropout
* Update tests/layer_tests/pytorch_tests/test_transpose.py
Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com >
---------
Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com >
2023-10-03 09:52:29 +00:00
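aten::numpy_T reverses the dimension order (a generalized transpose); for the 2-D case that is the familiar matrix transpose. A minimal sketch (the helper name is illustrative):

```python
# Sketch of the 2-D case of numpy_T: swap rows and columns.
def transpose_2d(m):
    return [list(row) for row in zip(*m)]

print(transpose_2d([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```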
Roman Kazantsev
b409ea1930
[TF FE] Support TF1 While Control flow ( #20105 )
* [TF FE] Support TF1 While Control flow
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
* Apply code-style fix
* Update API for OpPlace to store back edge
* Fix build: do not pass rvalue by reference
* Fix build issue: correct type
* Fix TF FE unit-tests
* Apply code-review feedback: remove unused vars
* Fix fusing complicated case of TF1 While
* Remove unused variable
* Update MO unit test
* Fix layer tests for While
* Handle Switch and NextIteration nodes connected directly
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com >
2023-10-02 09:56:10 +04:00
Maxim Vafin
f38b5f4f06
[PT FE] Support moving TupleConstruct inside If body ( #20081 )
* Support moving TupleConstruct inside If body
* Fix win build
---------
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com >
2023-09-28 23:11:09 +02:00
Maxim Vafin
84d98d8bf7
[PT FE] Add support for aten::pixel_shuffle ( #20124 )
* [PT FE] Add support for aten::pixel_shuffle
* Add comments
* Update src/frontends/pytorch/src/op/pixel_shuffle.cpp
2023-09-28 19:09:54 +02:00
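aten::pixel_shuffle rearranges channels of shape (C * r * r, H, W) into (C, H * r, W * r). A rough pure-Python model for one image, using nested lists as a stand-in for a tensor (illustrative only):

```python
# Sketch: out[c][h*r + i][w*r + j] = x[c*r*r + i*r + j][h][w]
def pixel_shuffle(x, r):
    h, w = len(x[0]), len(x[0][0])
    c = len(x) // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(c):
        for i in range(r):
            for j in range(r):
                for y in range(h):
                    for z in range(w):
                        out[ch][y * r + i][z * r + j] = x[ch * r * r + i * r + j][y][z]
    return out

x = [[[1]], [[2]], [[3]], [[4]]]  # 4 channels of 1x1
print(pixel_shuffle(x, 2))        # [[[1, 2], [3, 4]]]
```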
Maxim Vafin
fea6db1a5f
[PT FE] Fix issue when cat input is folded to tensor ( #20090 )
* [PT FE] Fix issue when cat input is folded to tensor
* CHeck real first input
* Update src/frontends/pytorch/src/op/cat.cpp
2023-09-28 10:52:09 +02:00
Anastasiia Pnevskaia
2bbfe7b44d
Added support of shapes and types from original FW in ov.convert_model() ( #20009 )
* Added support of shapes and types from paddle, torch and tf.
* Removed changes from requirements.
* Corrected test.
* Moved helper methods to utils.
* Separated tests by frameworks.
* Removed changes from complex_params test.
2023-09-26 17:41:01 +04:00
Ekaterina Aidova
c76475288b
[PT FE]: support aten::scatter_reduce and extend aten::scatter ( #19980 )
* [PT FE]: support aten::scatter_reduce and extend aten::scatter out arg support
* create mapping inside function
2023-09-22 15:05:33 +04:00
Pavel Esir
9271b79540
[OVC] do not parse inputs for py_api ( #19742 )
* [OVC] do not parse inputs
* fix unit-tests
* remove redundant lines, add test case
* add one more unit-test
* skip None values
* replace str with List in test_mo_import_from_memory
* corrected type hints, added a safety assert
2023-09-22 15:05:21 +04:00
Ekaterina Aidova
26d18c924b
[PT FE]: support aten::broadcast_tensors ( #19994 )
* broadcast tensors
* [PT FE]: support aten::broadcast_tensors
* apply review comments
* remove add
2023-09-22 13:54:44 +04:00
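aten::broadcast_tensors follows the NumPy/PyTorch broadcasting rule: align shapes from the right; each pair of dimensions must be equal or contain a 1, and the result takes the larger of the two. A stdlib sketch of the shape computation (the function name is illustrative):

```python
# Sketch: compute the broadcast result shape of two shapes, or raise
# if they are not broadcastable.
def broadcast_shape(a, b):
    a, b = tuple(a), tuple(b)
    a = (1,) * (len(b) - len(a)) + a  # left-pad the shorter shape with 1s
    b = (1,) * (len(a) - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes are not broadcastable: {x} vs {y}")
        out.append(max(x, y))
    return tuple(out)

print(broadcast_shape((3, 1), (1, 4)))   # (3, 4)
print(broadcast_shape((5,), (2, 1, 5)))  # (2, 1, 5)
```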
Ekaterina Aidova
8d59fcd34f
[PT FE]: extend logical operations support ( #19981 )
* [PT FE]: extend logical operations support
* tests
* more tests
2023-09-22 10:11:36 +04:00
Ekaterina Aidova
fde054e4a6
[PT FE]: support aten::minimum aten::maximum ( #19996 )
2023-09-22 09:30:57 +04:00
Maxim Vafin
058b45e608
[PT FE] Fix aten::repeat regression ( #19991 )
* Revert "[PT FE] Simplify repeat operation (#19926 )"
This reverts commit f926e0e392 .
* Fix aten::repeat regression
* Simplify
* Update src/frontends/pytorch/src/op_table.cpp
* Add impacted model
2023-09-21 23:58:09 +02:00