* extract cpu func tests into a reusable workflow
* add needs
* check paths
* correct paths
* rm usage of setupvars
* extract samples
* use yaml string
* split mac workflow
* check variables
* try github workspace
* check full dir
* set variables
* try with sub directory
* Revert "try with sub directory"
This reverts commit d891ee3f28.
* extract c++ tests from linux
* change
* use cxx workflow in mac workflows
* add noninteractive
* extract cpu func tests
* rm unused
* enable mac
* fix reusable workflow name
* correct path for sparse-checkout
* extract python UT
* remove unnecessary asterisk
* remove another unnecessary asterisk
* use reusable action for linux cc
* add check for setupvars existence
* check with manually installed opencl
* add components input to samples workflow; rm pr triggers for mac workflows
* add missing needs
* add missing option for deps install script
* use disables in tests themselves instead of ifs in workflows
* use reusable workflow for cxx tests in linux arm
* use python reusable workflow in linux arm
* add missing endif
* use self-hosted for samples, add x86_64 constraint for jax
* check paths
* find gomp by partial name
* skip failing tests on arm; correct gomp finding for ovc
* check tests
* add debian packages job; use job_ prefix for reusable workflows with jobs
* extract tf hub model tests
* extract tf model hub perf tests
* extract pytorch models tests
* do not use container on GHA runners
* extract onnx runtime
* add missing deps
* skip test for linux arm
* rm always()s
* fix quotes
* correct paths
* correct ifs, check dir for onnxruntime
* correct path for onnxruntime utils folder; install python3
* use self-hosted as input
* check for self-hosted runner via name, pass version
* skip cpu plugin unittest
* check cxx tests
* rm pr trigger
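Several of the CI entries above ("use disables in tests themselves instead of ifs in workflows", "skip failing tests on arm") move platform gating out of workflow-level `if:` conditions and into the tests. A minimal sketch of that pattern, assuming pytest and a hypothetical test name; the actual skip lists live in the respective test suites:

```python
# Illustrative only: an in-test skip replacing a workflow-level `if:` condition,
# so the same reusable workflow can run on both x86_64 and ARM runners.
import platform

import pytest


@pytest.mark.skipif(platform.machine() in ("arm64", "aarch64"),
                    reason="known failure on ARM runners; tracked separately")
def test_cpu_specific_behaviour():
    # placeholder body; in practice the marker is applied to existing tests
    assert True
```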
* [TF FE] Switch on ConvLSTM2D layer tests in pre-commit
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove extra blank line
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
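For context on the "Switch on ConvLSTM2D layer tests in pre-commit" entry, a hedged sketch of how such a Keras layer test is typically gated; the `precommit` marker name and the shapes are assumptions, not the shipped test:

```python
# Sketch only: gating a Keras ConvLSTM2D layer test for pre-commit runs.
import pytest
import tensorflow as tf


@pytest.mark.precommit
def test_keras_conv_lstm_2d():
    inputs = tf.keras.Input(shape=(4, 8, 8, 3))                              # (time, H, W, C)
    outputs = tf.keras.layers.ConvLSTM2D(filters=2, kernel_size=(3, 3))(inputs)
    model = tf.keras.Model(inputs, outputs)
    assert model.output_shape == (None, 6, 6, 2)
```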
* add arm as a matrix for build job
* uncomment
* comment
* try inside pipeline
* check location
* check other dirs
* try to provide correct action path
* use corrected action
* use newer commit
* use newer commit
* use newer commit
* use newer action commit
* add setting
* rm from pipeline, adapt action itself
* add missing deps
* enable samples and debian jobs
* correct yml
* correct image name
* correct syntax, use self-hosted option
* enable onnx runtime and c++, use newer action
* enable Python and CPU Func tests
* add missing deps for arm64
* increase timeout for python tests
* disable some tests, add more time
* skip failing tests
* skip speech sample test on arm
* dummy change
* skip mxnet mo on arm, run all tests
* rm quotes
* separate linux x86 and arm64 workflows
* rm unused matrix refs, add timeouts
* add skips for c++ tests and some Python tests
* correct cache keys, extend timeout
* skip more python tests
* add more skips: for python and CPU func
* extend cpu func list with skips
* disable cpu func tests and python api 2.0 tests
* rm disable job
* styling, rm pr trigger, rm always(), rm unnecessary changes
* revert
* use ifs instead of comments, provide better wording for skips
* [TF FE] Support different types: u16, u64, u32, boolean
Support constants of different types and dtype attribute values
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Use map instead of unordered_map
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add tests with different types
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
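The "Support different types" commit above teaches the TF frontend to decode u16/u32/u64/boolean constants and dtype attribute values. A small illustrative graph exercising exactly those constant types (not the shipped layer test):

```python
# Sketch: a TF graph containing constants of the newly supported types.
import numpy as np
import tensorflow as tf


def make_graph_def():
    graph = tf.Graph()
    with graph.as_default():  # ops created here are added to `graph`, not run eagerly
        for np_type in (np.uint16, np.uint32, np.uint64, np.bool_):
            tf.constant(np.zeros((2, 3), dtype=np_type),
                        name=f"const_{np.dtype(np_type).name}")
    return graph.as_graph_def()
```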
* Update reverse infer to allow changing a shape if it is already partially defined
* Update tests
* Remove changes in If
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Input/output order Keras tests.
* Added precommit mark.
* Added xfail.
* Small correction.
* Check inputs/outputs by names in FW.
* Moved output order tests to Python API group.
* Corrected comments.
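The input/output-order entries above check model inputs/outputs by framework names rather than by position. A rough sketch of that idea, assuming the `openvino.convert_model` API and `Output.get_names()`; the actual tests live in the Python API test group mentioned above:

```python
# Rough sketch, not the repository test: match converted-model outputs to the
# framework (Keras) output names instead of relying on positional order.
import openvino as ov
import tensorflow as tf


def outputs_match_by_name(keras_model: tf.keras.Model) -> bool:
    ov_model = ov.convert_model(keras_model)
    ov_names = {name for out in ov_model.outputs for name in out.get_names()}
    return all(name in ov_names for name in keras_model.output_names)
```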
* [TF FE] Fix TF1 SSD PPN model conversion
The model contains a case where one Merge node eliminates different conditional flows.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add layer test
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Added experimental ScaledDotProductAttention operation in opset12. Supported in PT FE for aten::scaled_dot_product_attention translation. Decomposed in the common optimizations as a functional reference.
* Better ScaledDotProductAttention
- Moved decomposition to the decomposing transformation
- Implemented more ctors for the op
- Renamed is_causal to causal
- Shape/type inference native code instead of using decomposition
- Moved the op from opset12 to opset13
- Added Python wrapper for ScaledDotProductAttention
* Fix test that counts ops in the opsets
* Update src/core/src/op/scaled_dot_product_attention.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
* Update src/core/src/op/scaled_dot_product_attention.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
* Move ScaledDotProductAttentionDecomposition from fusions to decompositions.
* Remove unused legacy shape inference in ScaledDotProductAttention
* Better namespace usage
* Register all nodes in ScaledDotProductAttentionDecomposition for correct tracking of nodes and running the next matcher passes on all new nodes.
* Don't use register_new_node_
* ScaledDotProductAttention specification (with an extra scale argument)
* Code style fix
* Scale input implementation for ScaledDotProductAttention
* Handle attention_mask=0 case in the op spec
* Better description of scale input
* N->M in scale description
* Code style fix, remove debug print.
* Apply suggestions from code review
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
* Fix for case when is_causal is not passed
* Extended description of ScaledDotProduct op
* Better description in py op wrapper
* Basic shape propagation tests for ScaledDotProductAttention
* Added ScaledDotProductAttention to toc.
* Add op impl check
---------
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
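Since the ScaledDotProductAttention commits above describe a decomposition used as a functional reference (optional `scale`, a `causal` flag, and an attention mask), here is a minimal NumPy sketch of that math. It illustrates the reference semantics only; it is not the operator's shape/type-inference code, and the boolean-mask and attention_mask=0 corner cases discussed in the spec are not reproduced:

```python
# NumPy sketch of scaled dot-product attention as described above:
# softmax(Q @ K^T * scale + mask) @ V, with optional causal masking.
import numpy as np


def sdpa_reference(query, key, value, attn_mask=None, causal=False, scale=None):
    if scale is None:
        scale = 1.0 / np.sqrt(query.shape[-1])           # default 1/sqrt(head_size)
    scores = query @ np.swapaxes(key, -1, -2) * scale    # [..., L, S]
    if causal:
        L, S = scores.shape[-2:]
        keep = np.tril(np.ones((L, S), dtype=bool))      # lower-triangular mask
        scores = np.where(keep, scores, -np.inf)
    if attn_mask is not None:
        scores = scores + attn_mask                      # additive mask variant only
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value
```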
* [TF FE] Fix conversion of TF1 OD models out-of-the-box
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add test While with nested If operation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update tests/layer_tests/tensorflow_tests/test_tf_While.py
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* [TF FE] Support complex tensors
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Align output type for Real and Imag operations
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update decoding complex types
* Add support for ComplexAbs, FFT and IFFT operations
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Correct axes based on the number of inner-most dimensions
* Add layer tests
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update supported ops documentation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add a comment for ComplexTypeMark
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
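To make the complex-tensor entries above concrete, a small TF snippet (not taken from the repo or its tests) using the operations the commits add support for: Complex via tf.complex, Real/Imag, ComplexAbs via tf.abs on a complex tensor, and FFT/IFFT over the inner-most dimension:

```python
# Sketch of the complex-tensor op patterns targeted above.
import tensorflow as tf


@tf.function
def complex_pipeline(real_part, imag_part):
    x = tf.complex(real_part, imag_part)       # Complex (complex64 from float32 inputs)
    spectrum = tf.signal.fft(x)                # FFT over the inner-most dimension
    restored = tf.signal.ifft(spectrum)        # IFFT
    return tf.math.real(restored), tf.math.imag(restored), tf.abs(x)  # Real, Imag, ComplexAbs


# Example: complex_pipeline(tf.random.normal([8]), tf.random.normal([8]))
```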
* [TF FE] Switch off TF1 While support completely
This is a complete switch-off due to a GPU limitation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Need an additional fallback in Enter to avoid shape problems
* Disable tests with While op
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Disable layer test for TF1 While
* Remove extra spaces
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* [TF FE] Fix body graph injection, CumSum and SparseFillEmptyRows
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Do not handle non-parameters in body
* Update layer test to cover default parameter and attribute values
* Fix layer tests
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>