* Input/output order Keras tests.
* Added precommit mark.
* Added xfail.
* Small correction.
* Check inputs/outputs by names in FW.
* Moved output order tests to Python API group.
* Corrected comments.
* [GPU] Support LSTMSequence w/ -1 seq_length
Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
Co-authored-by: Andrew Park <andrew.park@intel.com>
* Fix GetInputInfo to retrieve input pid from LSTMCell
* Use ov::PartialShape instead of cldnn::tensor in LSTMCell
* implement lstm_elt_inst::calc_output_layouts
* implement lstm_elt_impl::static_canonicalize_shapes
* Add functional tests
* Fix unit test failure
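The commits above implement `calc_output_layouts` so that a dynamic (-1) sequence length propagates through shape inference instead of failing. A minimal sketch of the idea, using plain Python lists as a stand-in for `ov::PartialShape` (the helper name and layouts follow the documented LSTMSequence output shapes; this is not the actual plugin code):

```python
# Hypothetical sketch of LSTMSequence output-shape calculation with a
# dynamic sequence-length dimension, where -1 means "dynamic".

def lstm_sequence_output_shapes(x_shape, hidden_size, num_directions=1):
    """x_shape: [batch, seq_len, input_size]; seq_len may be -1 (dynamic)."""
    batch, seq_len, _ = x_shape
    y = [batch, num_directions, seq_len, hidden_size]  # full output sequence
    ho = [batch, num_directions, hidden_size]          # last hidden state
    co = [batch, num_directions, hidden_size]          # last cell state
    return y, ho, co

# A dynamic seq_len is carried through rather than fixed at build time.
y, ho, co = lstm_sequence_output_shapes([8, -1, 64], hidden_size=128)
```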
---------
Co-authored-by: Andrew Park <andrew.park@intel.com>
* [TF FE] Fix TF1 SSD PPN model conversion
The model contains a case where one Merge node eliminates different conditional flows.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add layer test
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Migrate Constant operator to new API
- refactor to reduce binary size
* Fix code style
* Fix build issues
* Apply corrections after review:
- Restore mem_size calculation for bit widths >= 8
- Remove element type helper functions
* Use float cast for floating types except f64
* Try using a custom action directly from repo
* Run smart CI under ubuntu-latest
* Set output + add a sample step
* Update linux.yml
* Add components.yml
* Add some conditions
* Just to check if a reference to "needs" works in job context
* Update linux.yml
* More example cases
* Dummy change to CPU
* Fix typo
* Fix SAMPLES_AFFECTED variable
* Use more correct dependents key
* Fighting with messy GHA conditions
* No brackets and no double quotes in conditions
* Revert "Dummy change to CPU"
This reverts commit 4eae09e5b5.
* Use refactored action
* Move action implementation to openvino repo
* Extend components.yml config
* Update labeler.yml
* Dummy change to TF FE
* Fix indentation
* Add missing needs
* Add missing records
* Allow missing records for components in validation
* install_openvino_dependencies as a separate step for Python_Unit_Tests
* Improve config validation
* Revert "Dummy change to TF FE"
This reverts commit 01190864d1.
* Dummy change to model hub tests
* Update CPU component config
* Dummy change to Python API
* Dummy change to Python API
* Revert "Dummy change to Python API"
This reverts commit 3fce0bb3fb.
* Dummy change to Python API
* Simplify conditions. Cover "no components changed" case
* Update components.yml
* Update .gitignore
* Revert "Dummy change to Python API"
This reverts commit e57ea9852c.
* Fix dependencies scopes
* Add simple unit tests for smart ci functionality
* Revert "Dummy change to model hub tests"
This reverts commit c3d6837e22.
* Use ghapi module with permissive license
* Cover install_build_dependencies.sh script by labeler
* More labels
* Use ghapi. Apply review comments
* Enable dot files to be matched by labeler
* Warn instead of error in artifact upload when smart CI is enabled
* Fix master merge
* Fix condition for TF FE common tests
* Fix condition for Pytorch FE tests
* Remove condition for pytorch model tests
* Allow any label as a component
* Refactor tests log handling
* Allow any defined label as a component
* Rearrange config structure. Fill the config with actual data
* Run full scope on changes to non-matching files
* Add missing conditions
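The smart-CI commits above boil down to one rule set: map changed components to the validation scope via a dependents config, and fall back to the full scope when changed files match no known component. An illustrative sketch under assumed names (`COMPONENTS`, `affected_scope`, and the component labels are hypothetical, not the actual OpenVINO implementation):

```python
# Hypothetical model of smart-CI scope selection: each component lists
# the dependent components that must also be validated when it changes.

FULL_SCOPE = {"CPU", "GPU", "Python_API", "TF_FE", "PyTorch_FE"}

COMPONENTS = {
    # component -> dependents to validate alongside it
    "CPU": {"Python_API"},
    "TF_FE": {"Python_API"},
    "Python_API": set(),
}

def affected_scope(changed):
    # Changed files matched no component -> run the full scope
    # ("Run full scope on changes to non-matching files" above).
    if not changed:
        return set(FULL_SCOPE)
    scope = set(changed)
    for component in changed:
        # Unknown labels conservatively widen to dependents we know about.
        scope |= COMPONENTS.get(component, set())
    return scope
```

Each workflow job then gets an `if:` condition checking whether its component is in the computed scope, which is where the "messy GHA conditions" commits above came from.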
---------
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* Added experimental ScaledDotProductAttention operation in opset12. Supported in PT FE for aten::scaled_dot_product_attention translation. Decomposed in the common optimizations as a functional reference.
* Better ScaledDotProductAttention
- Moved decomposition to the decomposing transformation
- Implemented more ctors for the op
- Renamed is_causal to causal
- Shape/type inference native code instead of using decomposition
- Moved the op from opset12 to opset13
- Added Python wrapper for ScaledDotProductAttention
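The decomposition referenced above follows the usual functional form of scaled dot-product attention: `softmax(Q·Kᵀ·scale + mask)·V`, with `scale` defaulting to `1/sqrt(head_size)` and an optional causal mask, matching the aten semantics. A pure-Python sketch of that reference (exact mask and edge-case handling in the transformation may differ):

```python
import math

# Hedged functional reference for ScaledDotProductAttention on 2-D
# [seq_len, head_size] inputs; causal masks out future positions.

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def sdpa(q, k, v, causal=False, scale=None):
    head_size = len(q[0])
    scale = scale if scale is not None else 1.0 / math.sqrt(head_size)
    # scores[i][j] = scale * dot(q_i, k_j)
    scores = [[scale * sum(qi * kj for qi, kj in zip(qrow, krow))
               for krow in k] for qrow in q]
    if causal:                         # forbid attending to future positions
        for i, row in enumerate(scores):
            for j in range(len(row)):
                if j > i:
                    row[j] = float("-inf")
    probs = [softmax(row) for row in scores]
    # output = probs @ v
    return [[sum(p * vj for p, vj in zip(prow, vcol))
             for vcol in zip(*v)] for prow in probs]
```

With `causal=True`, the first output row depends only on the first value row, which is the property the decomposition must preserve.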
* Fix test that counts ops in the opsets
* Update src/core/src/op/scaled_dot_product_attention.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
* Update src/core/src/op/scaled_dot_product_attention.cpp
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
* Move ScaledDotProductAttentionDecomposition from fusions to decompositions.
* Remove unused legacy shape inference in ScaledDotProductAttention
* Better namespace usage
* Register all nodes in ScaledDotProductDecomposition for correct node tracking and for running subsequent matcher passes on all new nodes.
* Don't use register_new_node_
* ScaledDotProductAttention specification (with an extra scale argument)
* Code style fix
* Scale input implementation for ScaledDotProductAttention
* Handle attention_mask=0 case in the op spec
* Better description of scale input
* N->M in scale description
* Code style fix, remove debug print.
* Apply suggestions from code review
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
* Fix for case when is_causal is not passed
* Extended description of ScaledDotProduct op
* Better description in py op wrapper
* Basic shape propagation tests for ScaledDotProductAttention
* Added ScaledDotProductAttention to toc.
* Add op impl check
---------
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Mateusz Mikolajczyk <mateusz.mikolajczyk@intel.com>
- fix cum_sum_partial_sum kernel;
- add unit test and func test for big shapes;
- add test to compare Partial vs Ref performance;
- change kernels' priorities according to performance measurements;
- move common profiling helpers to test_utils.
Ticket: CVS-123590
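The partial-sum strategy the fixed kernel implements can be modeled in a few lines: compute a local prefix sum per block, then carry each block's total into the next block. Large shapes exercise the multi-block path, which is what the new big-shape tests above target. A hypothetical host-side sketch (not the actual `cum_sum_partial_sum` OpenCL kernel):

```python
# Block-wise cumulative sum: local prefix sums plus a carried running
# total, checked against a naive sequential reference.

def cumsum_partial(values, block_size=4):
    out = []
    carry = 0.0                          # running total of preceding blocks
    for start in range(0, len(values), block_size):
        running = carry
        for v in values[start:start + block_size]:
            running += v                 # local prefix sum within the block
            out.append(running)
        carry = running                  # propagate block total onward
    return out

def cumsum_ref(values):
    out, total = [], 0.0
    for v in values:
        total += v
        out.append(total)
    return out
```

Comparing the two over shapes larger than one block is exactly the "Partial vs Ref" check the tests above add.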
* Migrate Multiply operator to new API
* Add comment explaining the use of custom multiply
* Update custom multiply comment
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
---------
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>