* [GPU] Fix permute with input layout mismatching the output layout in batch 2
* Add unit test
* Fix unit test
* Don't use deprecated interface for layer test
* Added TorchScript backend
* Added ts_backend to PyTorch layer tests
* Added use_ts_backend fixture to the test suite to activate the
TorchScript backend
* Fixed failing test_dict layer test
* Added USE_TS_BACKEND as an env variable
* Removed use_ts_backend fixture
* Added more tests for ts backend
* Added more information in the comments about usage
* Removed convolution3d test from precommit_ts_backend
* Added some TorchScript backend tests to CI
* Removed tests from CI as torch.compile doesn't currently support Python 3.11
* Fixed linter issues
* Addressed PR comments and linter issues
* Mark some legacy API as deprecated
* Try to fix some issues
* Fixed some warnings
* Disable deprecation warnings for GNA
* Fixed some warnings
* Disable deprecation errors for all plugins
* Suppress some warnings
* Suppress some warnings
* Suppress deprecation warnings for tests
* Mark all content as suppressed
* Try to fix extensions
* Suppress more warnings
* Suppress warnings for transformations
* Global suppress of deprecation warnings
* Fixed some warnings
* Fixed comments
* Create macro for deprecation API
* Fixed data tests
* Fixed mock_engine for proxy tests
* Fixed some caching tests
* Fixed build
* Fixed CoreThreading tests
* Try to fix crash in functional tests
* Fixed typo
* Fixed typo
* Small change
* Remove shared pointer from MockPluginWrapper
* Small fixes
* Do not throw an exception from device_supports_cache_dir
* Review detectron prior grid generator:
- Check interval shapes and label propagation.
- Check the template implementation of shape inference.
- Add, update, or correct unit tests for static and dynamic shapes.
* Remove ngraph namespace in reviewed op
* Use detectron validation util to check inputs
in all related detectron operators
* Relax the check on the first dim of feat map and im data
* Fix test after dimension validation update
* Fix typo in detectron util file name
* Create separate auto_batch plugin test case
* Add test sample to Azure
* Move to auto_plugin directory
* Fix CI build issues
* move batch test cases from gpu/cpu/template plugin to auto batch plugin
* Check OpenCL to decide whether to enable auto_batch GPU test cases
* Revert "move batch test cases from gpu/cpu/template plugin to auto batch plugin"
This reverts commit 9f4f2ce1af.
* Add functional tests for auto_batch
* Check at runtime whether a GPU is available to decide whether to run GPU test cases
* Remove HW plugins from functional test
1. Apply Template plugin for auto_batch functional test
2. Remove unnecessary code
* Restore some original tests
* Apply new API properties to replace the old config
* Solve warning suppression issue
* Fix CI build error
* Solve CI failure issues
* Fix getOutputsFromFunctionWithSeveralOutputs bug
---------
Co-authored-by: Chen Peter <peter.chen@intel.com>
* [TF FE] Report the reason for non-conversion of internal operations
Some operations can be temporarily converted to InternalOperation during translation,
such as a Const operation of string type, for which we need to define a more elaborate reason
why it is represented as an InternalOperation.
Also, restrict instantiation of InternalOperation because users should use FrameworkNode instead.
InternalOperation is a base class for internal operation types of the TF FE that have
an extended API compared to FrameworkNode.
For each internal operation we define a reason why it is not converted to the OpenVINO opset,
which will be reported by the TF FE if the operation is not eliminated in the end.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update src/frontends/tensorflow/tests/convert_unsupported.cpp
* Correct a script for generating the test model
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
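As a rough illustration of the idea in the [TF FE] commit above, here is a minimal, self-contained C++ sketch of an internal operation that records a human-readable non-conversion reason and restricts direct instantiation. The class layout, the `m_no_conversion_reason` field, and the `get_no_conversion_reason()` accessor are hypothetical stand-ins, not the actual OpenVINO TF FE API.

```cpp
#include <iostream>
#include <string>
#include <utility>

// Hypothetical stand-in for FrameworkNode: a generic node that keeps the
// original framework operation around without converting it.
class FrameworkNode {
public:
    explicit FrameworkNode(std::string op_type) : m_op_type(std::move(op_type)) {}
    virtual ~FrameworkNode() = default;
    const std::string& op_type() const { return m_op_type; }

private:
    std::string m_op_type;
};

// Hypothetical stand-in for InternalOperation: a FrameworkNode with an
// extended API that records why the operation was not converted to the
// target opset. The protected constructor restricts direct instantiation,
// so only concrete internal operation types (derived classes) can be built.
class InternalOperation : public FrameworkNode {
public:
    const std::string& get_no_conversion_reason() const { return m_no_conversion_reason; }

protected:
    InternalOperation(std::string op_type, std::string no_conversion_reason)
        : FrameworkNode(std::move(op_type)),
          m_no_conversion_reason(std::move(no_conversion_reason)) {}

private:
    std::string m_no_conversion_reason;
};

// Example internal operation: a string-typed constant that the target opset
// cannot represent, so it stays internal with an explicit reason attached.
class StringConstant : public InternalOperation {
public:
    explicit StringConstant(std::string value)
        : InternalOperation("Const",
                            "string-typed Const is not representable in the target opset"),
          m_value(std::move(value)) {}

    const std::string& value() const { return m_value; }

private:
    std::string m_value;
};

int main() {
    StringConstant node("hello");
    // If the internal operation survives until the end of conversion,
    // report its recorded reason instead of a generic failure message.
    std::cout << "Operation '" << node.op_type()
              << "' was not converted: " << node.get_no_conversion_reason() << "\n";
    return 0;
}
```

In the actual frontend, as the commit message describes, such a reason would be surfaced in the conversion error reported when internal operations remain after translation.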