* Moved stress tests to OV API 2.0
* Fix for using output index instead of get_index
* Removed ov::runtime namespace in stress tests
* Updated stress tests according to latest changes in OV 2.0
* Fix memleaks tests
* Updated run_memcheck.py to process gtest_filter
* Updated fillTensors, added InferAPI1 and InferAPI2 classes
* Updated test_inference_with_streams
* Updated isImage, comments
* Updated fillTensors to fill image_info inputs with positive pseudo-random numbers
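For illustration, a minimal sketch of filling input tensors with positive pseudo-random values through the OV 2.0 API; the helper name, seed, and value range are assumptions, not the actual fillTensors implementation:

```cpp
#include <random>
#include <openvino/openvino.hpp>

// Hypothetical helper: fill every f32 input of an infer request with
// positive pseudo-random values (image_info-like inputs must stay > 0).
void fill_inputs(ov::InferRequest& request, const ov::CompiledModel& compiled_model) {
    std::mt19937 gen(0);                                        // fixed seed for reproducibility
    std::uniform_real_distribution<float> dist(1.0f, 255.0f);   // strictly positive values

    for (const auto& input : compiled_model.inputs()) {
        ov::Tensor tensor = request.get_tensor(input);
        if (tensor.get_element_type() == ov::element::f32) {
            auto* data = tensor.data<float>();
            for (size_t i = 0; i < tensor.get_size(); ++i)
                data[i] = dist(gen);
        }
    }
}
```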
* Removed redundant variable in fillTensors
* Update AUTO OV 2.0 C++ configuration API.
Signed-off-by: ywang2 <yang4.wang@intel.com>
* Support the OV 2.0 key to set model priority and add a test case to verify the priority map logic within the AUTO plugin.
Signed-off-by: ywang2 <yang4.wang@intel.com>
* Replace the old model priority key and add the corresponding test case (see the sketch below).
Signed-off-by: ywang2 <yang4.wang@intel.com>
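A minimal sketch of what the OV 2.0 model-priority configuration looks like when compiling on AUTO; the model path is a placeholder:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    // OV 2.0 replaces the old priority key with the ov::hint::model_priority property.
    auto compiled = core.compile_model(model,
                                       "AUTO",
                                       ov::hint::model_priority(ov::hint::Priority::HIGH));

    auto request = compiled.create_infer_request();
    return 0;
}
```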
* Make LSTMSequence/GRUSequence validation behavior consistent with RNNSequence
Fixed the issue where no exception was thrown if num_directions=2 but 'm_direction' was not set to BIDIRECTIONAL (see the sketch below). Previously this produced no error, and it only happened to fail later in some CPU transformations during compile_network.
Corrected several tests that used a copy-pasted num_directions=2 without m_direction set.
Also, for a dynamic 'num_directions' the output shape still has 1 or 2 directions, because m_direction is known. Tests for GRU/LSTM are updated for this.
Also, several tests worked incorrectly for LSTMv0: the expectation was that a specific error would be thrown, but no exception was also allowed.
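A sketch of the shape arrangement the stricter validation now covers: when the weight/state inputs carry num_directions = 2, the direction attribute has to be BIDIRECTIONAL. The opset8 alias and the dimensions below are illustrative, not the exact test code:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

std::shared_ptr<ov::opset8::LSTMSequence> make_bidirectional_lstm() {
    using namespace ov;
    const size_t batch = 2, seq_len = 4, input_size = 8, hidden_size = 16;
    const size_t num_directions = 2;  // must match BIDIRECTIONAL below

    auto X  = std::make_shared<opset8::Parameter>(element::f32, Shape{batch, seq_len, input_size});
    auto H  = std::make_shared<opset8::Parameter>(element::f32, Shape{batch, num_directions, hidden_size});
    auto C  = std::make_shared<opset8::Parameter>(element::f32, Shape{batch, num_directions, hidden_size});
    auto SL = std::make_shared<opset8::Parameter>(element::i32, Shape{batch});
    auto W  = std::make_shared<opset8::Parameter>(element::f32, Shape{num_directions, 4 * hidden_size, input_size});
    auto R  = std::make_shared<opset8::Parameter>(element::f32, Shape{num_directions, 4 * hidden_size, hidden_size});
    auto B  = std::make_shared<opset8::Parameter>(element::f32, Shape{num_directions, 4 * hidden_size});

    // With num_directions = 2 in the inputs, anything other than BIDIRECTIONAL
    // is now rejected during node validation instead of failing later.
    return std::make_shared<opset8::LSTMSequence>(
        X, H, C, SL, W, R, B, hidden_size,
        op::RecurrentSequenceDirection::BIDIRECTIONAL);
}
```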
* Fixed clang-format
* Store the expected output data in the TestCase class
* Skip the failing ONNX If tests
* Disable failing ONNX Softmax tests
* Disable the remaining failures
* ROI tensor support for Template plugin + tests for Template and CPU plugins
GPU doesn't support ROI tensors, so tests were not added for GPU
* Added asserts for unsupported mixed axis order (like 0,3,1,2), and unsupported types like int4/int2 for ROI tensors
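For reference, a small sketch of an ROI tensor through the OV 2.0 API (shapes and coordinates are illustrative):

```cpp
#include <openvino/openvino.hpp>

// An ROI (region of interest) tensor shares memory with its parent tensor
// and exposes only a sub-region of it.
void set_roi_input(ov::InferRequest& request) {
    // Parent NCHW tensor, e.g. a full 1x3x64x64 image.
    ov::Tensor full(ov::element::f32, ov::Shape{1, 3, 64, 64});

    // ROI covering the top-left 32x32 patch of the same buffer.
    ov::Tensor roi(full, ov::Coordinate{0, 0, 0, 0}, ov::Coordinate{1, 3, 32, 32});

    // The plugin receives the ROI view; no data is copied.
    request.set_input_tensor(roi);
}
```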
* Avoid duplicated outputs with the same name
* Revert onnx graph changes
* Allow output duplicates in ie_plugin_internal check
* Add test with onnx model
* Check get_tensor_ptr instead of any_name
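The idea behind the check, as a sketch: two outputs may share an any_name yet still be distinct, so duplicates are detected by comparing the underlying tensor pointers. The helper below is illustrative, not the actual plugin code:

```cpp
#include <openvino/openvino.hpp>

// Returns true if two outputs refer to the same underlying tensor,
// which is a safer duplicate check than comparing any_name strings.
bool same_output(const ov::Output<const ov::Node>& a,
                 const ov::Output<const ov::Node>& b) {
    return a.get_tensor_ptr() == b.get_tensor_ptr();
}
```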
* More outputs test
* Refactor to use std::transform
* test manifest update
* Remove redundant header
* INTERPRETER segfaults fix for duplicated output names
* Simplify duplication assert
* Update test names
* Test update
Calling QueryNetwork from the Myriad plugin with a dynamic network could previously result in an exception; this PR fixes that by removing the nodes that could cause it from consideration (see the sketch below).
Co-authored-by: Polina <polina.brzezinskaya@intel.com>
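A minimal sketch of the call path the fix targets; the model path is a placeholder and the device name follows the description above:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("dynamic_model.xml");  // placeholder: model with dynamic shapes

    // query_model (QueryNetwork in the 1.0 API) reports which operations the
    // device supports; with the fix this no longer throws for dynamic networks.
    ov::SupportedOpsMap supported = core.query_model(model, "MYRIAD");
    for (const auto& entry : supported) {
        // entry.first  = operation friendly name
        // entry.second = device that supports it
    }
    return 0;
}
```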
Support matmuls with two non-const inputs (see the sketch below).
Detect concat inputs to a matmul as changing the batch size and handle them appropriately.
Enable tests in GNA_SW_EXACT mode for convolution stride > kernel size.
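For context, "two non-const inputs" means a MatMul whose operands are both runtime inputs rather than constant weights; a minimal sketch with illustrative shapes:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

std::shared_ptr<ov::Model> make_two_input_matmul() {
    using namespace ov;
    // Both MatMul operands are Parameters, i.e. non-constant runtime inputs.
    auto a = std::make_shared<opset8::Parameter>(element::f32, Shape{1, 8, 16});
    auto b = std::make_shared<opset8::Parameter>(element::f32, Shape{1, 16, 4});
    auto matmul = std::make_shared<opset8::MatMul>(a, b);
    return std::make_shared<Model>(matmul->outputs(), ParameterVector{a, b});
}
```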
* Remove the fp16 Convert layer test from skip_tests.config.cpp as it works now
* update repo
* initial code commit
* add runtime reference
* apply ov::Model
* initial lstmcell-1 definition
* initial change
* apply Peepholes
* apply input_forget option
* apply initial test case of lstmsequence-1
* fix clang-format error
* fix clang-format error 2
* add lstm_sequence test cases via the runtime reference, and ONNX test cases
* fix clang-format error
* fix clang-format error
* fix onnx test failure of LSTM IE_CPU
* fix clang-format issue
* fix clang-format issue 2
* add type_prop and visitor api test of lstm_sequence_v1
* fix clang-format error
* replace input/refOut data with hard-coded values and remove unnecessary enum definition
* update namespace of Tensor()
* remove supported test cases in disabling list