* Add new compile_model API for the ONNX Runtime OV EP
Allow compile_model() to accept model/weight data.
* Minor cleanup
* Cache model if possible
* Compute hash based on model_xml and model_weight
* Update typo
* Change hash key computation for model's weights
* Resolve test case issue
* Use tensor instead of blob for hash computation
* Fix hash computation issue and add more test cases
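The hash-key commits above boil down to deriving one cache key from both the model XML and the weight bytes. A minimal Python sketch of that idea (the function name, SHA-256 choice, and inputs are illustrative assumptions, not the actual OpenVINO EP code):

```python
import hashlib

def compute_model_hash(model_xml: str, model_weights: bytes) -> str:
    """Illustrative cache key: digest the model text and raw weight bytes together,
    so a change to either the topology or the weights yields a different key."""
    h = hashlib.sha256()
    h.update(model_xml.encode("utf-8"))
    h.update(model_weights)  # raw tensor bytes (tensor, not blob, per the commit above)
    return h.hexdigest()
```

Using the tensor's raw byte view keeps the key stable across runs while still distinguishing models that share an XML but carry different weights.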
* Fix a build issue caused by data format
* Add type_prop tests
* Add shape_infer tests
* Update shape_infer to preserve interval dim and label
* Unify the approach for get_data_as_shape and negative value checks
* Remove redundant gtest header
* Rename OneHot shape infer test file
* Add test for shape_infer with default ctor and adjust resolve_axis
* Move get_data_as_shape changes to the one hot custom util
* Adjust custom get_data_as_shape
* Update Select shape_infer tests
* Add Select type_prop tests
* Add evaluate_lower/upper for select
* Revert evaluate_lower/upper for Select
* Use get_node_input_partial_shapes
* Style and headers improvements
* Style apply
* Rename select shape infer file tests
* Use default ctor for output_shapes init
* Use helper for shape_labels init and add more dim test cases
* Review Tile shape inference:
- propagate labels and dimensions
- template implementation of shape inference
- if repeats is non-positive, the output dim is always 0
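The repeats rule above can be sketched as a small shape function. This is a pure-Python illustration, not the actual template implementation; the NumPy-style rank alignment (left-padding the shorter sequence with 1s) is an assumption:

```python
def tile_output_shape(input_shape, repeats):
    """Tile shape rule sketch: each output dim is input_dim * repeats;
    a non-positive repeats value always yields 0 for that dim."""
    # Align ranks by left-padding the shorter sequence with 1s (assumed convention).
    rank = max(len(input_shape), len(repeats))
    dims = (1,) * (rank - len(input_shape)) + tuple(input_shape)
    reps = (1,) * (rank - len(repeats)) + tuple(repeats)
    # Non-positive repeats -> output dim 0, as stated in the commit above.
    return tuple(d * r if r > 0 else 0 for d, r in zip(dims, reps))
```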
* Refactor Tile shape inference
* Review preserving partial values and labels
* Add support to evaluate bounds from repeats input
* Remove unused code
* Review dims and labels propagation for logical not
* Review dims and labels propagation
for logical and, or, xor
* Remove duplicated tests
* Expand logical ops tests with NumPy broadcast
and input order cases
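The NumPy-broadcast shape merge those tests exercise can be sketched as follows (illustrative pure-Python, not the OpenVINO shape-inference code; note the result is independent of input order, which is what varying the input order checks):

```python
def broadcast_shapes(a, b):
    """NumPy-style broadcast of two static shapes for a binary elementwise op."""
    # Align ranks by left-padding the shorter shape with 1s.
    rank = max(len(a), len(b))
    a = (1,) * (rank - len(a)) + tuple(a)
    b = (1,) * (rank - len(b)) + tuple(b)
    out = []
    for da, db in zip(a, b):
        if da == db or db == 1:
            out.append(da)
        elif da == 1:
            out.append(db)
        else:
            raise ValueError(f"Cannot broadcast dims {da} and {db}")
    return tuple(out)
```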
* Review template shape infer of logical ops
- add static shape inference test
- add default ctor test
* Default ctor test for LogicalNot op
* Review labels and dimension propagation
- check dimension propagation with partial dimensions
- extend testing for labels and dimensions propagation
* Shape inference support bounds evaluation
on begin, end inputs
* Review static shape inference
* Move sequence generator to dev API
to avoid creating an unnecessary library dependency
* Fix windows build issue
* Use strided slice in the scatter update test
of partial value propagation
* Remove unused constant from test
* Fix strided dim calculation
* Fix clipping of lower/upper bounds (lb, ub) in strided dim calculation
* Use op strides if absent in input_shapes
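The strided dim calculation with bound clipping can be sketched for a single axis. This is a simplified illustration covering positive strides only, not the plugin's implementation:

```python
import math

def strided_dim(dim, begin, end, stride):
    """Output length of one strided-slice axis (positive stride assumed).
    Negative indices wrap around; begin/end are then clipped to [0, dim]."""
    if begin < 0:
        begin += dim
    if end < 0:
        end += dim
    begin = min(max(begin, 0), dim)  # clip lower bound
    end = min(max(end, 0), dim)      # clip upper bound
    # Ceil division counts the elements hit by the stride; never negative.
    return max(0, math.ceil((end - begin) / stride))
```

Without the clipping step, an out-of-range `end` (e.g. `end > dim`) would overcount elements, which is the class of bug the fix above addresses.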
* Move SeqGen back to shape inference
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
* Review einsum shape and label propagation
- extend type_prop tests to check labels and einsum properties
* Review template implementation of shape inference
- rename StaticShape inference test file
- use common fixture and rename test cases
- add default ctor test
- add equation string setter
* Fix einsum label propagation check
after the improvement of dimension and label merging
* Remove BWDCMP_RTTI_DEFINITION from einsum op
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
* Change AUTO default hint to Latency
* Change the comment from tput to latency per wangyang's review
* Fix the MULTI test case; the default hint value returned by MULTI is throughput
* Remove the redundant test case and rename the test case that returns the default hint value
* Optimize code per bell's review; add comments to the test case
* Correct the test case comments
* When the user sets num_streams, AUTO/MULTI does not set the default hint for the HW plugin
* Fix smoke_AUTO_MULTI_ReturnDefaultHintTest failing to run
* Add num_streams and default hint mock test case
* Add AUTO default perf hint mock test case
* Temporary fix to support model path for CPU in AUTO
Signed-off-by: fishbell <bell.song@intel.com>
* Disable batching when loading through model path
Signed-off-by: fishbell <bell.song@intel.com>
* Add a marker for a future release
Signed-off-by: fishbell <bell.song@intel.com>
* Implement step 1: do not parse batch config if the user has not set it explicitly
Signed-off-by: fishbell <bell.song@intel.com>
* Correct typo in test case
Signed-off-by: fishbell <bell.song@intel.com>
* Use openvino pass graph_rewrite
* Use openvino pass pattern matcher
* Remove ngraph opsets
* Remove ngraph.hpp
* Remove ngraph includes
* Remove ngraph includes
* Use transformations API
* Remove ngraph includes
* Remove ngraph rt_info
* Remove unused ngraph includes
* Replace the ngraph:: scope with ov:: throughout
* Remove serialize proxy header
* Remove proxy include
* Bring back a file for vpu-plugin
* Fix after upstream merge
* Remove nested namespace conflict
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
* [Python API][Tools] Raise upper-bound versions for dependencies: NumPy, TensorFlow, NetworkX
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove upper-bound for TensorFlow and NumPy
* Revert "Remove upper-bound for TensorFlow and NumPy"
This reverts commit 662085df2e.
* Remove upper-bound for NumPy for default installation
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>