* 1. Refine the logic for setting ov::device::properties.
2. Override a loaded config setting when the same setting is also given on the command line.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Update the configuration sample file in README.md.
* 1. Update the configuration example file in README.md for the Python version.
2. Implement conversion of the DEVICE_PROPERTIES config value between its string form and a Python dictionary (see the sketch below).
3. Update the configuration file loading and dumping logic.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
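A minimal sketch of such a conversion, assuming the `{CPU:{INFERENCE_NUM_THREADS:4},GPU:{INFERENCE_PRECISION_HINT:f32}}`-style string form; the parser and function names are illustrative, not benchmark_app's actual code:

```python
def parse_device_properties(text: str) -> dict:
    """Convert a DEVICE_PROPERTIES string like "{CPU:{INFERENCE_NUM_THREADS:4}}"
    into a nested Python dict (scalar values are kept as strings)."""
    def parse(s: str, i: int):
        assert s[i] == "{"
        i += 1
        result = {}
        while s[i] != "}":
            j = s.index(":", i)
            key, i = s[i:j], j + 1
            if s[i] == "{":                 # nested per-device map
                value, i = parse(s, i)
            else:                           # scalar property value
                j = i
                while s[j] not in ",}":
                    j += 1
                value, i = s[i:j], j
            result[key] = value
            if s[i] == ",":
                i += 1
        return result, i + 1
    return parse(text, 0)[0]


def dump_device_properties(props: dict) -> str:
    """Inverse of parse_device_properties: nested dict back to the string form."""
    items = ",".join(
        f"{k}:{dump_device_properties(v) if isinstance(v, dict) else v}"
        for k, v in props.items()
    )
    return "{" + items + "}"


s = "{CPU:{INFERENCE_NUM_THREADS:4},GPU:{INFERENCE_PRECISION_HINT:f32}}"
assert dump_device_properties(parse_device_properties(s)) == s
```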
* 1. Enable configs to be interchangeable between C++ and Python (see the example below).
2. Update the perf_count display logic.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
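A hypothetical example of the kind of shared JSON config file both versions could load (the property keys are real OpenVINO keys, but the overall layout is an assumption, not benchmark_app's documented schema):

```python
import json

# Hypothetical per-device config layout using OpenVINO property keys.
config = {
    "CPU": {"NUM_STREAMS": "2"},
    "AUTO": {
        "PERFORMANCE_HINT": "THROUGHPUT",
        "DEVICE_PROPERTIES": "{CPU:{INFERENCE_NUM_THREADS:4}}",
    },
}

# The same file can then be consumed via -load_config by either implementation.
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```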
* Revert the logic for showing performance counters.
* Update the help message for the config-loading option.
---------
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Set ov::hint::allow_auto_batching through compile_model in benchmark_app (see the sketch after this list)
* Remove the allow_auto_batching handling from the core's set_property
* Remove the allow_auto_batching and auto_batch_timeout properties from the AUTO plugin
* Keep the info logs and add an API to check auto_batching
* Update test cases; remove the auto-batching property test from the core config tests
* Update some APIs in the AUTO plugin config
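A hedged Python sketch of the resulting usage: the hint travels with the compile_model call rather than being set globally on the Core (the model path is hypothetical; `ALLOW_AUTO_BATCHING` and `PERFORMANCE_HINT` are the string keys of the corresponding `ov::hint` properties):

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical model path

# Pass the hint per compilation instead of via core.set_property():
compiled = core.compile_model(
    model,
    "GPU",
    {"ALLOW_AUTO_BATCHING": True, "PERFORMANCE_HINT": "THROUGHPUT"},
)
```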
* Fix unexpected exceptions and a segfault in the Paddle unit tests
* Parse constraints from the requirements file to keep them aligned with the other requirements
* Apply suggestions from code review
* [TF FE] Support NonMaxSuppression with named outputs (see the sketch below)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Simplify the test for NMS named outputs
* Share a script for test model generation
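For context, a minimal sketch of a TF model whose NMS outputs are consumed by name (tf.raw_ops.NonMaxSuppressionV5 is a real op; the wrapper and its output names are illustrative, not the shared generation script itself):

```python
import tensorflow as tf

@tf.function
def nms_named_outputs(boxes, scores):
    # NonMaxSuppressionV5 produces three outputs; a frontend that supports
    # "named outputs" must let each one be addressed individually.
    selected_indices, selected_scores, valid_outputs = tf.raw_ops.NonMaxSuppressionV5(
        boxes=boxes, scores=scores, max_output_size=10,
        iou_threshold=0.5, score_threshold=0.0, soft_nms_sigma=0.0)
    return {"selected_indices": selected_indices,
            "selected_scores": selected_scores,
            "valid_outputs": valid_outputs}
```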
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Applied w/a to resolve softmax accuracy issue
The original implementation produced an accuracy issue when the leftover is not aligned with the subgroup size
(e.g., for shape [1024, 306], where lws = 32, itemsNum = 9, leftover = 18, and the subgroup size is 16).
In such cases, the result was wrong when subgroup block read/write was used.
As a workaround, do not use subgroup block read/write when the leftover is not aligned with the subgroup size.
However, we can come up with better itemsNum / leftover handling in follow-up work.
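A small Python sketch of the alignment check described above (names are illustrative, not the kernel's actual code):

```python
def can_use_subgroup_block_io(row_size: int, lws: int, subgroup_size: int = 16) -> bool:
    """Subgroup block read/write is only safe when the leftover is a
    multiple of the subgroup size."""
    items_num = row_size // lws            # elements handled per work-item
    leftover = row_size - items_num * lws  # elements left after full rounds
    return leftover % subgroup_size == 0

# The failing case from above: shape [1024, 306] with lws = 32 gives
# items_num = 9 and leftover = 18; 18 % 16 != 0, so fall back to scalar I/O.
assert can_use_subgroup_block_io(306, 32) is False
```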
* Fix build error & minor revisions
* Fix condition
* [LPT][TESTS] GrConv: added test cases with per-channel dq on weights and without reshape
* FoldFQ: don't transform FQ quantized along several dimensions
* ConvolutionTransformation: supported GrConv with per-channel dq on weights and without reshape
* fold_reshape: refactoring
* [TF FE] Test ResourceGather operation and fix debug caps
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix test generation script
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>