The Python version uses `app_inputs_info` to represent different input configurations, but the C++ version extends that use case and also uses `app_inputs_info` to represent different input images. That means the assumption that the input shape is dynamic whenever `app_input_info.size() > 1` doesn’t always hold for C++.
Ticket 117673
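A minimal sketch of the safer check, assuming a simplified stand-in for benchmark_app’s per-input metadata (`InputInfo` and `has_dynamic_inputs` below are illustrative names, not the actual sources): dynamism is read from the partial shapes themselves rather than inferred from the number of `app_inputs_info` entries.

```cpp
#include <map>
#include <string>
#include <vector>

#include <openvino/core/partial_shape.hpp>

// Illustrative stand-in for benchmark_app's per-input metadata (assumption,
// not the real struct).
struct InputInfo {
    ov::PartialShape partial_shape;
};
using InputsInfo = std::map<std::string, InputInfo>;

// In C++, app_inputs_info may hold several entries that only differ in the
// input files they describe, so size() > 1 alone proves nothing about
// dynamism; inspect the shapes themselves instead.
bool has_dynamic_inputs(const std::vector<InputsInfo>& app_inputs_info) {
    for (const auto& info_map : app_inputs_info) {
        for (const auto& item : info_map) {
            if (item.second.partial_shape.is_dynamic()) {
                return true;
            }
        }
    }
    return false;
}
```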
* Change `VPUX`/`VPU` occurrences to `NPU`
* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU
* Rename `MYRIAD plugin`
* Rename vpu_patterns to npu_patterns in tools/pot
* Rename vpu.json to npu.json in tools/pot
* Rename restrict_for_vpu to restrict_for_npu in tools/pot
* Change keembayOptimalBatchNum to npuOptimalBatchNum
---------
Co-authored-by: Dan <mircea-aurelian.dan@intel.com>
* [GNA] Fix for GeminiLake detection
* Added HWGeneration::GNA_1_0_E enumerator
Added DeviceVersion::GNAEmbedded1_0 enumerator, changed the meaning of DeviceVersion::GNA1_0.
Updated ConvLowPrecision test with all supported targets
* [GNA] Extended a few tests with GNA1.0
* Fix -api sync for single -data_shape
Tickets 111187 and 111185
I wasn’t able to find a C++ equivalent of Python’s `info.original_shape.is_static`. Later I realized it shouldn’t be considered anyway, because the -shape command-line argument should have higher priority for shape inference than the model’s shape, so I removed it from the Python version.
Replace
`if benchmark.inference_only and batch_size.is_dynamic:`
with
`if allow_inference_only_or_sync and batch_size.is_dynamic:`
to reset batch_size to static in case of dynamic shape with single -data_shape
* Check only app_input_info.size() == 1 because if it's greater than 1, the input shape is dynamic and there is more than one static shape. Apply TODO
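Below is a hedged sketch of the batch reset described above; `resolve_batch` is a hypothetical helper, and the fallback value of 1 is only for illustration of “reset to static”.

```cpp
#include <openvino/core/dimension.hpp>

// Hypothetical helper mirroring the Python condition above.
ov::Dimension resolve_batch(const ov::Dimension& batch_size,
                            bool allow_inference_only_or_sync) {
    if (allow_inference_only_or_sync && batch_size.is_dynamic()) {
        // single -data_shape -> pin to a static batch (illustrative value)
        return ov::Dimension(1);
    }
    return batch_size;
}
```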
* Deprecate ExecutableNetwork and InferRequest API
* Fixed some warnings
* Fixed some warnings
* Try to fix documentation
* Try to skip documentation warnings
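For reference, a minimal sketch of the API 2.0 flow that replaces the deprecated classes (model path and device name are just examples): `ov::CompiledModel` and `ov::InferRequest` take over from `InferenceEngine::ExecutableNetwork` and `InferenceEngine::InferRequest`.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Old flow (deprecated): InferenceEngine::Core -> ExecutableNetwork -> InferRequest.
    // New flow: ov::Core -> ov::CompiledModel -> ov::InferRequest.
    auto model = core.read_model("model.xml");          // example model path
    auto compiled = core.compile_model(model, "CPU");   // example device
    ov::InferRequest request = compiled.create_infer_request();

    request.infer();  // synchronous inference
    return 0;
}
```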
* Small fixes for openvino::pugixml creation for Dev packages
* Flexible components installation
* Fixed compilation for x86
* Added extra checks for ENABLE_NCC_STYLE
* Fixed typo in RPM
* Build using conanfile.txt
* Update .ci/azure/linux_arm64.yml
* Several improvements
* Removed conanfile.py
* Try to use activate / deactivate
* Fixed clang-format code style
* Supported TBB version from Conan
* Added more NOMINMAX
* Fixed static build
* More improvements for static build
* Add usage of static snappy in case of static build
* More fixes
* Small fixes
* Final fixes
* Remove constructors for ov Exceptions
* Fixed linux build
* Fixed ONNX Frontend
* Fixed paddle
* Fixed exceptions in tests
* Deprecate constructors for ov::Exception
* Suppress some warnings
* Merge several exceptions
* Some small changes
* Suppress more warnings
* More warnings
* More warnings
* Suppress more warnings
* More warnings
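A hedged sketch of the direction of the ov::Exception changes above, assuming the OPENVINO_ASSERT / OPENVINO_THROW macros from openvino/core/except.hpp: exceptions are raised through the macros rather than by calling the deprecated constructors directly.

```cpp
#include <openvino/core/except.hpp>

void check_positive(int value) {
    // Throws ov::AssertFailure (derived from ov::Exception) with file/line context.
    OPENVINO_ASSERT(value > 0, "value must be positive, got ", value);
}

void fail_explicitly() {
    // Instead of constructing ov::Exception directly (constructor is deprecated):
    OPENVINO_THROW("unsupported configuration, code=", 42);
}
```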
* 1. Refine the logic for setting ov::device::properties.
2. The config value will be overridden if the same setting comes from the command line.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Update configuration sample file within README.md.
* Update.
* Update.
* 1. Update the configuration example file within README.md for the Python version.
2. Implement conversion of the config DEVICE_PROPERTIES value between a string and a Python dictionary.
3. Update the configuration file loading and dumping logic.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Update.
* Update.
* Update.
* Update.
* Update.
* 1. Enable configs to be interchangeable between C++ and Python.
2. Update the perf_count display logic.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Revert the logic of showing performance counters.
* Update help msg for loading config option.
---------
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
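A hedged C++ illustration of the per-device configuration idea behind these commits (not the exact benchmark_app code; device names and property values are examples): device-scoped settings travel as `ov::device::properties` alongside the global options, and a value supplied on the command line replaces the one loaded from the config file before compilation.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // example model path

    // Device-scoped option, e.g. loaded from a JSON config file; a value for
    // the same key given on the command line would replace it before this call.
    auto compiled = core.compile_model(
        model,
        "AUTO",                                                     // example target device
        ov::device::properties("CPU", ov::enable_profiling(true))); // maps to perf_count reporting
    return 0;
}
```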
* Benchmark_app: set ov::hint::allow_auto_batching through compile_model
* Remove the handling of allow_auto_batching from core's set_property
* Remove allow_auto_batching and auto_batch_timeout property from AUTO plugin
* Keep the info logs and add an API to check auto_batching
* Update test case, remove the auto-batching property test from core config tests
* Update some API in AUTO plugin config
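A minimal sketch of the new placement of the hint, assuming the standard `ov::hint::allow_auto_batching` property (device name and value are examples): benchmark_app passes the flag at compile time instead of setting it globally on the core.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // example model path

    // Previously set via core.set_property(...); now passed per compilation:
    auto compiled = core.compile_model(model,
                                       "AUTO",  // example device
                                       ov::hint::allow_auto_batching(false));
    return 0;
}
```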