* Remove NV12 and I420 blobs and deprecate some legacy API (migration path sketched after this list)
* Fixed some errors
* Remove NV12 blobs
* Remove NV12 conversion
* Fixed other warnings
* Suppress version
* Fix some warnings
* Fixed version
* Try to fix some warnings
* Suppress warnings in C header
* Suppress warnings in C
* Fixed Windows exceptions
* Try to fix warnings
* Try to fix C bindings build
* Suppress InferRequest
* Fixed some build issues
* Fixed some errors
* Fixed build all for macOS
* Suppress some warnings
* Fixed merge conflict
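For reference, with the NV12/I420 blob classes gone, color input goes through the 2.0 preprocessing API instead. A minimal sketch, assuming an OpenVINO 2.x build (the model path and plane names below are placeholders):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // placeholder path

        // Declare the input as two-plane NV12 and request conversion to BGR;
        // the runtime inserts the conversion instead of a dedicated NV12 blob.
        ov::preprocess::PrePostProcessor ppp(model);
        ppp.input().tensor()
            .set_element_type(ov::element::u8)
            .set_color_format(ov::preprocess::ColorFormat::NV12_TWO_PLANES, {"y", "uv"});
        ppp.input().preprocess().convert_color(ov::preprocess::ColorFormat::BGR);
        model = ppp.build();
        return 0;
    }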
* Custom attribute reading and While operation support
* Rearranges FLATBUFFERS_LOCALE_INDEPENDENT setting
* Style
* Make flatbuffers code as version-independent as possible
* Comments addressed
* Add static shape adapter (calling pattern sketched after these notes)
- Adapter holds CPU dimensions, either as a reference to them or as an owned vector
- Add ov::optional for holding an optional result from shape inference
- Add new `infer` function in `IStaticShapeInfer`
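The calling pattern the last two notes describe, as a minimal sketch; std::optional stands in for ov::optional, and `infer` below is illustrative rather than the actual `IStaticShapeInfer` signature:

    #include <cstdint>
    #include <optional>
    #include <vector>

    using StaticShape = std::vector<int64_t>;  // stand-in for the plugin's shape adapter

    // An empty optional signals that output shapes could not be deduced from
    // the given inputs, which is distinct from an empty (rank-0) result.
    std::optional<std::vector<StaticShape>> infer(const std::vector<StaticShape>& inputs) {
        if (inputs.empty())
            return std::nullopt;                          // nothing to infer from
        return std::vector<StaticShape>{inputs.front()};  // e.g. a shape-preserving op
    }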
* Temporary support of StaticShape
* Fix build issues
* Correct shape adapter comparison
- minor static shape adapter refactor
* Minor corrections in ShapeInferenceTA
* Fix subscript operator in StaticShapeRef
* Fuse convert reorder to prev MVN/Concat node
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Add dynamic TCs for ov_gpu_unit_test
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Add descriptions for changes
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Fix kernel selection failure
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Add is_type_conversion_only function for reorder_node (sketched below)
Signed-off-by: Andrew Park <andrew.park@intel.com>
---------
Signed-off-by: Andrew Park <andrew.park@intel.com>
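A hypothetical sketch of what such a predicate checks; TensorDesc and its fields are stand-ins for the GPU plugin's layout types:

    #include <cstdint>
    #include <vector>

    struct TensorDesc {            // stand-in for cldnn::layout
        int element_type;
        int format;
        std::vector<int64_t> dims;
    };

    // A reorder is a pure type conversion when input and output agree on
    // format and dimensions and differ only in element type.
    bool is_type_conversion_only(const TensorDesc& in, const TensorDesc& out) {
        return in.format == out.format && in.dims == out.dims &&
               in.element_type != out.element_type;
    }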
Fixes an issue where AlignEltwiseInputRanks is applied to FakeQuantize with a
scalar as the first input and input/output low/high being Shape{1} constants.
In that case the FakeQuantize output is still a scalar, so the difference
between the output rank and the input/output low/high rank is negative.
Ticket: CVS-112454
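The failure is plain rank arithmetic; the sketch below reproduces the negative difference (the clamp on the last line is an assumption about the guard, not a quote of the fix):

    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    int main() {
        const int64_t output_rank = 0;  // FakeQuantize on a scalar stays a scalar
        const int64_t range_rank = 1;   // input/output low/high are Shape{1} constants
        const int64_t diff = output_rank - range_rank;  // -1, invalid as an alignment count
        std::cout << "raw: " << diff << ", clamped: " << std::max<int64_t>(diff, 0) << "\n";
        return 0;
    }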
* generate cpu mapping table pre-TBB (row layout sketched after this list)
* change function name
* fix incorrect proc_type_table on RPL
* add getCpuMapFromCores test, fix comments
* modify test case
* fix comments
* fix code style
* add throw an exception
* fix numa_nodes=0 on ARM
* modify numa_nodes
* fix ExportOptimalNumStreams failed on ARM
* fix comments
* add description of get_cpu_mapping_from_cores
* update for numactl support
* fix wrong cores value
---------
Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
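For context, proc_type_table rows summarize processors per type; a hypothetical sketch of building one row from core counts, in the spirit of get_cpu_mapping_from_cores (the column names mirror OpenVINO's processor type table, everything else is assumed):

    #include <vector>

    enum ProcTypeColumn { ALL_PROC = 0, MAIN_CORE_PROC = 1,
                          EFFICIENT_CORE_PROC = 2, HYPER_THREADING_PROC = 3 };

    // Summarize a hybrid CPU (e.g. RPL with P- and E-cores) into one row.
    std::vector<int> make_proc_type_row(int p_cores, int e_cores, bool smt) {
        std::vector<int> row(4, 0);
        row[MAIN_CORE_PROC] = p_cores;
        row[EFFICIENT_CORE_PROC] = e_cores;             // zero on non-hybrid parts
        row[HYPER_THREADING_PROC] = smt ? p_cores : 0;  // SMT siblings of P-cores
        row[ALL_PROC] = row[MAIN_CORE_PROC] + row[EFFICIENT_CORE_PROC] +
                        row[HYPER_THREADING_PROC];
        return row;
    }

A wrong row here is exactly the RPL symptom above: miscounting E-cores or SMT siblings skews every stream and thread decision built on the table.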
* [GPU] Add shape-of subgraphs markup and initial CPU implementations for some primitives
* Apply review comments
* Exclude eltwise with boolean mode types from shape-of subgraphs and fix leftovers
* There were two issues in runtime buffer fusing:
1) Missing condition in the matcher for dynamic tensors
2) If a node was marked can_be_optimized = true at build time and turned out to be false at runtime, kernel compilation was skipped because the check used node->can_be_optimized
=> To resolve this, can_be_optimized was added to impl_param, and impl creation now checks can_be_optimized in impl_param instead of on the node.
* Fixed primitive::can_be_optimized to be set through a function
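A minimal sketch of the described fix with stand-in types; the real change lives in the GPU plugin's impl_params/primitive_inst machinery:

    // Snapshot of the runtime decision, carried with the impl parameters.
    struct kernel_impl_params {
        bool can_be_optimized = false;
    };

    struct program_node {
        bool can_be_optimized = true;  // build-time guess; may flip at runtime
    };

    // Set through a function (per the last item) so the snapshot is explicit.
    kernel_impl_params make_params(bool runtime_can_be_optimized) {
        kernel_impl_params p;
        p.can_be_optimized = runtime_can_be_optimized;
        return p;
    }

    // Impl creation consults the snapshot, not the stale node flag, so a node
    // that loses its optimization at runtime still gets its kernel compiled.
    bool needs_kernel(const kernel_impl_params& params) {
        return !params.can_be_optimized;
    }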