* support numactl on Linux in multi-threading 2.0
* update cache file reader
* fix warning
* keep change for numactl support only
* fix typo
* update for comments
* fix code style issue
* update is_cpu_map_available()
* update for comments
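The numactl-related commits above make the CPU map respect the processor set that tools like `numactl` or `taskset` impose on the process. As a rough illustration only (not the actual `is_cpu_map_available()` implementation), a Linux process can read that set through the standard `sched_getaffinity` call:

```cpp
// Illustrative only: shows how a Linux process can inspect the CPU set that
// numactl/taskset imposed on it; the real OpenVINO cpu-map code differs.
#include <sched.h>
#include <cstdio>
#include <vector>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    // Query the affinity mask of the current process (pid 0 == self).
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        std::perror("sched_getaffinity");
        return 1;
    }
    std::vector<int> allowed;
    for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu) {
        if (CPU_ISSET(cpu, &mask))
            allowed.push_back(cpu);
    }
    std::printf("process may run on %zu logical CPUs\n", allowed.size());
    return 0;
}
```

Launching a binary as `numactl --cpunodebind=0 --membind=0 ./app` shrinks the reported set to the CPUs of NUMA node 0, which is the situation the commits above handle.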
* Review interpolate shapes and label propagation
* Review shape_infer template implementation
* Update shape infer of interpolate in GPU plugin
- Add new tensor accessor for ov::Tensor map
* Correct casting in dim::scale function
* Remove validation of the size of input 1 in Interpolate v0
* Relax inputs check for interpolate v4
* Correct GPU shape inference
* Use ov::Tensors in interpolate's evaluate
- Remove some duplicated code
- Apply comments from review
* Set shape in interpolate's eval for output tensor
* Tests
* Add eval_lower/upper support to ReduceMax
* Add support for ITensorAccessor in reduce shape infer
* Add tests for duplicated axes and output shapes size
* Push to output_shapes instead of a final copy to the vector
* Remove old shape_infer API
* Move axes rank validation to shape_infer
* Restore shape_infer API for GPU
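Several of the interpolate/reduce commits above move the evaluate and shape-inference paths onto the public `ov::Tensor` API. The snippet below is a minimal, self-contained sketch of the calls involved (constructing a tensor, resizing an output with `set_shape()`, accessing data); it is not the actual `Interpolate::evaluate` code:

```cpp
// Minimal sketch (not the actual Interpolate::evaluate): shows the public
// ov::Tensor calls the commits above rely on - constructing a tensor,
// resizing the output with set_shape(), and accessing raw data.
#include <openvino/core/shape.hpp>
#include <openvino/runtime/tensor.hpp>
#include <iostream>

int main() {
    // Input tensor: 1x3, float32, memory owned by the Tensor itself.
    ov::Tensor input(ov::element::f32, ov::Shape{1, 3});
    float* in = input.data<float>();
    in[0] = 1.f; in[1] = 2.f; in[2] = 3.f;

    // Output tensor created empty; an evaluate() implementation can set
    // the inferred shape on it before writing data.
    ov::Tensor output(ov::element::f32, ov::Shape{0});
    output.set_shape(ov::Shape{1, 6});  // e.g. nearest upsampling by 2

    float* out = output.data<float>();
    for (size_t i = 0; i < 6; ++i)
        out[i] = in[i / 2];  // repeat each input element twice

    std::cout << "output elements: " << output.get_size() << std::endl;
    return 0;
}
```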
* Update docs for frontend extensions
* Apply suggestions from code review
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
* fix order in openvino_framework_map
* do not discard return value
* add note of openvino_contrib repo
* update example for PT
* note
* add paragraph of named inputs and outputs
Signed-off-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
* fix 'title underline too short' warning
* review comments
* remove m_ prefix from CustomOp attr names
---------
Signed-off-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
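The frontend-extension documentation updated above walks through defining a custom operation. The skeleton below is a hedged illustration of that pattern using the public `ov::op::Op` API; the class and attribute names (`CustomOp`, `axis`) are placeholders rather than the exact example from the docs, and the attribute field intentionally has no `m_` prefix, matching the commit above:

```cpp
// Illustrative custom-op skeleton of the kind the extension docs describe;
// the real documentation example may differ.
#include <openvino/op/op.hpp>

class CustomOp : public ov::op::Op {
public:
    OPENVINO_OP("CustomOp");

    CustomOp() = default;
    CustomOp(const ov::Output<ov::Node>& input, int64_t axis) : Op({input}), axis(axis) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        // Output keeps the element type and shape of the single input.
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override {
        return std::make_shared<CustomOp>(new_args.at(0), axis);
    }

    bool visit_attributes(ov::AttributeVisitor& visitor) override {
        // Attribute name without the m_ prefix, as per the review comment above.
        visitor.on_attribute("axis", axis);
        return true;
    }

private:
    int64_t axis = 0;
};
```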
* enable cpu map for macOS
enable nstreams and nthreads settings for macOS
* keep streams=1 for M1
* add explicit type conversion
* remove definition of static cpu
* Update with master
* separate branches for __APPLE__ and __EMSCRIPTEN__
* modify the implementation of is_cpu_map_available function
---------
Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
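The macOS commits above wire the nstreams/nthreads settings into the CPU plugin. A hedged sketch of how such settings are expressed through the public properties `ov::num_streams` and `ov::inference_num_threads` follows; `model.xml` is a placeholder path, and the actual defaults chosen on Apple Silicon (streams kept at 1) are decided by the plugin, not by this snippet:

```cpp
// Sketch only: shows the public properties that the nstreams/nthreads
// settings map to; "model.xml" is a placeholder path.
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // Request 1 stream and 4 inference threads on the CPU device.
    auto compiled = core.compile_model(model,
                                       "CPU",
                                       ov::num_streams(1),
                                       ov::inference_num_threads(4));

    // Query a standard read-only compiled-model property as a sanity check.
    std::cout << "optimal infer requests: "
              << compiled.get_property(ov::optimal_number_of_infer_requests)
              << std::endl;
    return 0;
}
```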
* Small fixes for openvino::pugixml creation for Dev packages
* Flexible components installation
* Fixed compilation for x86
* Added extra checks for ENABLE_NCC_STYLE
* Fixed typo in RPM
* Limitations refactoring
* fix CI builds/tests
* changes after review
* Move GraphCompiler initialization to constructor
* resolve conflicts after rebase
* update after review
* resolve problem with double initialization for Limitations
* Optimize strides calculation using one loop
* Calculate strides on get_strides or set_shape
instead of in the TensorView constructor
* Update strides only once on get
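For context, the "one loop" strides computation referred to above boils down to a single backward pass over the shape. The function below is a generic illustration of that technique, not the actual TensorView code:

```cpp
// Generic illustration of a single-loop strides computation; not the
// actual TensorView implementation.
#include <cstddef>
#include <cstdio>
#include <vector>

// Row-major byte strides: strides[i] = element_size * product(shape[i+1..N)).
std::vector<size_t> row_major_strides(const std::vector<size_t>& shape, size_t element_size) {
    std::vector<size_t> strides(shape.size());
    size_t stride = element_size;
    for (size_t i = shape.size(); i-- > 0;) {  // single backward pass
        strides[i] = stride;
        stride *= shape[i];
    }
    return strides;
}

int main() {
    auto strides = row_major_strides({2, 3, 4}, sizeof(float));
    for (size_t s : strides)
        std::printf("%zu ", s);  // prints: 48 16 4
    std::printf("\n");
    return 0;
}
```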
* Added dependencies via vcpkg
* Try to remove global imported targets
* Fix for conan
* Fixed RHEL case
* Fixed RHEL, U18 cases
* Returned OpenCV finding
* Update cmake/templates/OpenVINOConfig.cmake.in
Fixed IMPORTED_GLOBAL => GLOBAL in OpenVINOConfig.cmake.in template file
* Apply suggestions from code review
Properly use NAMES in find_package
* Fixed case with zlib
* Final fixes
* Fixes
* Removed CONFIG from find package ZLIB
* Fixed RHEL
* Reverted changes with gflags back
* Turn off LTO after thirdparty dependencies are built
* benchmark_app: exclude ALLOW_AUTO_BATCHING
Running benchmark_app.py with -b 1 -d CPU fails with
`Unsupported property ALLOW_AUTO_BATCHING by CPU plugin`.
The C++ benchmark_app sets ALLOW_AUTO_BATCHING in the
ov::Core::compile_model() call, which doesn't trigger the error.
* Move `ALLOW_AUTO_BATCHING` to `device_config`
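A hedged C++ sketch of the behaviour described in the last two commits: the hint is passed to `ov::Core::compile_model()` together with the rest of the device configuration, so it is consumed during compilation instead of being rejected by the CPU plugin as an unsupported standalone property. The model path is a placeholder:

```cpp
// Sketch of the C++ path described above: ov::hint::allow_auto_batching is
// passed directly to compile_model() as part of the device configuration,
// rather than pushed to the CPU plugin as a standalone plugin property.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    // Disable auto-batching for this compilation, mirroring -b 1 behaviour.
    auto compiled = core.compile_model(model,
                                       "CPU",
                                       ov::hint::allow_auto_batching(false));
    (void)compiled;
    return 0;
}
```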