Commit Graph

557 Commits

Wanglei Shen
72cb4e4820 add additional checks for bad cache information in VM (#21059)
* add additional checks for bad cache information in VM

* update implementation
2023-11-27 07:51:44 +08:00
Ilya Lavrenov
e087ed083c Revert "the proxy device with id should be also a proxy device. (#21248)" (#21271)
This reverts commit c491381a0d.
2023-11-24 17:49:37 +04:00
Wang, Yang
c491381a0d the proxy device with id should be also a proxy device. (#21248) 2023-11-24 12:09:46 +04:00
Sun Xiaoxia
b7edd5df69 migrate threading related interface from API 1.0 to 2.0 (#21167)
* migrate threading related interface from API 1.0 to 2.0

* fix code style

* fix @ref issue in doc

* change <> to quotation marks

* restore threading related interface API 1.0

* restore the changes of legacy code
2023-11-24 10:52:44 +04:00
Fang Xu
03d54a579e binding pcore for stream calculation (#19550)
* binding pcore for stream calculation

* remove useless branch

* modify the function of query cache size

* fix compilation error

* use MT2.0 interface

* bind core when there is ecore

* initialize Xbyak::util::Cpu object at the beginning of compile_model

* restore the file

* initialize Xbyak::util::Cpu object at the beginning of cpu plugin

* remove unused header

* extract task executor creation into a separate function

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-11-23 12:14:36 +08:00
River Li
aaa08d013a [Core]Remove macro CONFIG from core_impl (#21142)
* Remove macro CONFIG from core_impl

* Fix errors

* Solve AutoBatching_Test_DetectionOutput failure issue
2023-11-21 12:51:30 +04:00
Pawel Raasz
5d6d6a2cfe [core]Optimize OV assert macros to reduce CPU plugin binary size (#21180)
* Optimize OV assertions to reduce bin-size of libs

* Migrate assertion leftovers in CPU plugin

* Add NotImplemented::create to support use with macro
OPENVINO_ASSERT_HELPER

* Remove CheckLocInfo struct

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-11-21 07:16:30 +01:00
Haiqi Pan
79c9f4ad44 migrate executable_network test case to compiled_model (#19541)
* add get property test case

* add CanCreateTwoExeNetworksAndCheckFunction and pluginDoesNotChangeOriginalNetwork

* tmp

* add canSetInputPrecisionForNetwork and canSetOutputPrecisionForNetwork

* fix code style

* tmp

* Revert "tmp"

This reverts commit ff3f8d56d5.

* tmp

* add CanCompileModelWithEmptyProperties

* remove comment

* add infer request

* add CanLoadNetworkWithCustomLocale

* add CanLoadNetworkWithCustomLocale

* fix error

* remove case in io_tensor.cpp

* fix Unsupported metric key: OPTIMAL_BATCH_SIZE

* Update src/tests/functional/plugin/conformance/test_runner/api_conformance_runner/include/api_conformance_helpers.hpp

Co-authored-by: Chen Peter <peter.chen@intel.com>

* fix batch size

* remove useless code

* remove failed case CanCompileModelWithEmptyProperties, CanLoadNetworkWithCustomLocale and LoadNetworkWithBigDeviceIDThrows to test

* Revert "remove failed case CanCompileModelWithEmptyProperties, CanLoadNetworkWithCustomLocale and LoadNetworkWithBigDeviceIDThrows to test"

This reverts commit 1317d0773c.

* Revert "remove useless code"

This reverts commit b3dd0ffaab.

* Revert "fix batch size"

This reverts commit 2afd673cff.

* Revert "Update src/tests/functional/plugin/conformance/test_runner/api_conformance_runner/include/api_conformance_helpers.hpp"

This reverts commit 9d6030952f.

* Revert "fix Unsupported metric key: OPTIMAL_BATCH_SIZE"

This reverts commit 2de26547ea.

* try to add optimal_batch_size in cpu plugin

* return model when optimal_batch_size not found in apply_auto_batching

* revert cpu plugin

* gna cannot support some models

* update test case name

* skip CanCreateTwoCompiledModelsAndCheckRuntimeModel in gna

* Update src/inference/src/dev/core_impl.cpp

Co-authored-by: Chen Peter <peter.chen@intel.com>

* Update src/inference/src/dev/core_impl.cpp

Co-authored-by: Chen Peter <peter.chen@intel.com>

* replace deviceName.substr(pos + 1) with deviceNameWithoutBatch

* fix bug

* Update src/inference/src/dev/core_impl.cpp

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

* Update src/tests/functional/plugin/shared/include/behavior/compiled_model/compiled_model_base.hpp

Co-authored-by: River Li <river.li@intel.com>

* Update src/tests/functional/plugin/shared/include/behavior/compiled_model/compiled_model_base.hpp

Co-authored-by: River Li <river.li@intel.com>

* Update src/plugins/intel_gna/tests/functional/shared_tests_instances/skip_tests_config.cpp

Co-authored-by: River Li <river.li@intel.com>

* Update src/plugins/intel_gna/tests/functional/shared_tests_instances/skip_tests_config.cpp

Co-authored-by: River Li <river.li@intel.com>

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: River Li <river.li@intel.com>
2023-11-15 13:27:42 +01:00
Maksim Kutakov
75acdac5f9 Extend the existing IO binding subgraph tests (#20987) 2023-11-13 15:22:41 +01:00
Maciej Smyk
5a04359200 [DOCS] Updating links for master 2023-11-13 12:23:23 +01:00
Maksim Kutakov
fb3751717f [Inference] Return state ptr by value (#21011)
* Return state ptr by value

* Fix mock class
2023-11-13 14:24:54 +04:00
Ilya Lavrenov
68e6484ecb Fixed version detection without git (#20951) 2023-11-08 14:30:15 +01:00
River Li
a0849edca1 [CPU] migrate cpu plugin api 2.0 (#18124)
* [CPU] CPU plugin migrates to plugin API 2.0

* Fix legacy config/metric issue

* Fix some issue of ov_cpu_func_tests

1. set_tensors_impl segment fault
2. ov::loaded_from_cache unsupported issue

* Resolve some comments

1. ov::loaded_from_cache issue
2. throw_if_cancelled issue
3. import_model issue
4. set_tensor_impl issue
5. batched_inference issue

* Fix dynamic shape inference issue

* Fix build error

* keep original model info in infer_request

* Fix minor error

* cache internal tensors for input/output precision change

* Disable import model test cases with precision changes

* fix precision issue

* Fix issue for import model

* Fix InferRequestCancellationTests exception issue

* Skip InferRequestIOBBlobTest.*secondCallGetInputDoNotReAllocateData because the new plugin API has different behavior

* Fix graph name issue

* Fix ROI issues

* Fix Transpose shape issue

* Skip vie::Version test due to change to ov::Version

* Solve input port name changes issue

* Solve preprocess layout issue

* Fix minor issue

* tidy up code

* Fix conflict after rebase

* Fix Windows build warning

* Add aux tensors for precision change issue

* Fix import/export model issue

* WA single layer name changed by preprocess

* Revert "WA single layer name changed by preprocess"

This reverts commit bc8fcdd43c.

* Skip some legacy tests because plugin API 2.0 is enabled

1. skip some Python legacy tests whose behavior differs under plugin API 2.0
2. skip some smoke tests because output port names were changed

* Fix 2 build warnings

* Skip some AUTO plugin tests

* Fix property issue caused by AUTO plugin

* Skip PSROIPooling issues

* Follow header files reference policy

* Split out transformation fixing for nop_elimination

* Fix AUTO plugin mismatch issue for get_tensor function

* Fix aux tensor shape issue

* Fix tensor shape issue

* WA python sync inference sample's segmentfault issue

* Fix reshape issue for dynamic inference

* Fixed incorrect tensor name in e2e test

Fixed issue: e2e ONNX_Customized_Cascade_Rcnn_api_2_True_batch_1_device_CPU_precision_FP325den8cnk

* Fix python segmentfault issue of plugin api 2.0

* Fix python segmentfault issue of plugin api 2.0

* Revert "Fix python segmentfault issue of plugin api 2.0"

This reverts commit 6f502e5d86.

* Fix onnx_duplicated_output_name due to empty tensor

Co-authored-by: Bell, Song <bell.song@intel.com>

* Remove redundant code

* Remove python segment fault WA

* Keep rt_info to fix test failure in case of legacy public api

* Fix output port names missing issue

* Adress some reviewers' comments

* Restore OnnxBackendNodeModelTest::test_maxpool_with_argmax_2d_precomputed_pads_cpu after the fix has been merged

* Resolve tensor sharing issue when there are output ports with the same name

In some cases a model has 2 or more input/output ports with the same name; they also have the same
precision and partial_shape. Compiled_model will share the same ov::Descriptor::Tensor pointer
and ov::Tensor between such ports.
Solving the python segment fault issue required creating separated input/output ports, which also
need to handle this shared-tensor case; this patch does that.

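The shared-tensor bookkeeping described above can be sketched in isolation (TensorDesc, TensorRegistry and get_or_create are illustrative names, not OpenVINO's real types): ports with an identical (name, precision, shape) triple get the same tensor object.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <tuple>
#include <vector>

// Illustrative stand-in for ov::descriptor::Tensor: ports that share a
// name, element type and shape should also share one tensor object.
struct TensorDesc {
    std::string name;
    std::string precision;
    std::vector<std::size_t> shape;
};

using TensorPtr = std::shared_ptr<TensorDesc>;

// Deduplicate: return the existing tensor for an identical
// (name, precision, shape) triple, or create and remember a new one.
class TensorRegistry {
public:
    TensorPtr get_or_create(const std::string& name,
                            const std::string& precision,
                            const std::vector<std::size_t>& shape) {
        auto key = std::make_tuple(name, precision, shape);
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;  // same-named port reuses the shared tensor
        auto tensor = std::make_shared<TensorDesc>(TensorDesc{name, precision, shape});
        cache_.emplace(std::move(key), tensor);
        return tensor;
    }

private:
    std::map<std::tuple<std::string, std::string, std::vector<std::size_t>>, TensorPtr> cache_;
};
```

With this, two ports named "out" with the same precision and shape resolve to one shared pointer, while a differently named port gets its own tensor.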

* Better method to find shared tensor desc

* rename with snake_case style

* Remove ngraph header files

* Keep external_ptr naming

* Add OPENVINO_SUPPRESS_DEPRECATED for some legacy code

* Use port's tensor_ptr to replace creating new tensor_ptr

* Resolve some reviewer comments

* Implement ov::IInferRequestInternalWrapper::GetPreProcess to recover python GetPreProcess tests

* Remove unnecessary header files reference

* Assert the risk of precision change and reorder at the same time

* Modify legacy python test to fit plugin api 2.0 behavior

* Recover smoke_Transpose(2|4|5|6)D/TransposeLayerTest.CompareWithRefs because the fix is merged

* Fix typo issue

* Address reviewer's comments

* Disable precision conversion

* Fix error when CpuBlockedMemoryDesc

* Remove precision mismatch WA

* WA precision issue for query_model

* Solve precision mismatch between compiled model and graph

* Fix failure of query_model

* Rebase to new plugin api update

* Recover the test cases of precision mismatch

* Try to fix name changing for graph model

* Remove test code

* Remove fp64

* Rebase to new plugin api update

* Update for some failure cases

* Fix bert_benchmark failure issue

* Avoid segment fault in arm acl

The legacy public API + CPU plugin API adds a convert op via preprocessing by default for unsupported precisions,
but ACLConvertExecutor cannot support dimension > 6, so these tests segfault when dimension > 6

smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=I8_trgDev=CPU
smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=U8_trgDev=CPU

* Remove precision change from preprocess to avoid ACL unsupport convert dim > 6

* ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert

* Revert "ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert"

This reverts commit fd7a8b35af.

* Revert "Remove precision change from preprocess to avoid ACL unsupport convert dim > 6"

This reverts commit 3c2d9a5f17.

* Debug

* Debug incorrect precision checking issue

* Debug Eltwise FP64 unsupported issue

* Add logs for precision

* debug log

* Update for new dependent PRs merged

* Fix failure caused by preprocess

Fix the failures below, caused by ops that cannot be found by name
     smoke_LPT/ReduceMaxTransformation.CompareWithRefImpl/f32_[1,3,10,10]_CPU_f32__256*

* Fix build error

* Fix failure caused by missing code during rebase

* Add debug

* Fix precision unsupport issue

* U16/I16/U64 precision support

* Resolve the issue of f64 reorder

Fix below issue:
Cannot create reorder primitive: unsupported reorder case

* Fix convert multiple child edge issue

* Solve ROI tensor failure issues

* Temporarily disable num_nodes comparison

* Only change convert precision for fp64

* Put convert precision change before reorder to avoid confusion

* Add debug log for transformation

* Fix rebase confilict

* Fix clang issue

* Temporarily disable test_infer_mixed_values python test of bf16

* Solve issue of smoke_ConvertCPULayerTest_BOOL_Dynamic_inputPRC=BF16 choosing FP32 primType rather than BF16 primType

* Fix issue of pytorch_tests/test_outer.py

There are 2 output ports with the same port name; they should share the same tensor.

* Fix arm cannot find Eltwise executor issue

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
will report the error below:
	 [ GENERAL_ERROR ] Supported Eltwise executor is not found
The convert precision needs to be changed to avoid this problem.

* Fix memory overwritten issue

* Temporarily skip arm fp16 SetBlobTest

* Fix compile error after rebase

* Restore smoke_IsOp test because the fixing PR was merged

* Fix float to bf16 issue in avx2 isa

* solve onnx test xfail issue

* Skip test cases that ARM Eltwise executor FP16 is not supported

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=BOTH_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL

      [ GENERAL_ERROR ] Supported Eltwise executor is not found

* [CPU] improve reorder to support any precision

* Implement ReorderExecutor

* Fix build error

* Do not cache the executor because its primitive has been cached

* Keep at most one convert

Insert at most one convert if needed; if the reorder still cannot be done, throw an exception rather than insert a second convert.
For example, the reorders below will not be supported:
   FP64<->I64/U64/U32
   U32<->I64/U64
   U32<->I16/U16
   FP64<->FP64
   BIN<->BIN
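The "at most one convert" policy can be sketched abstractly; everything below is illustrative (convert_supported, reorder_primitive_supported and plan_reorder are made-up helpers, not the plugin's real code), with the unsupported pairs taken from the list above.

```cpp
#include <cassert>
#include <set>
#include <stdexcept>
#include <string>
#include <utility>

// Precision pairs for which no convert exists (in either direction),
// mirroring the FP64<->I64/U64/U32 and U32<->... cases listed above.
bool convert_supported(const std::string& a, const std::string& b) {
    static const std::set<std::pair<std::string, std::string>> unsupported = {
        {"f64", "i64"}, {"f64", "u64"}, {"f64", "u32"},
        {"u32", "i64"}, {"u32", "u64"}, {"u32", "i16"}, {"u32", "u16"},
    };
    return !unsupported.count({a, b}) && !unsupported.count({b, a});
}

// Precisions the reorder primitive itself cannot handle (FP64<->FP64, BIN<->BIN).
bool reorder_primitive_supported(const std::string& p) {
    return p != "f64" && p != "bin";
}

// Returns the number of converts inserted (0 or 1), or throws: never a second convert.
int plan_reorder(const std::string& src, const std::string& dst) {
    if (src == dst) {
        if (!reorder_primitive_supported(src))
            throw std::runtime_error("unsupported reorder case");  // f64<->f64, bin<->bin
        return 0;  // plain reorder handles the layout change
    }
    if (!convert_supported(src, dst) || !reorder_primitive_supported(dst))
        throw std::runtime_error("unsupported reorder case");      // no second convert allowed
    return 1;      // exactly one convert, then reorder
}
```

So f32->i8 plans one convert, f32->f32 needs none, and f64->f64 throws instead of chaining converts.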

* Only do conversion if layout is same

* update for only convert case

* Update for reviewer comments

* update for failure cases

* Address reviewer comments

* Update rebase issue

* minor update

* Solve unsupported precision issue in transformation rather than init_edge

* Remove unnecessary convert in init_edge

* Minor changes

* Update Reorder::reorderData

* Solve the issue of conversion only, without reorder

* Address reviewer comments

* Address reviewer comments

* Keep exception for unsupported precision

* update

* Revert reorder executor implement

* Solve float->bool issue on transformation pipeline

* Solve I64 is not supported issues

* Solve reviewer's comments

* Fixed dynamic top_k node issue

* Skip nhwc and nChw16c test cases for ConvertLayer

* Update for reviewers' comments

* Fix some failures

* Update for several failure cases

* Update for apiConformanceTests failures

* Fix incorrect node name after import model

* update

* update comments

* Solve issue of smoke_MatMul_NoTranspose and smoke_MatMul_BothTranspose

* Fixed AlignMatMulInputRanks scalar issue

* Address reviewers' comments, remove redundant path in graph.cpp

* Remove test_div_uint8_cpu from xfail_issue_58676

* Solve invalid number of nodes for smoke_Snippets_BroadcastSelect

* ConstantResultSubgraphTest of u16/i16/u32/i64/u64

* restore smoke_SetBlobCPU BOOL tests for arm

* [CPU] Fix ARM precision issue

ARM64 ACL prefers fp16 over fp32, while API 2.0 requires that input/output precisions do not change,
so an fp32 input triggers insertion of a convert node from fp32 to fp16.
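The fp16 bridging described here can be sketched generically (Node and lower_input are made-up names; the real plugin inserts an actual Convert op during preprocessing): the port keeps the precision the user declared, and a Convert bridges to the device-preferred precision inside the graph.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-in for a graph node: just a type and an output precision.
struct Node {
    std::string type;
    std::string out_precision;
};

// Build the boundary of the executable graph for one input port.
// API 2.0 rule: the Input node keeps the user-visible precision;
// a Convert node is appended only when the device prefers another one.
std::vector<Node> lower_input(const std::string& port_precision,
                              const std::string& device_precision) {
    std::vector<Node> nodes{{"Input", port_precision}};   // port stays f32
    if (port_precision != device_precision)
        nodes.push_back({"Convert", device_precision});   // f32 -> f16 inside
    return nodes;
}
```

An f32 port on an fp16-preferring device lowers to Input(f32) + Convert(f16); an f16 port needs no Convert.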

* Solve some ARM64 failures

* Fix arm64 InferRequestVariableStateTest tests out of memory issue

ARM64 forces fp16 precision, so the states memory can be fp16; the memcpy to state_memory
cannot compute its size from float's element_size, or it will go out of the memory bounds.

* Skip 2 arm64 tests caused by forcing fp16 precision

* Revert "Fix arm64 InferRequestVariableStateTest tests out of memory issue"

This reverts commit 3e12bd48c2.

* Fix python test_get_profiling_info failure issue

---------

Co-authored-by: Bell, Song <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-11-07 15:25:05 +01:00
Vladislav Golubev
2932e9e938 ReshapeBMatMul and ReshapeAMatMul: avoid circular dependencies creation (#20771) 2023-10-31 15:00:52 +04:00
Fang Xu
82f191b0e7 choose Pcore to compile model for GPU plugin (#20472)
* choose Pcore to compile model for GPU plugin

* provide function to update executor config

* set callback executor to nullptr for GPU plugin

* fix code style

* fix warning

* optimize duplicate code

* set callback executor to nullptr for another gpu compile_model

* add description for new function

* add smoke test

* fix code style

* modify function definition

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-10-30 16:24:36 +08:00
Ilya Lavrenov
620a0fc289 Fixed compilation with C++23 (#20724) 2023-10-27 16:29:40 +04:00
River Li
be25d9038e Fix stride issue for ZeroDims (#20686)
* Fix stride issue for ZeroDims

* Add test case

* Fix ITensor::is_continuous() issue

* Fix the same issue in gpu plugin and template plugin
2023-10-27 09:27:53 +04:00
Sun Xiaoxia
b9c64370fb Fix memory leak on windows (#20590) 2023-10-26 09:42:50 +04:00
Gorokhov Dmitriy
63299ec217 [CPU] FullyConnected acceleration with 4bit weights decompression (#20607) 2023-10-26 01:08:07 +04:00
Fang Xu
361011c75e fix coverity scan issue (#20678) 2023-10-25 12:48:43 +04:00
Vladimir Paramuzov
307176e5c6 [GPU] Fixed surfaces shape in create_tensor_nv12 helpers (#20539) 2023-10-25 12:46:47 +04:00
Ilya Churaev
7ceff55b71 Add AlignedBuffer to OpenVINO developer API (#20532)
* Add AlignedBuffer to OpenVINO developer API

* Fixed build

* Fixed code style and remove opset deprecation

* Fixed Windows build

* Fixed GNA

* Fixed comment
2023-10-24 06:13:23 +00:00
Yuan Hu
84a0994ec5 [core] fix memory leak issue imported by #18868 (#19832)
* try to fix memory leak issue

the cpustreamer is released, but there are still thread ids in t_stream_count_map
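The leak pattern and its fix can be sketched outside the plugin (CpuStreamer and g_stream_count_map here are stand-ins for the real classes): keying the registry by an id the streamer owns lets its destructor erase exactly its own entries instead of leaving stale ones behind.

```cpp
#include <cassert>
#include <map>
#include <mutex>

// Stand-in for t_stream_count_map: stream id -> use count.
std::map<int, int> g_stream_count_map;
std::mutex g_map_mutex;

// Sketch of a streamer that registers itself under a custom stream id
// and unregisters in its destructor, so releasing the streamer cannot
// leave a dangling entry in the map.
class CpuStreamer {
public:
    explicit CpuStreamer(int stream_id) : id_(stream_id) {
        std::lock_guard<std::mutex> lock(g_map_mutex);
        ++g_stream_count_map[id_];
    }
    ~CpuStreamer() {  // releasing the streamer removes its map entry
        std::lock_guard<std::mutex> lock(g_map_mutex);
        if (--g_stream_count_map[id_] == 0)
            g_stream_count_map.erase(id_);
    }

private:
    int id_;
};
```

While a streamer is alive its id is present in the map; once it is destroyed the entry is gone, which is the behavior the fix above aims for.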

* fix threadlocal affect all threads

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

* add a comment to the local() function to avoid mistaken modification in the future

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

* use custom stream id

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

* fix review comments

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

* fix format issue

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

* create shared_ptr before assert

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>
2023-10-24 13:59:08 +08:00
Fang Xu
5e017dc5d2 fix compilation issue for openmp on windows (#20312)
* fix compilation issue for openmp on windows

* update based on suggestions
2023-10-23 15:18:51 +04:00
Ilya Churaev
865b21ecd4 Introduce WA to improve performance of find_port() method (#20573)
* Introduce WA to improve performance of find_port() method

* Add mutex

* Remove redundant lock

* Reduce the number of get_tensor_ptr calls

* Fixed typo

* Removed WAs from Hetero plugin
2023-10-23 13:44:58 +04:00
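The kind of workaround this commit describes can be sketched generically (PortIndex and find_port here are illustrative, not the actual ov:: internals): cache the result of a linear port scan behind a mutex so repeated lookups become O(1).

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch of a cached port lookup: the first query for a name does a
// linear scan; subsequent queries hit the name -> index cache.
// The mutex keeps the cache safe under concurrent infer requests.
class PortIndex {
public:
    explicit PortIndex(std::vector<std::string> ports) : ports_(std::move(ports)) {}

    std::size_t find_port(const std::string& name) {
        std::lock_guard<std::mutex> lock(m_);
        auto it = cache_.find(name);
        if (it != cache_.end())
            return it->second;  // served from cache, no scan
        for (std::size_t i = 0; i < ports_.size(); ++i) {
            if (ports_[i] == name) {
                cache_.emplace(name, i);
                return i;
            }
        }
        return SIZE_MAX;  // not found
    }

private:
    std::vector<std::string> ports_;
    std::unordered_map<std::string, std::size_t> cache_;
    std::mutex m_;
};
```

The trade-off is the usual one for such workarounds: a small amount of memory and locking in exchange for avoiding a repeated linear scan on every call.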
Karan Jakhar
5dafee4ac1 fixing typo, suppored -> supported (#20639) 2023-10-22 17:25:59 +04:00
Wang, Yang
86000bb8a2 [GPU] Reserve CPU resource for GPU inference (#19214)
* Update.

* Update.

* Update.

* add test case.

* Update.

* Update test cases.

* Update.

* Update.

* Updated.

* Updated.

* Updated.

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-10-17 05:42:56 +00:00
Ilya Lavrenov
56d74a82cb Relocatable developer package (#20327)
* Merge Linux CC + static build + clang compiler

* Improvements

* Removed ie prefixes from cmake scripts

* Fixes for NPU

* Initial relocatable OpenVINO Developer package

* Improvements

* Try to fix

* improvements

* Export a lot of headers

* Removed NVIDIA pipeline; make it to be a job

* Fixes

* Fixes 2

* Try ilya-lavrenov repo

* Clean-up

* More improvements

* Even more improvements

* Override export, install

* Override export, install

* Disable pythonwheel generation for relocatable OV dev package

* Fixed issues with versions

* Fixed android build

* Fixed android build

* Fixed NPU build

* Update src/bindings/python/CMakeLists.txt
2023-10-12 22:59:04 +00:00
Vladislav Golubev
5894fbe69d [CPU] Group & NF4 decompression transformation support (#20039) 2023-10-11 15:25:00 +04:00
Ilya Churaev
8020530e67 Reduce ngraph namespace usage from core component (#20309)
* Reduce ngraph namespace usage from core component

* Fixed build

* Fixed build 2

* Added missed opset to legacy API
2023-10-11 07:09:04 +04:00
Wanglei Shen
60b82372d1 Support SRF in MT 2.0 on Linux (#20301)
* add test data for SRF on Linux

* update cpu map detection for Ecore only platform

* update test data for smoke test of streams generation

* update test data
2023-10-10 13:59:27 +08:00
Ilya Lavrenov
e30f75bb4d Rpath story (#20297) 2023-10-10 06:27:26 +02:00
Ilya Lavrenov
ead4b8a0ec Moved cmake functions, variables to API 2.0 naming style (#20281)
* Merge Linux CC + static build + clang compiler

* Improvements

* Removed ie prefixes from cmake scripts

* Fixes for NPU
2023-10-09 22:30:32 +04:00
yanlan song
ad41d0f52f rework auto test cases (#19862)
* initial commit

Signed-off-by: fishbell <bell.song@intel.com>

* clean up

Signed-off-by: fishbell <bell.song@intel.com>

* fix windows build failure

Signed-off-by: fishbell <bell.song@intel.com>

* enable auto func tests

Signed-off-by: fishbell <bell.song@intel.com>

* enable auto_func_test to ci

Signed-off-by: fishbell <bell.song@intel.com>

* some clean up in gpu case

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* fix build warning

Signed-off-by: fishbell <bell.song@intel.com>

* enable new tests

Signed-off-by: fishbell <bell.song@intel.com>

* fix build warning

Signed-off-by: fishbell <bell.song@intel.com>

* enable consistency test

Signed-off-by: fishbell <bell.song@intel.com>

* try fix build error on manylinux

Signed-off-by: fishbell <bell.song@intel.com>

* enable cpplint

Signed-off-by: fishbell <bell.song@intel.com>

* enable clang-format

Signed-off-by: fishbell <bell.song@intel.com>

* enable some tests

Signed-off-by: fishbell <bell.song@intel.com>

* fix typo

Signed-off-by: fishbell <bell.song@intel.com>

* clang for unit tests

Signed-off-by: fishbell <bell.song@intel.com>

* fix merge conflict

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-10-07 14:44:25 +04:00
Ilya Lavrenov
d6c2a10b38 Merge Linux CC + static build + clang compiler (#20243)
* Merge Linux CC + static build + clang compiler

* Improvements

* Fixes
2023-10-06 00:30:11 +04:00
Ivan Tikhonov
3d6fb85a99 Model builders refactoring: rename dirs, targets, file names (#19885)
* Model builders refactoring

* Apply review comments

* resolve review commets: update cmake target names

* fix build: use correct headers

* fix headers

* fix build

* fix docs
2023-10-04 18:08:24 +02:00
Ilya Churaev
ea37126ea5 Removed ie:: namespace (#20172) 2023-10-02 14:02:14 +04:00
Ilya Lavrenov
95e3096684 Added build on RedHat system to build & test RPM packages (#20134)
* Added GHA workflow for RPM packages

* Avoid rebuild for RPM / Debian packages

* Removed conditional include headers

* try only post-build

* Beautification

* Fixed testdata generation for multi-config generators
2023-10-01 23:23:06 +04:00
Ilya Lavrenov
a6e7bac962 Added RISC-V Conan build (#20064) 2023-09-27 12:24:20 +04:00
Wang, Yang
bf7fcb08e7 No exception throws when getting version from unregistered plugin (#19722)
* Updated the behavior of core.get_version() and added corresponding test cases.

* Remove the prompt message.

* Update src/inference/src/dev/core_impl_ie.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: yanlan song <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-09-25 17:43:21 +04:00
Sun Xiaoxia
678e919b13 CPU pinning on Windows (#19405)
* add cpu pinning on windows

* remove pinning limitation on windows

* only support the machine with one numa node

* fix code style

* fix build error on macos

* set mask initial value

* fix test failure on windows

* fix build failure on macos, add limitation on windows machine with two sockets

* fix test failure on windows

* fix test failure

* fix comments
2023-09-23 11:28:15 +08:00
Vladislav Golubev
0e0e1b0ee6 SmartReshape: ReshapeMatMul transformations are fixed (#19987)
* SmartReshape: ReshapeMatMul transformations are fixed

* clang-format fixes
2023-09-21 19:49:07 +02:00
Wanglei Shen
1786e366a2 Support new format for CPU information on Win7 (#19759) 2023-09-21 09:17:47 +04:00
Ilya Churaev
4a0e3fc77f Remove ngraph from some sources (#19966) 2023-09-21 06:27:30 +04:00
Ilya Churaev
9c908f5245 Migrate template backend to new api (#19843)
* Added set_element_type for tensor

* Moved template backend to new API

* Revert "Added set_element_type for tensor"

This reverts commit 27608d2ea0.

* Fixed build

* Fixed more errors

* Fixed loop implementation

* Fixed ONNX tests

* Small change

* Fixed set_shape for host tensor

* Fixed ReadValue Assign tests

* Fixed template tests

* Fixed build

* Fix Windows build

* Fixed more errors

* Fixed CTCLoss

* Fixed comments

* Removed all comments

* Fixed tensors update

* Try to fix tests
2023-09-19 11:46:11 +04:00
Ilya Lavrenov
54609e7b72 Removed FixRtInfo pass (#16870)
* Removed FixRtInfo pass

* Removed FixRtInfo pass

* Fixed macOS compilation
2023-09-19 00:07:59 +04:00
Ilya Lavrenov
db395155b3 Removed warning suppressions for extra modules (#16479) 2023-09-15 02:53:32 +00:00
Ilya Lavrenov
ba67db66ae Properly enable CMAKE_COMPILE_WARNING_AS_ERROR (#19828)
* Properly enable CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT

* Properly enable CMAKE_COMPILE_WARNING_AS_ERROR
2023-09-15 01:20:00 +04:00
Ilya Lavrenov
35a0706dff Replaced several cmake utilities with new ov_ prefix (#19819)
* Replaced several cmake utilities with new ov_ prefix

* Replaced several cmake utilities with new ov_ prefix
2023-09-14 16:22:50 +04:00
Ilya Churaev
fa667156cb Check HolderTests under the proxy (#19785)
* Skip only virtual device tests

* Fixed proxy life time

* Fixed compiled model get property

* Fixed code style

* Try to fix LTO
2023-09-14 15:11:26 +04:00