Commit Graph

4030 Commits

Author SHA1 Message Date
Egor Duplenskii
d66e322529 [CPU] Add and correct tests for int8 LSTM (#17447) 2023-06-14 09:58:31 +04:00
Pawel Raasz
ca0d40969a Reduce binary size by optimizing exceptions (#18022)
* Separate macros for OPENVINO_THROW; add a default message to the exception to avoid using literals

* Restore the suppress deprecated macro in node

* Restore the Exception ctor to public for the NVIDIA plugin
2023-06-14 07:27:51 +04:00
Sergey Shlyapnikov
e631f65a9b [GPU] Fix in-order queue synchronization issue related to OCL/OneDNN impls interaction with CPU impls (#17976) 2023-06-14 10:15:04 +09:00
Tomasz Dołbniak
b023119b9a Preprocessor's resize shared tests (#17937) 2023-06-13 17:13:45 +00:00
Fang Xu
661f66b5c5 combine test for stream_info_table (#17930)
* combine test for stream_info_table

* fix failed test cases

* add check for performance hint and core type

* add comments for the parameters of test case
2023-06-14 00:40:08 +08:00
Wanglei Shen
0541a12730 update numactl support and add test cases (#17879)
* update numactl support and add test cases

* fix typo

* remove debug info

* update for comments

* update for comments

* move test case into a separate file

* update for comments

* update code style
2023-06-13 23:37:08 +08:00
Egor Duplenskii
e738c4e83f [CPU] Use primitive priority list more efficiently (#17135) 2023-06-13 13:55:27 +00:00
Anastasia Kuporosova
d95c49d888 [PyOV] Deprecate python IE and nG API (#17791) 2023-06-13 13:18:30 +00:00
Egor Duplenskii
dcba37d897 [CPU][TESTS] Exclude cpplint check for custom test targets (#17867) 2023-06-13 12:18:59 +00:00
Anastasiia Pnevskaia
77711be786 tf.Graph decoder. (#16355)
* tf.Graph decoder.

* Fix conflicts.

* Fixed det_input_node()

* Added support for non-frozen models.

* Cleaned code.

* Small fix.

* Small corrections.

* Error fixes.

* Code style.

* Code style.

* Code style.

* Small correction.

* Fixed float32 attributes.

* Small correction.

* Fixed tests.

* Fixed errors.

* Added stateful partitioned call test.

* Import fix.

* Code corrections.

* BOM test fixed.

* Corrected check, added comment.

* Added checks.

* Supported TF GraphIterator in load_by_model().

* Clang format.

* Small correction.

* Fixed example_input logic, added tests.

* Added comment.

* Small correction.

* Corrected example_input description.

* Moved load_by_model test to MO Python API tests.

* Minor corrections.

* Code corrections.

* Small correction.

* Clang format.

* Fixed tests.

* Import change.

* Moved GraphIterator to common FE.

* Tests refactoring, minor fixes.

* Small test correction.

* Removed not needed change.

* Removed commented code.

* Removed not needed change.

* Unit tests fix.

* Temporarily added debug output.

* Test fix.

* Applied comments.

* Fixed test.
2023-06-13 16:04:26 +04:00
Maxim Vafin
48dec1000e [PT FE] Support inplace operations on aliases of tensors (#17856)
* Support operations on aliases of tensors

* Add tests

* Fix issue with convnd

* Fix code style

* Fix issue with tensor index of mutated tensor

* Fix if types alignment

* Fix issues in keypoint detectron2

* Fix issue with masks in detectron2

* Fix accuracy issue in mobilevitv2 models

* Remove unused includes

* Return upsample case in ListConstruct replacer

* Fix types, apply review feedback

* Apply feedback

* Revert change of not using shared_from_this for getitem

* Fix issue in prim::device transformation

* Fix layer tests

* Apply review feedback

* Fix issue with not existing alias to tensor
2023-06-13 13:30:48 +02:00
Ilya Churaev
6043bcb5c0 Fixed Core import model call (Ported from proxy) (#18020) 2023-06-13 08:16:25 +00:00
Yury Gaydaychuk
b25c8ef860 [Commit slider] Tune shell settings for linux (#17884) 2023-06-13 11:54:45 +04:00
Alexandra Sidorova
883a70c91e [Snippets] Fixed set for Windows (#17859) 2023-06-13 10:12:54 +04:00
Ilya Churaev
c8f3ed814b Finalize deprecation of public IE API (#17962)
* Remove NV12 and I420 blobs and deprecate some legacy API

* Fixed some errors

* Remove NV12 blobs

* Remove NV12 conversion

* Fixed other warnings

* Suppress version

* Fix some warnings

* Fixed version

* Try to fix some warnings

* Suppress warnings in C header

* Suppress warnings in C

* Fixed Windows exceptions

* Try to fix warnings

* Try to fix C bindings build

* Suppress InferRequest

* Fixed some build issues

* Fixed some errors

* Fixed build all for macOS

* Suppress some warnings

* Fixed merge conflict
2023-06-13 07:12:17 +04:00
Wang Kai
2d5e087b8b fixing typos in src/plugins/intel_cpu/src/transformations (#18016) 2023-06-13 00:19:46 +04:00
Ilya Churaev
0743e9bfb5 Removed legacy methods SetBatch and SetBlob (#17984)
* Removed legacy methods SetBatch and SetBlob

* Fixed GPU plugin build

* Remove DYN_BATCH_LIMIT from tests

* Revert some changes in GPU plugin
2023-06-12 18:54:23 +00:00
Ilya Churaev
df44f92a97 Remove NV12 and I420 blobs and deprecate some legacy API (#17919)
* Remove NV12 and I420 blobs and deprecate some legacy API

* Fixed some errors

* Remove NV12 blobs

* Remove NV12 conversion

* Fixed other warnings

* Suppress version

* Fix some warnings

* Fixed version

* Try to fix some warnings

* Suppress warnings in C header

* Suppress warnings in C

* Fixed Windows exceptions

* Try to fix warnings

* Try to fix C bindings build

* Suppress InferRequest

* Fixed some build issues

* Fixed some errors
2023-06-12 21:15:02 +04:00
Ilya Lavrenov
90a0e5f81a Refactored setup.py; used pip wheel to build package (#17991) 2023-06-12 14:42:09 +00:00
Evgenya Stepyreva
dd02a0f440 [TFLite] Custom attribute reading and While operation support (#17932)
* Custom attribute reading and While operation support

* Rearranges FLATBUFFERS_LOCALE_INDEPENDENT setting

* Style

* Make flatbuffers code as version independent as possible

* Comments addressed
2023-06-12 14:42:18 +04:00
Aleksandr Voron
1588a33217 [CPU][ARM] Dynamic shapes support in ARM transformations (#17548) 2023-06-12 14:13:16 +04:00
Ivan Novoselov
a106eb0d75 [Snippets] Documentation update (#17678) 2023-06-12 11:07:49 +04:00
Pawel Raasz
16bde0bba6 Add static shape adapters to reduce dimension conversion in shape_infer for CPU (#17862)
* Add static shape adapter
- Adapters hold the CPU dimension, which can be a reference to it or a vector
- Add ov::optional for holding optional result from shape inference
- Add new `infer` function in `IStaticShapeInfer`

* Temporary support of StaticShape

* Fix build issues

* Correct shape adapter compare
- minor static shape adapter refactor

* Minor corrections in ShapeInferenceTA

* Fix subscript operator in StaticShapeRef
2023-06-12 07:05:46 +02:00
Sergey Shlyapnikov
70e0caca4f [GPU] Fix dynamic padding processing of static dimension (#17978) 2023-06-12 08:39:42 +04:00
Mateusz Tabaka
8653f1cbd9 Fix static analysis issues in pruning mask propagation (#17910)
Ticket: CVS-108960
2023-06-11 10:49:31 +02:00
Mateusz Tabaka
93689cc417 Don't constantfold weights in MatMulConstTransposesExtraction transformation (#17917)
get_constant_from_source for the Transpose node calls the evaluate method
twice, which is unnecessary in this case.

Ticket: CVS-105967
2023-06-11 09:49:21 +02:00
Ilya Lavrenov
50c85f01ab ARM static build (#17970) 2023-06-10 23:40:39 +04:00
Ilya Churaev
a0e8d9a630 Deprecate main IE developer API classes (#17983)
* Deprecate main IE developer API classes

* Remove legacy snippet

* Fixed warning for VariableState
2023-06-10 19:25:54 +04:00
Ivan Tikhonov
74100670ac Delete the deprecated LowLatency (version1) transformation (#17965)
* Delete the deprecated LowLatency (version1) transformation

* delete LowLatency refs from the docs
2023-06-10 12:24:43 +04:00
Wilson Seok
cff083f83d [GPU] gather nd shape agnostic kernel implementation (#17940)
* gather nd shape agnostic kernel implementation

* add func test

* fix minor bugs

* minor bug fixes

* fix win build error
2023-06-10 00:28:00 -07:00
Andrew Kwangwoong Park
c413825845 [GPU] Fuse type conversion only reorders to the prev nodes (#17881)
* Fuse convert reorder to prev MVN/Concat node

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add dynamic TCs for ov_gpu_unit_test

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add descriptions for changes

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix kernel selection failure

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add is_type_conversion_only function for reorder_node

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-06-09 16:07:01 -07:00
Egor Duplenskii
36a4c0cb2b [CPU][DEBUG_CAPS] Fix compilation warnings (#17977) 2023-06-10 02:09:59 +04:00
Wang Kai
7b86b427cb fixing some typos (#17980) 2023-06-10 01:13:31 +04:00
Edward Shogulin
8adce06348 [LPT] tests rename for nightly: part #2 (#17926) 2023-06-10 01:12:52 +04:00
Ilya Churaev
724eb94a1d Deprecate plugins config keys (#17974) 2023-06-09 17:55:14 +00:00
Mateusz Tabaka
67f7808fc4 AlignEltwiseInputRanks - fix when output rank is less than constant rank (#17895)
Fixes an issue when AlignEltwiseInputRanks is applied to FakeQuantize with a
scalar as the first input and input/output low/high being Shape{1} constants.
In such a case the FakeQuantize output is still a scalar, so the difference
between the output rank and the input/output low/high rank is negative.

Ticket: CVS-112454
2023-06-09 20:31:20 +04:00
Ilya Churaev
c8e331003f Port some changes from proxy branch (#17961)
* Port some changes from proxy branch

* Port test changes

* Rewrite approach for compile model and tensor

* Fixed review
2023-06-09 16:08:53 +02:00
Ilya Lavrenov
a0119fe33c Android debug build (#17955) 2023-06-09 08:03:10 +04:00
Gorokhov Dmitriy
e564c50d35 [CPU] Fixed KEY_LP_TRANSFORMS_MODE internal property behavior (#17945) 2023-06-08 21:09:37 +04:00
Sun Xiaoxia
9e8d64bf70 Xiaoxia/update cpu mapping (#17451)
* generate cpu mapping table pre tbb

* change function name

* fix wrong proc_type_table on RPL

* add getCpuMapFromCores test, fix comments

* modify test case

* fix comments

* fix code style

* add throwing an exception

* fix numa_nodes=0 on ARM

* modify numa_nodes

* fix ExportOptimalNumStreams failed on ARM

* fix comments

* add description of get_cpu_mapping_from_cores

* update for numactl support

* fix wrong cores

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-06-08 22:28:35 +08:00
Alexandra Sidorova
385cfee24a [Snippets] Fixed copy runtime info which contains PortDescriptors (#17774) 2023-06-08 12:22:48 +00:00
Sergey Shlyapnikov
58d79aa3a6 [GPU] Add shape_of subgraphs markup and initial cpu implementations (#17762)
* [GPU] Add shape-of subgraphs markup and initial CPU implementations for some primitives

* Apply review comments

* Exclude eltwise with boolean mode types from shape-of subgraphs and fix leftovers
2023-06-08 13:46:21 +04:00
Alexandra Sidorova
eb3e6a65eb [Snippets] Add support of MHA Tokenization for different precisions (#15647) 2023-06-08 08:05:14 +00:00
Ilya Lavrenov
bdfa970c7a Support different names for ITTAPI (#17889) 2023-06-08 11:08:58 +04:00
Taylor Yeonbok Lee
f246015dd7 [GPU] Fix issue in runtime buffer fusing (#17909)
* There were two issues in runtime buffer fusing:
1) Missing condition in the matcher for dynamic tensors
2) If the node is marked as can_be_optimized = true at build time and then turns out to be false at runtime, kernel compilation was skipped because it was checking node->can_be_optimized
=> To resolve this, added can_be_optimized to impl_param and let the impl creation check can_be_optimized in impl_param instead of the one in the node.

* Fixed primitive::can_be_optimized to be set through a function
2023-06-07 19:39:26 -07:00
Edward Shogulin
655c21adf1 [CPU] Quantized MHA extension for SmoothQuant (#17906) 2023-06-07 14:31:06 +00:00
Anton Voronov
2547301fa7 [CPU] gemm convolution: fixed bias offset (#17357) 2023-06-07 17:15:01 +04:00
Georgy Krivoruchko
ee659c1ce8 [TF FE] Workaround for Broadcast/Concat issue with empty tensors (#17864)
* Added transformation for Concat

* Added test

* CI fix

* Fixed behavior of the "empty tensor list" test
2023-06-07 14:17:20 +04:00
Pawel Raasz
f023f5d672 Add interpolate from all opsets to cpu shape infer (#17875) 2023-06-07 11:28:45 +02:00
hyunback kim
13028397b7 Optimize permute gemm onednn (#17621)
* [GPU] Optimized out permute in permute-gemm(onednn) pattern.

Permute can be optimized out when the permute's input and output are compatible and the gemm uses oneDNN.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-06-07 16:20:59 +09:00