Compare commits

...

460 Commits

Author SHA1 Message Date
Luwei Zhou
1f39925343 Revert reorder WR. 2022-05-27 10:55:22 +08:00
Li, Tingqian
997337c33d Add OV_CPU_DEBUG_LOG to control which debug logs to show 2022-05-26 21:08:58 -04:00
Li, Tingqian
595523e6d3 fix cpplint 2022-05-26 22:35:31 +08:00
Li, Tingqian
0b21f70339 fix oneDNN register_jit_code log 2022-05-26 22:32:10 +08:00
Li, Tingqian
bb429a855a change VERBOSE_LOG to DEBUG_LOG 2022-05-26 22:31:41 +08:00
Li, Tingqian
ff38537aea Merge branch 'luocheng/onednn_2.6' of github.com:luo-cheng2021/openvino into luocheng/onednn_2.6 2022-05-26 22:07:08 +08:00
Li, Tingqian
d570dc4e16 Add a new CPU_DEBUG_CAPS: OV_CPU_SUMMARY_PERF 2022-05-26 22:07:00 +08:00
Li, Tingqian
0f23ee9ad6 Merge branch 'luocheng/onednn_2.6' of github.com:luo-cheng2021/openvino into luocheng/onednn_2.6 2022-05-26 09:22:54 -04:00
Li, Tingqian
9f5e3c50b9 cpu dump check supports compare dump files 2022-05-26 09:22:42 -04:00
Luwei Zhou
fa38c36b2f Update ONEDNN version to fix AVX2 bug. 2022-05-26 19:59:11 +08:00
Li, Tingqian
c533aa6b8b fix cpplint 2022-05-26 07:32:18 -04:00
Li, Tingqian
5b6827649c fix binary prelu post Ops 2022-05-26 06:21:55 -04:00
Li, Tingqian
f037208b3f Generate exec graph in cpu dump check tool 2022-05-26 05:52:41 -04:00
Li, Tingqian
a1ccf115df Add verbose log 2022-05-26 02:46:15 -04:00
Li, Tingqian
8b7f7d6391 Add CPU dump check tool 2022-05-25 21:07:02 -04:00
Zhang Yi3
6a8b37fe4f [CPU] Add BF16 AMX test for Matmul 2022-05-25 17:28:57 +08:00
Luo Cheng
f326f0b824 make nightly case run. tested on amx/avx512/avx2. 2022-05-25 13:25:56 +08:00
Luo Cheng
ecdda2c909 mix legacy/new binary postops 2022-05-24 20:21:42 +08:00
Zhang Yi3
c9304c5c7f [WA] skip fakequantize fusing in bf16 2022-05-24 17:26:46 +08:00
Luwei Zhou
04ba093dca Compiling issue fix. 2022-05-23 16:01:29 +08:00
Luwei Zhou
d81896502f WR enforce reorder bug and add NSPC into deconv supported list. 2022-05-23 14:53:35 +08:00
Luo Cheng
26a20e5875 testcase for conv brgconv avx512/amx 2022-05-23 09:48:50 +08:00
Luo Cheng
d7d4097418 testcase for conv brgconv avx512/amx 2022-05-22 21:10:08 +08:00
Luo Cheng
91ec6b0806 testcase init support amx check 2022-05-20 22:30:27 +08:00
Luo Cheng
841b2dc47a disable BF16 fusing fakequant testcase 2022-05-20 20:57:30 +08:00
Luo Cheng
b53bc6bac5 MemoryInput/Output support bf16; Enforce bf16 'NO' should enable snippets 2022-05-20 20:01:19 +08:00
Li, Tingqian
f5af87f810 Fix cpplint error 2022-05-20 11:59:55 +03:00
Li, Tingqian
c4021f8e6e Fix primType check issue 2022-05-20 11:15:55 +03:00
Luo Cheng
a1f21f3797 fix clang check 2022-05-19 20:12:50 +08:00
Luo Cheng
b08bf8a9de testcase subgraph sets default ENFORCE_BF16 to NO 2022-05-19 19:55:35 +08:00
Luo Cheng
cecd11457e OVClassBasicTest case typo 2022-05-19 19:55:35 +08:00
Luo Cheng
d9d934bf86 [WA] bf16 crash due to MemoryInput/Output 2022-05-19 19:55:35 +08:00
Li, Tingqian
c302bc94da Add cpuDebugFuncTests target 2022-05-19 03:59:19 +03:00
Li, Tingqian
c20d762af8 Fix ConcatConvSumInPlaceTest 2022-05-19 02:55:58 +03:00
Luo Cheng
f5ea549d97 fix gemm bf16 fail 2022-05-18 10:40:35 +08:00
Luo Cheng
b2ba3d5055 add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail 2022-05-18 08:08:49 +08:00
Luo Cheng
a9104e1a88 add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail 2022-05-17 19:51:24 +08:00
Luwei Zhou
fbf241d9d8 WR to disable the LPT multiplyToGroupConv test because the transformation was disabled in d5e16f 2022-05-16 11:27:33 +08:00
Zhang Yi3
63b283fe88 [WA] remove invalid FQ tests 2022-05-16 15:43:26 +08:00
Luo Cheng
167f74a7bc fix avx2 groupconv accuracy problem 2022-05-16 07:24:33 +08:00
Luo Cheng
36be40c7e9 fix gemm bf16 win crash 2022-05-14 11:58:25 +08:00
Luo Cheng
543963bf8d test subgraph added nhwc format check 2022-05-13 19:07:23 +08:00
Luo Cheng
901210fa6c testcase conv maxpool will check brgconv instead of jit 2022-05-13 18:50:39 +08:00
Luo Cheng
eb87aacc49 [WA] Remove the moc fail case by #af4731a1 2022-05-13 18:17:10 +08:00
Luo Cheng
a56e1a9c4b group deconv may crash on memory out of bound 2022-05-13 17:26:16 +08:00
Luo Cheng
11238a504c Merge pull request #43 from liubo-intel/liubo/onednn_2.6_bugfix
fix CPU 'ConvolutionBackpropDataLayerTest' fail issue
2022-05-13 13:59:24 +08:00
Luwei Zhou
6955c389d6 [WR] Removed the failed ReduceMean tests caused by 21f3555. 2022-05-13 11:42:38 +08:00
liubo-intel
9e9e3dd01b reopen 'FuseDeconvolutionAndSimpleOperation' Transform to fix CPU 'ConvolutionBackpropDataLayerTest' fail issue 2022-05-13 11:31:21 +08:00
Zhang Yi3
3bfc042a35 fix xmm zero check 2022-05-12 23:52:31 +08:00
Luo Cheng
4696b9e58a [WA] make cpu case to run completed 2022-05-12 15:35:16 +08:00
Luo Cheng
e17179d795 Merge remote-tracking branch 'upstream/master' into luocheng/onednn_2.6 2022-05-06 13:38:39 +08:00
yanlan song
912f40e74d stateful interface impl for AUTO/HETERO (#11590)
* CPU for stateful model

Signed-off-by: fishbell <bell.song@intel.com>

* log

Signed-off-by: fishbell <bell.song@intel.com>

* hetero impl

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>
2022-05-06 12:43:56 +08:00
guozhong wang
870455675c add cumulative_throughput for python (#11195) 2022-05-06 12:43:41 +08:00
guozhong wang
0a65f5f607 selectdevice returns MULTI:device in cumulative_throughput (#11367)
* selectdevice returns MULTI:device in cumulative_throughput

* load multi with throughput and disable cpu helper in cumulative

* disable cpu helper in cumulative_throughput

* add cumulative to benchmark_app help message

* modify benchmark_app.hpp clang-format
2022-05-06 12:42:59 +08:00
Andrew Kwangwoong Park
caccde6a82 [GPU] Add TC for MXNet-style NMS model(decrease_label_id) (#10624)
- Add TC for decrease_label_id=true to cover MXNet-style NMS models
- Fix segfault issue that occurs when data precision is fp16

Signed-off-by: Andrew Kwangwoong Park <andrew.kwangwoong.park@intel.com>
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-05-06 10:36:46 +09:00
Artur Kulikowski
1331eabbed Getting of python version depends on the used shell (#11614) 2022-05-05 23:42:54 +02:00
Katarzyna Mitrus
50287625d7 [Eye-9] Python API for Eye-9 (#11552) 2022-05-05 16:16:36 +02:00
mei, yang
2d0ffd8fe5 Specify GenerateProposals-9 (#11004) 2022-05-05 10:28:18 +02:00
Bo Liu
d560cf19a3 Specify ROIAlign-9 (#11067) 2022-05-05 10:27:58 +02:00
cecilia peng
e68613a2fc Specify MulticlassNonMaxSuppression-9 operation (#11083) 2022-05-05 10:27:47 +02:00
Mateusz Tabaka
68ef1555bc Enable test_loop_simple_precommit on GPU (#11625) 2022-05-05 09:55:02 +02:00
Tomasz Jankowski
0babf20bd2 Remove unused map initialization (#11621)
Details:
- Unused variable removed.
- Added pytest markers declaration to avoid useless warnings.
Tickets:
70158
2022-05-05 00:50:34 +02:00
Mateusz Tabaka
b92e10ff6e Fix setting default affinity property in cpuFuncTests (#11609)
CPU plugin sets default affinity to HYBRID_AWARE
if it's running on AlderLake, so we need to reflect that
in cpuFuncTests.
2022-05-04 12:42:43 +02:00
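(Editor's note, not part of the commit: the fix above hinges on the CPU plugin defaulting its affinity to HYBRID_AWARE on Alder Lake. A minimal, hedged sketch of reading that property with the 2.0 runtime API is shown below; the `ov::affinity` property and the "CPU" device name are standard OpenVINO 2022.1 API, but the surrounding test logic is purely illustrative.)

```cpp
// Hedged sketch: query the CPU plugin's effective affinity with the 2.0 property API.
// On Alder Lake hybrid CPUs the default is expected to be HYBRID_AWARE.
#include <openvino/runtime/core.hpp>

int main() {
    ov::Core core;
    ov::Affinity affinity = core.get_property("CPU", ov::affinity);
    if (affinity == ov::Affinity::HYBRID_AWARE) {
        // Tests on hybrid hardware must expect this default rather than CORE/NUMA.
    }
    return 0;
}
```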
Tetiana Gubanova
7dd8fbd47e [GPU] Einsum with repeated labels and ellipsis support (#11615)
* Einsum test helper

* Einsum single layer tests

* Add Einsum decomposition with repeated labels and ellipsis support
to GPU transformations pipeline

Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
2022-05-03 20:46:22 +09:00
opoluektov-lohika
1ed03bbe6b Fix conformance test runner handling of input directories (#11611)
Check first whether the path specified by --input_dirs is a directory.

Otherwise the argument is always treated as a .lst file,
and if it is a directory it silently fails,
which causes the test runner to not execute any of the intended tests.
2022-05-02 22:00:21 +09:00
Artur Kulikowski
26aa197cb1 Expand ONNX functions in Graph ctor (#11599)
* Expand ONNX function in constructor of class Graph

* Add test to expanding function GreaterOrEqual inside If op
2022-05-02 10:12:41 +02:00
Jan Iwaszkiewicz
8d221d1e06 Add lower bound for wheel package (#11595) 2022-04-28 13:02:46 +02:00
Sebastian Golebiewski
844fbde328 Update installing-openvino-windows-header.md (#11221)
* Update installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-windows-header.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-04-27 13:09:31 +02:00
Karol Blaszczak
f5781b1255 DOCS-cpu_language_review-port (#11537)
* DOCS-cpu_language_review

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-04-27 13:07:38 +02:00
Luo Cheng
bacc15c275 cherry pick from 2.7 to 2.6 2022-04-25 18:59:07 +08:00
Luo Cheng
04fabe7b20 rebase onednn master 2022-04-25 15:58:51 +08:00
Luo Cheng
bd09a6a218 fix compiler error 2022-04-25 15:58:51 +08:00
dmitrygo
21f3555f59 [CPU][WA] Enabled ReduceSum -> AvgPool transformation due to perf issues 2022-04-25 15:58:51 +08:00
dmitrygo
eae4782284 [CPU] Optimize processing for FQ + Sum + FQ post ops pattern 2022-04-25 15:58:51 +08:00
Vladislav Golubev
af4731a1f1 [WA] remove layout compatibility check that leads to the false-positive exceptions 2022-04-25 15:58:51 +08:00
Vladislav Golubev
cd4150c8ef [WA] Add node name if tensor names are empty 2022-04-25 15:58:51 +08:00
dmitrygo
5e1a5aef3e [CPU] Optimize post ops processing 2022-04-25 15:58:51 +08:00
dmitrygo
8ee5514629 Fixed FQ post op optimization 2022-04-25 15:58:51 +08:00
dmitrygo
7e4539f6df [CPU][WA] Disabled Deconvolution + post ops fusing optimization 2022-04-25 15:58:51 +08:00
dmitrygo
3bebf4a76d [CPU] Enabled I8 precision on activations for Convolution node 2022-04-25 15:58:51 +08:00
dmitrygo
d5e16f7844 Post ops optimizations 2022-04-25 15:58:51 +08:00
dmitrygo
838f71eb9a [CPU] Enabled brconv implementation 2022-04-25 15:58:51 +08:00
dmitrygo
81b1fbd5c1 Migrate on OneDNN 2.7 2022-04-25 15:58:51 +08:00
Mateusz Tabaka
54e5af95da Disable two cases from smoke_Conv_3D_FP32_fusingScaleShiftAndFakeQuantizePerChannel (#11536)
They sporadically fail in precommit.
Ticket: 84153
2022-04-20 14:34:46 +02:00
Mateusz Tabaka
e53f702f81 Handle negative axis in SimplifySecondInputOfReshape (#11524)
Fixes #11501
2022-04-19 12:20:21 +02:00
Karol Blaszczak
22398ac9cd sphinx google search (#11439) (#11506)
porting from 22.1 as per Andrey's request from 04.08
* sphinx google search

* fixes

* fixes

* fix version tabs

Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-04-13 12:39:36 +02:00
Karol Blaszczak
1b5756a4d7 Docs benchmarktool python correction - port (#11505)
* DOCS-benchmarktool_python_correction

add info on tool installation

* Update docs/OV_Runtime_UG/Samples_Overview.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-04-13 11:14:39 +02:00
Karol Blaszczak
1e1735b022 Fixed operation names (#11447) (#11507)
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-04-13 11:14:24 +02:00
Alexandra Sidorova
9dffa706fb Added shell for Eye-9 (#11210) 2022-04-12 12:49:07 +02:00
Przemyslaw Wysocki
76610393a0 [PYTHON] Change dependabot schedule (#11497) 2022-04-12 11:17:29 +02:00
Katarzyna Mitrus
5bedbbe05d [ONNX] Add Scan operator to ONNX Frontend (#11053) 2022-04-12 10:35:15 +02:00
Karol Blaszczak
e1cd7bfc5b DOCS-review GPU article changes (#11477)
As per ticket #CVS-80053
int8 link removed
2022-04-12 10:08:10 +02:00
Phillip Schmidt
4da0941cd2 Update README.md (#8048)
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-04-06 06:51:09 +03:00
Luo Cheng
e72d32065c [FrontEnd] add assign, meshgrid, expand_v2 for paddle faster_rcnn model (#10627)
* assign, meshgrid op support

* fix review comments

* add copyright message

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-04-06 09:43:18 +08:00
Alexander Zhogov
9f01380558 Azure CI: Set master ref branch (#11475) 2022-04-05 22:23:07 +03:00
Ilya Lavrenov
c2703c81f6 Don't install nlohmann_json_schema_validator for samples (#11446)
* Try to improve gflags

* Try to improve gflags: part 2

* Tried to use dependencies on system

* Use nlohmann_jsonConfig from system

* Enabled nlohmann_json from system

* Improvements

* handle system gflags in developer package

* Simplifications

* Simplify dependency management

* Corrected package names

* Fixed subgraphsDumper configure stage

* Try to fix rhel8

* Try to fix macosx

* Fixed VPUX build

* Fixed aliasing issues

* Suppress some warnings

* export gflags when build it

* Fixed some LTO

* Try to fix Mac

* revert

* use gflags as private dependency

* Aligned targets in developer package

* Fixed frontends tests build on U20 with LTO

* Passed

* Don't use pkg_search_module(zlib ..) during cross-compilation

* Removed unused variables

* Fixed finding of zlib during cross-compilation

* CVS-83529

* Use nothreads_static

* Fixed python
2022-04-05 19:16:32 +00:00
Vladimir Paramuzov
050e2e518d [GPU] Replaced tensor dims usages with layout methods calls (#10984) 2022-04-05 21:49:34 +03:00
Irina Efode
01f530d443 API Conformance refactoring (#11421)
* ie_infer_request + ie_exec_net

* build

* build

* ov_compiled_model

* ov_infer_request

* ov_plugin

* ie_plugin

* build
2022-04-05 18:43:36 +03:00
Irina Efode
3f953b37c1 Reduce conformance execution time (#11416) 2022-04-05 18:40:33 +03:00
Jan Iwaszkiewicz
8e9fb18882 [PYTHON] Properties API, improvements of OVAny (#11389)
* WIP for POC

* Bulkwork

* Clean up current solution

* Extend Core methods

* Refactor Any python converts

* add submodule to runtime

* Add initial tests

* Fix copy-paste error

* Extend test

* Improve casting options and move common parts to utils

* Add newline

* Fix properties, remove class approach, move to all string Properties API

* Fix codestyle

* Fix pystyle

* ResolveTODOs, better align and extend python api

* Add extended test cases

* Fix properties in one of compile_model overload.

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

* Update src/bindings/python/tests/test_inference_engine/test_properties.py

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
2022-04-05 18:01:36 +03:00
Chenhu Wang
fca2595293 Specify NonMaxSuppression-9 (#10729)
* define_NMS-9_specification

* default value as true for soft-nms-supressed-by-iou

* soft nms support

* comments apply
2022-04-05 15:58:02 +03:00
Karol Blaszczak
bff9769dd2 DOCS-transitionguide_name_correction (#11450)
OpenVINO™  2.0 => OpenVINO™ API 2.0
2022-04-05 13:33:28 +02:00
Przemyslaw Wysocki
55497a12d8 [PYTHON] Add method to_dtype() to Type class (#11433)
* Add Type method to_dtype()

* Clang formatting

* Add Type __init__ binding

* Add Type constructor

* Add numpy types classes to test
2022-04-05 14:00:02 +03:00
Ilya Churaev
86495ceb0f Update readme content (#11201)
* Update readme content

* Updated HW support matrix

* Update README.md

* Update doc links to nightly

* Update README.md

* Update README.md

* Update README.md

* Add logo

* Update README.md

* fixed links

* Update README.md
2022-04-05 13:09:22 +03:00
Andrey Noskov
3a36d90c11 [GNA] Moved PWL functional tests (#10731)
* Moving PWL to ngraph

* improving the running time of php_search; refactoring the pwl operation

* fixed errors & refactored code

* moved PWL op to GNA

* Update src/plugins/intel_gna/ops/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/reference/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* fixed compilation error

* Update inference-engine/tests/unit/gna/ngraph/transformations/gna_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* added some tests; changed algorithm of checking accuracy of pwl; refactoring

* added first and last segments; added fq and fixed errors

* fixed after review & rewrote some tests on ngraph

* removed debug logs & fixed code style check error

* s/ngraph_helper/ngraph_util

* removed TRANSFORMATIONS_API in PWLApproximation class declaration

* removed OPENVINO_API in Pwl class declaration

* replaced the deprecated version of evaluate() with a new one

* fixed some problems after reviewing

* fixed a problem when a value of function of left point of segment is less than minimum of function

* corrected a value of the right point of last segments

* [GNA] Moved pwl func tests

* Deleted deprecated test

* s/OPENVINO_RTTI/OPENVINO_OP

* Deleted conflicted test file

* fixed after review

Co-authored-by: Dmitrii Khurtin <dmitrii.khurtin@intel.com>
Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
2022-04-05 11:43:24 +03:00
Maxim Gordeev
9c8a6aacb7 [IE Samples] Activating new parameter is compact mode(memory_reuse) in speech sample (#11405)
* [IE Samples] Activating new parameter is compact mode(memory_reuse) in speech sample

* changed format

* renamed the option to memory_reuse

* renamed the option
2022-04-05 11:31:05 +03:00
Andrey Noskov
a018298023 [GNA] Deleted duplicated config test (#10601) 2022-04-05 11:27:01 +03:00
Maksim Doronin
4edf85c928 [RT][VPU]: Fixes for dynamic models (#10826)
* DynamicShapeResolver is able to save information about dynamic output in order to pass it in INFER_DYNAMIC_SHAPE mode. Previously, it propagated fully dynamic output shape (however ranks were equal) and dynamic Convolutions and Poolings were performed incorrectly. Now in the case of dynamic batch, DSR propagates only dynamic batch and Convolutions and Poolings are performed properly as a Loop of single-batch operations.
* Fixed dynamicToStaticShapeTranspose transformation. There was a bug: transposition indices could not be applied with Scatter because the formula is not applicable for this. Replaced with Gather.
i.e. the shape of the output tensor of Transpose with transposition indices [0,3,1,2] (NHWC [1, 224, 224, 3] -> NCHW [1, 3, 224, 224]) was calculated by ScatterElementsUpdate. So output_shape[transposition[i]] = input_shape[i] and the result was output_shape=[1, 224, 3, 224], which was wrong. Vice versa, Gather does output_shape[i] = input_shape[transposition[i]] and the result is [1, 3, 224, 224], which is right.
* MaxPool and AvgPool can be sliced for loop in case of dynamic batch
* Convert stage for inputs is not inserted in the VPU model in the case of OV API 2.0. It did not cause a problem with non-dynamic functions because Graph Transformer has a pass to eliminate redundant converts (u8->f16, ~f16->f16~). In the case of dynamic inputs, yet another inserted Convert breaks data<->shape relations.
2022-04-05 11:19:43 +03:00
Egor Shulman
ed190374fd [CPU] Added support of 'Batched' memory type (#10909) 2022-04-05 09:20:19 +03:00
Ilya Lavrenov
4ad20fb53f Use system dependencies (#11419)
* Try to improve gflags

* Try to improve gflags: part 2

* Tried to use dependencies on system

* Use nlohmann_jsonConfig from system

* Enabled nlohmann_json from system

* Improvements

* handle system gflags in developer package

* Simplifications

* Simplify dependency management

* Corrected package names

* Fixed subgraphsDumper configure stage

* Try to fix rhel8

* Try to fix macosx

* Fixed VPUX build

* Fixed aliasing issues

* Suppress some warnings

* export gflags when build it

* Fixed some LTO

* Try to fix Mac

* revert

* use gflags as private dependency

* Aligned targets in developer package

* Fixed frontends tests build on U20 with LTO

* Passed

* Don't use pkg_search_module(zlib ..) during cross-compilation

* Removed unused variables

* Fixed finding of zlib during cross-compilation
2022-04-05 04:47:22 +03:00
Ilya Lavrenov
90366c2c60 Fixed detection of sample type c / cpp (#11444) 2022-04-05 01:57:10 +03:00
Vladislav Volkov
d654071b51 CPU Plugin refactoring: Transition from Intel MKL-DNN to oneDNN (#11023) 2022-04-05 01:10:53 +03:00
Yegor Kruglov
74638251e8 [MO] Support TensorFlow FusedBatchNorm with channel_first data_format (#11084)
* layout fix in FusedBatchNorm decomposition

* added tests
2022-04-05 00:13:58 +03:00
Anton Chetverikov
c1dc71ce28 [MO] Fix IndexError inside ScatterNDUpdate shape inference function (#11220)
* Restore inputs order in IR Reader

* Add WA to numpy ndarrays IndexError

* Add comments to code

* Add unit test
2022-04-04 23:59:24 +03:00
Svetlana Dolinina
c75bc65b83 added recursive run for transformation to fix fp16 IR with Interpolate inside If/Loop/TI (#10905)
* added recursive run for transformation to fix fp16 IR with Interpolate inside If

* added test for interpolate inside If

* remove useless variable

* fixed transformaion for divide

* fix code style

* commit auto change

* review fix

* add test for recursive call of divide marks

* removed empty line
2022-04-04 23:33:59 +03:00
Ilya Churaev
60f9f3dc92 Enabled LTO for static CPU (#11426)
* Enabled LTO for static CPU

* Update CMakeLists.txt

* Update CMakeLists.txt
2022-04-04 20:22:22 +03:00
Maxim Shevtsov
500d36e1c0 cherry-picking opt guide changes from the release branch (#11430) 2022-04-04 19:41:17 +03:00
Mikhail Letavin
417d75d80b [GPU] Fix allocation for cached USM remote blobs (#11304) 2022-04-04 18:16:53 +03:00
Fedor Zharinov
787610b0db -l option is replaced with -extensions (#10878) 2022-04-04 16:08:38 +03:00
Elizaveta Lobanova
9b4e8f5b59 [GNA] Fixed cascade concats binding (#11326) 2022-04-04 15:56:13 +03:00
Karol Blaszczak
da8388e263 [DOCS] polish autodevice article (#11171) (#11427)
the article has been changed much and its language has been impacted in the process. Here are some corrections.
2022-04-04 13:12:18 +02:00
Gleb Kazantaev
a52092deb0 Enable FQ fusions in MOC (#11269)
* Enable FQ fusions in MOC

* Fix codestyle
2022-04-04 13:34:01 +03:00
Edward Shogulin
542a374c40 [LPT] Introduce new quantization mode attribute (#11380) 2022-04-04 13:27:03 +03:00
Ilya Churaev
b9ba0bb40c Removed OV_NEW_API (#11082)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-04-04 13:07:12 +03:00
Dmitry Pigasin
80e7857eca [Python Speech Sample] Change argument format (#11012)
* Change `-i` argument format

* Change `-sf` argument format

* Change `-o` and `-r` argument format

* Fix saving file with multiple utterances

* Fix flake8 D415

* fix scale factor for imported models
2022-04-04 13:03:39 +03:00
Roman Kazantsev
9dee25fa79 [MO] Support TensorFlow Grouped Conv2DBackpropInput (#11420)
* [MO] Support TensorFlow Grouped Conv2DBackpropInput

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Correct computation of group number for ConvBackpropInput operation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix get_conv_backprop_groups function

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Add unit-tests for Deconvolution shape inference

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-04-04 12:30:31 +03:00
Chenhu Wang
60521a92c9 [CPU] Fixed NMS implementation on APL targets (#10649) 2022-04-04 10:52:22 +03:00
Maxim Andronov
65a182aaea [CPU] Extract weight cache to executable network (#11118) 2022-04-04 10:47:58 +03:00
Vladimir Paramuzov
afdaa7cf89 [GPU] Align permute axis format with IE (#11379) 2022-04-04 10:28:51 +03:00
Ilya Lavrenov
d879e34363 Tbb: download only if system libraries are not found (#11415)
* Download custom TBB on demand

* Download TBBBind on demand

* Fixed install steps

* FIxes
2022-04-03 19:55:54 +03:00
Edward Shogulin
5d821453ae [LPT] Introduce new granularity attribute instead of OperationPerTensorQuantizationRestriction (#11330) 2022-04-03 19:35:04 +03:00
Ilya Lavrenov
29fb8c79b1 Don't use template plugin unconditionally (#11409) 2022-04-02 11:40:45 +03:00
Ilya Lavrenov
4fcc18c00e Tbb 2018 and older usage (#11411)
* fixed TBB

* Fixed compilation with old TBBs

* Fixed installation for custom provided TBB
2022-04-02 11:11:13 +03:00
Ilya Znamenskiy
1e4a1b2b4a [GPU] Klockwork issue 57997 fix (#10956) 2022-04-02 11:06:56 +03:00
Roman Kazantsev
1a288c2e99 Enable MO unit-tests but bom tests (#11399)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-04-02 10:58:23 +03:00
Ivan Mikhalev
6105ea3902 [CPU] [DEBUG CAPS] Revert DNNL_VERBOSE (#11410)
Compilation with ENABLE_CPU_DEBUG_CAPS was fixed.
Previous to this change it failed due to undefined dnnl::impl::md2dim_str
(since DNNL_VERBOSE was disabled in the scope of PR #11244).
2022-04-02 10:57:29 +03:00
Andrey Zaytsev
415daecc26 Cherry-pick Feature/azaytsev/doc fixes 2022 1 1 (#11388) (#11407)
* Removed a redundant image

* Fixed ops specifications and other issues

* converted html links to anchor links

* converted html links to anchor links

* Fixed a link

* Fixed a link

* Changed anchor links according to dev review
# Conflicts:
#	docs/OV_Runtime_UG/Operations_specifications.md
2022-04-01 19:53:58 +03:00
Ilya Lavrenov
8ae7c9f2cc Disabled TBBBind usage for oneTBB (#11386) 2022-04-01 19:09:06 +03:00
Maxim Gordeev
2388f3b976 Updated docs for python's version of hello_reshape_ssd (#11401) 2022-04-01 18:21:40 +03:00
Vladislav Golubev
a02b3f4995 [Transformation] MarkPrecisionSensitiveDivides extending to mark fp32 divides (#11391)
* MarkPrecisionSensitiveDivides: fp32 divides marking enabled

* ConvertDivide: added a negative test-case with fp32 divide on precision sensitive subgraph
2022-04-01 17:31:23 +03:00
Maksim Derbasov
56df3962e3 Fix for warnings spotted by clang compiler (#11384) 2022-04-01 16:10:51 +03:00
Maxim Andronov
3d92c8c4c7 [CPU] Avoid inserting reorder after RNN in native order case (#10799) 2022-04-01 16:02:50 +03:00
Nikita Semaev
fca159293d Fix Bucketize Conformance tests for Template plugin (#11029)
* Right fill in the values of the inputs

* Using create_and_fill_tensor_unique_sequence() instead of create_and_fill_tensor()

* Fixing a problem with a missing parameter when calling the create_and_fill_tensor method

* Fix Bucketize Conformance tests inputs generation for Template plugin

* Correct filling of the first port (data)
2022-04-01 15:22:45 +03:00
Andrey Zaytsev
cad355a03e Docs labels adjustment (#11227) (#11294)
* Adjusted documentation labels

* Renamed images

* fix doc tests

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
# Conflicts:
#	docs/IE_PLUGIN_DG/ExecutableNetwork.md
2022-04-01 15:06:55 +03:00
Anton Grishin
7efc85063b [GNA] Add GRUCell/GRUSequence/LSTMSequence support (#11333)
* Add grucell/gruseq/lstmseq unrolling

Add tests

* remove bidirectional decomposition

* completly remove bidirectional_sequences_decomposition
2022-04-01 14:16:11 +03:00
Nikita Semaev
dc55f8bb5a Correct the order of passing arguments to the InputGenerateData constructor (Fix Round, Ceiling Conformance tests for Template plugin) (#11099)
* Correct the order of passing arguments to the InputGenerateData constructor

* Full range correction for random numbers

* Refactoring the argument sequence of the InputGenerateData class constructor

* A small imperfection

* Rollback changes that are related to range
2022-04-01 13:42:10 +03:00
Alexey Lebedev
6eaa15745a [PYTHON API] Tensor.data property for low precisions + packing (#11131)
* rebase old branch with master

* Fix doc style

* fix test

* update tests

* Add missed param

* Rewrite docstring for tensor and refactor set_input_tensors test

* update python exclusives

* keep compatibility

* remove notes about slices

* fix code style

* Fix code style
2022-04-01 12:04:04 +03:00
Karol Blaszczak
701d75eafa [DOCS]continue_language_review-transitionguide (#11177)
PR for 22.1 made, now porting to release...
some discrepancy between this version and the 22.1 branch seems to exist, so I adjusted the conflicting link to avoid build check errors...

the overview has been merged, the remaining articles are reviewed here
2022-04-01 17:03:40 +08:00
Ilya Churaev
80739700ff Added clone method for ov::Model (#11390)
* Added clone method for ov::Model

* Changed python API
2022-04-01 10:52:31 +03:00
Ilya Churaev
8ab5dbade0 Revert "Add constant folding to hetero to avoid dynamism on GPU (#10572)" (#11370)
This reverts commit 5b18677f1b.
2022-04-01 10:16:14 +03:00
Bo Liu
070f27a089 Paddle FasterRCNN Ops Conversion: roi_align, strided_slice, where (#10893)
* Paddle FasterRCNN Ops Conversion: roi_align, strided_slice, where

* add check for 'aligned' feature of 'roi_align' op; use common function for idx_node in 'strided_slice' op

* Apply suggestions from code review

* use common function for stride_slice and slice, OP_CHECK for 'where' op conversion

* Apply suggestions from code review
2022-04-01 14:37:28 +08:00
yanlan song
4057e408d8 Bell/shape auto (#11284)
* Fix batchability check of MAX_BATCH_SIZE

* Applied review comment

* clonenetwork in auto

Signed-off-by: fishbell <bell.song@intel.com>

* clone in correct way

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2022-04-01 11:09:22 +08:00
Mikhail Nosov
e52bd441e2 Frontend exception safety (#11368)
* Frontend exception safety

Every call to the frontend's API (except Places) can throw an exception. If, during exception handling, the FrontEndManager is destroyed and calls 'dlclose' for the plugin, the call stack will be corrupted and a crash will occur.

The solution is to wrap 'plugins' calls with try/catch and throw a new exception in the 'openvino' context.

TODO: currently "Place" objects don't have 'actual' wrappers, so an exception in 'place' objects can potentially cause such a crash (if the exception handler destroys the FrontEndManager). The workaround for the user is to try/catch any calls of the Place API on their side.
We're not expecting users to use the Place API directly, so this workaround looks acceptable

* Add check for exception message

* Keep type of frontend exception during rethrow

* IR FE tests: don't expect InferenceEngine::exception as it be not propagated as is by FrontEndManager
2022-03-31 22:23:40 +03:00
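(Editor's note: the wrap-and-rethrow approach described in the entry above can be sketched generically. The helper below is hypothetical, not the actual OpenVINO frontend code; the point is that the caught exception is re-created in the caller's context, so unwinding no longer depends on plugin code that may already be unloaded via dlclose().)

```cpp
// Illustrative sketch only: guard every call into a dynamically loaded plugin
// and rethrow a new, caller-owned exception carrying the original message.
#include <iostream>
#include <stdexcept>
#include <string>
#include <utility>

template <typename Callable>
auto call_plugin_guarded(Callable&& fn) -> decltype(std::forward<Callable>(fn)()) {
    try {
        return std::forward<Callable>(fn)();
    } catch (const std::exception& e) {
        throw std::runtime_error(std::string("FrontEnd API call failed: ") + e.what());
    } catch (...) {
        throw std::runtime_error("FrontEnd API call failed with an unknown error");
    }
}

int main() {
    try {
        call_plugin_guarded([] { throw std::runtime_error("failure inside plugin"); });
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';  // message now lives in a caller-owned exception
    }
    return 0;
}
```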
Anastasia Kuporosova
dd54cb9c17 [Python API] Remove old api class from the new api (#10470)
* [Python API] Remove old api class from the new api

* start working on refactoring of OVAny

* fix tests

* fix code-style

* remove tuple test

* fix test

* fix omz hash

* one more overload

* fix pyfloat

* move from_ov_any to utils

* code-style

* move function from common to utils
2022-03-31 21:57:05 +03:00
Elizaveta Lobanova
d3060d4bcc [GNA] Fixed handling of unaligned crop layer (#11316) 2022-03-31 20:03:51 +03:00
Vladimir Paramuzov
1cb254307e [GPU] Gather params update (#11369) 2022-03-31 19:46:38 +03:00
Ilya Lavrenov
3c724a1dee Build with system TBB (#11244)
* Build with system TBB

* Fixes

* Check whether system TBB is available

* Try to fix ONNX Runtime build with system TBB

* Test

* Fixed compilation of threading.cpp

* Fixed unset of cache dirs

* Limit search paths of TBB

* Try to enable pip packages with custom TBB

* Fix for TBB 2021.2

* Install only needed TBB libraries

* Install TBB from system to pip package

* Reverted usage of TBBROOT

* Fixed oneTBB case

* Try to fix Android

* Escape some paths

* Added samples path

* Fixed TBBBind usage for case of system TBB
2022-03-31 18:05:59 +03:00
Ekaterina Aidova
d99104cf55 [OMZ]: update submodule (#11305) 2022-03-31 17:53:41 +03:00
Alina Kladieva
dc83410cd7 Revert "Skip sporadic GPU canInferOnUserQueue test case (#11310)" (#11362)
This reverts commit 458378e9e7.
2022-03-31 17:38:03 +03:00
Vladimir Paramuzov
15b4553eaf [GPU] Align OneHot primitive parameters with ngraph (#11361) 2022-03-31 17:13:49 +03:00
Alexey Lebedev
1efb0a034f [PYTHON API] release GIL (#10810)
* AsyncInferQueue nogil update + refactoring

* nogil in compiled model

* nogil in Core

* fix refactoring

* nogil in infer_request

* add tests

* Fix code style

* update test with incrementing reference counting

* try to fix code style

* fix code style

* release gil in reshape and preprocessing

* make args optional in test

* fix code style

* add docs about GIL

* try to link doc string with docs

* Apply suggestions from code review

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Fix docs

* docs refactoring

* Apply review comments

* Fix code style

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-31 16:12:48 +03:00
Maxim Andronov
1d247815be Don't execute reference::strided_slice if input/output tensor is empty (#11337) 2022-03-31 15:42:10 +03:00
Alexandra Sidorova
9185f03e77 Added specification for Eye-9 (#11104)
* Added specification for EyeLike-9

* Update docs/ops/generation/EyeLike_9.md

* removed batch from TF

* minor fix

* Applied comment by Anton

* Added new example with dynamic output, added corner case

* Fixed corner case description

* Rename matrix

* applied comments by Yuan

* Added diag_idx as input, minor fixes, renaming

* added support of batch_shape from TF

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2022-03-31 14:46:55 +03:00
Smirnov Grigorii
a87e8f7880 moved TransformationsTestsF method's definitions from .hpp to .cpp (#11359)
* moved

* fix style
2022-03-31 14:07:41 +03:00
Pavel Esir
16a5962698 [MO] pad fusing fix (#10453)
* pad fusing fix

* added unit-tests for pad fusing fix

* fixed port reconnecting

* Update tools/mo/openvino/tools/mo/middle/passes/fusing/mark_unfused_nodes.py

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-31 14:07:07 +03:00
Smirnov Grigorii
763d522759 add graph_comparator tests (#11360) 2022-03-31 14:06:36 +03:00
Mikhail Ryzhov
a9853d2790 [GNA] Additional tests on compact mode (#10969)
* Moved InitGNADevice to plugin constructor

* Added tests for ordering layers

* Added allocator header

* Fixed fused_iterator header

* protected GNAMemRequestsQueue properties

* Fixed unit test names

* Fixed compile issue

* Fixed default initialization

* Fixed depricated matchers

* Fixed pwl deprecated tests

* Added page alignment

* Reset gnadevice in the tests

* Update src/plugins/intel_gna/gna_fused_iterator.hpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

* Revert "Update src/plugins/intel_gna/gna_fused_iterator.hpp"

This reverts commit d624bdadaf.

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2022-03-31 13:56:25 +03:00
Elizaveta Lobanova
3578ee9c3f [GNA] Remove extra FQ layers from the final network (#10599)
* [GNA] Fuse all FakeQuantize layers with their previous layers

* [GNA] Fuse FQ with previous layer if it's not required for precision change

* [GNA] Fixed MatMulOverloadCorrectionTest
2022-03-31 13:21:27 +03:00
Sergey Shlyapnikov
79e3272237 [GPU] Update eltwise calc_output_layout function and prevent output layouts invalidation after adding reorders for weights (#11073) 2022-03-31 13:15:47 +03:00
Sergey Shlyapnikov
f2af1ef88a [GPU] Update memory location to __local in GPU Detection Output (#11209)
* [GPU] Update memory location to __local in GPU Detection Output

* Replace hardcoded stack size with JIT constant
2022-03-31 13:13:58 +03:00
Egor Shulman
23476c8eee CC for LPT transformation call in CPU plugin (#11341)
* Use CC in LPT

* Applied comment
2022-03-31 12:35:27 +03:00
Przemyslaw Wysocki
f45ca99de6 [PYTHON] Add ov::clone_function bindings (#11331)
* Add clone_function bindings

* Add clone_function binding tests

* Debugging changes

* Minor changes

* Code style

* Add an assert

* Update src/bindings/python/tests/test_ngraph/test_basic.py

* Update src/bindings/python/src/pyopenvino/graph/model.cpp

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-31 12:14:51 +03:00
Alina Kladieva
7d0750fa2a Add SKIP_IF_CURRENT_TEST_IS_DISABLED macros for needed cases (#11335) 2022-03-31 11:12:19 +03:00
Ilya Lavrenov
c795382b1f Configurable OpenCL usage in BA (#11344) 2022-03-31 10:49:40 +03:00
Maxim Gordeev
fad66d8442 [IE Samples] New command line parameters format for speech sample (#11051)
* New command line parameters format for speech sample

* fixed notes

* changed format for scale factor

* changed format for scale factor in tests

* added more variants when a name is directly specified for i/o/r, like it is done for sf

* removed nthreads flag

* fixed notes

* changed output params

* updated tests with new format

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-03-31 10:09:31 +03:00
Daria Mityagina
5c917cfaaa [ICV][XLink] - port XLink changes from mdk (#10212)
-76384
Port changes from MDK
2022-03-31 09:50:26 +03:00
Sergey Shlyapnikov
24a74672f6 [GPU] Fix remote blobs tests (#11349) 2022-03-31 09:04:46 +03:00
Vladimir Paramuzov
f5f93cfbeb [GPU] Strided slice primitive params update (#11339) 2022-03-31 08:54:05 +03:00
Ilya Churaev
3e58ccbce7 Fixed evaluate for ov::Tensor (#11354)
* Fixed evaluate for ov::Tensor

* Fixed old ops with EvaluationContext
2022-03-31 07:47:49 +03:00
Anton Pankratov
78285f9db4 Added ov::NotImplemented Exception (#11124)
* Added ov::NotImplemented Exception

* add ie namespace

* Try fix
2022-03-31 07:36:13 +03:00
Oleg Pipikin
88e20199f0 Fix query network for hetero plugin (#10556)
* Fix query network for hetero plugin

* Apply comments

* Fix1

* Add tests

* Apply comments 2

* Apply comments 3
2022-03-31 07:24:46 +03:00
Anastasia Kuporosova
d107cec39f [Python API] Update names in Model class (#11348)
* [Python API] Update names in Model class

* fix code-style

* remove from_capsule in the new API
2022-03-31 00:33:57 +03:00
Anastasia Kuporosova
4c7050f6a9 [Python API] Improve configuration files (#10960)
* [Python API] Improve configuration files

* fix config files

* update setup.cfd + change quotes

* move all codestyle checks to py_checks job

* update requirements_test.txt

* fix  codestyle according to flake-docstring

* fix

* fix mypy

* apply comments
2022-03-30 20:26:36 +03:00
Oleg Pipikin
be6db5d69a Fix for str_to_container if string value has whitespaces (#10224)
* Fix for str_to_container if string value has whitespaces

* Add test

* Add trim for leading and trailing whitespaces

* Apply comments

* Apply comments 2

* Apply comments 3
2022-03-30 19:48:29 +03:00
Alexey Lebedev
c8720f122d [PYTHON API] lifetime test for CompiledModel and extension (#11120)
* Add test for lifitime and extensions

* Remove ExtendedModel
2022-03-30 19:43:07 +03:00
Mateusz Bencer
7a0d85a067 remove resize asserts (#11234) 2022-03-30 19:13:55 +03:00
Mikhail Nosov
a635150b9d [IE Common] Enable explicit TBlob declaration in all compilers (#11183)
* Enable explicit TBlob declaration in all compilers

This fixes problems when linking gcc compiled IE with clang compiled
applications.

Previous to this change, only clang compilers would consider TBlob<T>
templated types as declared externally. When *declared* explictly (with
the `extern template` syntax), the C++ spec says
that any inline methods of the templated class (such as TBlob<T>
constructors) should be ignored in favor of the externally instantiated
version of that templated type:

    "An explicit instantiation declaration (an extern template) skips
    implicit instantiation step: the code that would otherwise cause an
    implicit instantiation instead uses the explicit instantiation
    definition provided elsewhere (resulting in link errors if no such
    instantiation exists)."

However, when IE is compiled with gcc, it does not see the explicit
`extern template` declarations of TBlob<T> (due to the `#ifdef
__clang__` guards in `ie_blob.h`). As an end result, presumably due to
link-time-optimizations during IE library compilation(?), none of the
TBlob<T> implementations are actually included in the IE dynamic
libraries.

* Fix warnings for windows

* Fix typo
2022-03-30 18:56:49 +03:00
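(Editor's note: as a reminder of the `extern template` mechanism this entry refers to, here is a small generic illustration; the Buffer type is hypothetical, not the real TBlob declaration.)

```cpp
// Generic illustration of an explicit instantiation declaration.
#include <vector>

template <typename T>
class Buffer {
public:
    Buffer() : data_(16) {}  // inline ctor; normally instantiated implicitly at use
private:
    std::vector<T> data_;
};

// Explicit instantiation *declaration*: this translation unit must not
// instantiate Buffer<float> itself; exactly one other TU has to provide
//     template class Buffer<float>;
// or any use of Buffer<float> fails at link time, as the quoted wording
// from the C++ specification describes.
extern template class Buffer<float>;

int main() { return 0; }
```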
Ilya Sharikov
1906c27c2d Update mo_tool parameter for converter.py (#11319)
* Update mo_tool parameter for converter.py

* Hot fix
2022-03-30 18:32:53 +03:00
Egor Shulman
e507b630eb Use CC in CPUSpecificTransform (#11231) 2022-03-30 18:26:35 +03:00
Gleb Kazantaev
8317493e65 Update Accuracy Check inside Func tests (#11314) 2022-03-30 17:44:03 +03:00
Alexey Lebedev
ee31b648d1 [docs] port from release branch (#11309)
* save work

* Add common snipp

* update ie pipeline with python snippets

* ov_common_snippet

* Python snippets for graph construction

* Fix docs

* Add missed old api snippets

* Fix names

* Fix markers

* Fix methods call
2022-03-30 17:03:29 +03:00
Sergey Shlyapnikov
5dc3da926c [GPU] Add fusion through feature (#9674)
* [GPU] Add fuse through feature

* Apply review comments


Tasks marked as not passed:
* https://dev.azure.com/openvinoci/dldt/_build/results?buildId=313769&view=results
* https://dev.azure.com/openvinoci/dldt/_build/results?buildId=313766&view=results
2022-03-30 16:43:32 +03:00
Alexander Kozlov
4b4bd7399c Fixed conflicts (#11332) 2022-03-30 16:10:03 +03:00
Anastasia Kuporosova
9fa5150d71 [Python API][Docs] Fix references for several classes (#11251) 2022-03-30 13:29:30 +03:00
Ilya Lavrenov
932f8bf767 Install 97-myriad-usbboot.rules to install_dependencies (#11301) 2022-03-30 13:03:42 +03:00
Ilya Churaev
17f8f7ec25 Fixed typo in exception message (#11322) 2022-03-30 12:45:09 +03:00
Vladimir Paramuzov
fccd5d4445 [GPU] ShapeOf op (#10983) 2022-03-30 12:27:04 +03:00
Ivan Novoselov
1beb7158d5 [Snippets] Develop Snippets test infrastructure (#10605) 2022-03-30 12:21:19 +03:00
Sergey Shlyapnikov
cd703580b6 [GPU] Host time optimizations for in order queue (#11255)
* [GPU] Host time optimizations

* Fix failed fusings_gpu/permute_eltwise_loop.basic/* tests
2022-03-30 10:53:53 +03:00
Ivan Tikhonov
f13b6252e9 Fix insertion of tensor names after UnrollTensorIterator transformation (#11276)
* revert previous version of convert_seq_to_ti transformation

* try to check that outputs of TI are connected to Result nodes

* add unit tests

* fix codestyle

* fix Memory tests

* revert local change

* revert local change

* replace duplicated code with lambda
2022-03-30 10:26:04 +03:00
Maxim Andronov
72f802f282 [CPU] Fix Parameter -> Result model for dynamic case (#10764) 2022-03-30 10:20:52 +03:00
Anton Pankratov
614a6a3457 [CPU] Graphs are created in compiled_model constructor (#10872) 2022-03-30 10:02:12 +03:00
Vladimir Gavrilov
e7b35c3b00 nGraph reference for the operation RDFT. (#11175)
* Written nGraph reference for the operation RDFT.

* Used std::reverse() algorithm to simplify the function reverse_shape() from fft_common.cpp.

* Added assert into the function offset_from_coords_and_strides().

* Deleted redundant variable.

* Deleted redundant functions from the reference implementation of (I)DFT.

* Renamed the method reverse_shape() in fft_common.hpp.

* Code style fix.
2022-03-30 09:38:05 +03:00
Alexander Zhogov
1386f52dd6 Azure CI: Disable Model Optimizer UT 2022-03-30 09:01:05 +03:00
Ilya Lavrenov
9f923ba39f Install only proper GNA library files (#11243) 2022-03-30 08:58:31 +03:00
yanlan song
f6e5ec9684 return device infer request in passthrough mode (#11253)
Signed-off-by: fishbell <bell.song@intel.com>
2022-03-30 09:21:13 +08:00
Ilya Lavrenov
4145291e84 Added f16 tests for tensor (#11273) 2022-03-30 01:11:07 +03:00
Anton Chetverikov
5d719cbc7b [MO] Remove _output_shape nodes attribute while loading TF meta graph (#11078)
* Remove _output_shape attribute while loading meta_graph

* Update tools/mo/openvino/tools/mo/front/tf/loader.py

Simplify code

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

* Add comment about the problem

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-29 21:27:00 +03:00
Gleb Kazantaev
01b701349d Move tensor utils to common utils (#11306) 2022-03-29 21:08:55 +03:00
Alina Kladieva
458378e9e7 Skip sporadic GPU canInferOnUserQueue test case (#11310) 2022-03-29 19:48:37 +03:00
Ilya Lavrenov
fb99fd1d2f Try to remove MO install rules (#11208)
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-03-29 19:24:30 +03:00
Dmitrii Khurtin
4b0417018a [GNA] Fixed clang 13 build issue (#11238) 2022-03-29 19:06:50 +03:00
Vitaliy Urusovskij
d261da4820 Replace get_os_name() with get_os_type() in get_lib_path() (#11300)
On an Ubuntu system `get_os_name()` returns "ubuntu",
but `get_os_type()` returns "Linux", which is what the tests expect
2022-03-29 18:17:38 +03:00
Evgenya Stepyreva
ed030e113e StridedSlice default shape inference (#11292) 2022-03-29 16:42:52 +03:00
Gleb Kazantaev
5b0a1fe7bb Move FunctionsComparator to common utils (#11277)
* Move FunctionsComparator to common utils

* Fix includes
2022-03-29 14:51:17 +03:00
Jade Cho
070c47ec09 [GPU] fix a bug of onednn sum post-op (#11254)
+ Add a unit test for this.
2022-03-29 20:35:09 +09:00
Mikhail Letavin
23732417c5 [GPU] Propagate output flag to mutable_data deps to ensure correct event creation (#11021) (#11021) 2022-03-29 13:08:58 +03:00
Maxim Andronov
d764fe7d27 [CPU] Prohibit int8 desc convolution creation in case s8 precision on data input (#10730) 2022-03-29 09:44:06 +00:00
Gleb Kazantaev
866f006a83 Transformations Python API (#10971)
* Keep changes

* Update tests

* Keep changes

* Cleanup

* Add predicates support; new pattern ops; new tests

* support for public passes; added tests

* Fix compilation warning

* Fix code style

* Added docstrings; code cleanup

* Update python API tests

* Fix build on Windows

* Revert back pass registration logic

* Fix flake8 errors

* Update docstrings; fix utils.hpp

* Cleanup

* Cleanup

* Fix flake errors

* Fix mypy

* Skip mypy for passes
2022-03-29 11:41:23 +03:00
Bo Liu
02c60c76ab Paddle FasterRCNN Ops Conversion: greater_than, less_than, gather, floor (#9657)
* Paddle FasterRCNN Ops Conversion: greater_than, less_than, gather, floor

* Apply suggestions from code review

* fix 'gather' testcase failure issue on CI

* implement 'axis' input for 'Gather' Op conversion with testcase comment; use common function for all elementwise Ops
2022-03-29 16:20:37 +08:00
Alexey Lebedev
8f88889876 [docs] python snippets for devices master (#11176)
* Update CPU docs

* update GPU docs

* update with sphinxtab

* Fix docs

* Add preprocessig snippet

* Fix path

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-29 11:01:53 +03:00
Anton Pankratov
bc9f140bb4 Fixed element type destruction in global scope (#11211)
* Fixed element type destruction in global scope

* Fixed misprint
2022-03-29 10:21:41 +03:00
Nikolay Tyukaev
8aa98ce0d5 fix wildcard sphinxdirective (#11264) 2022-03-28 20:56:56 +03:00
Nikolay Tyukaev
da9596bade cvs-80083 (#11281) 2022-03-28 20:53:21 +03:00
Dmitry Belyakin
30ec7366bb [OMZ]: update submodule (#11279) 2022-03-28 19:48:53 +03:00
Dmitrii Khurtin
a925ec6a29 changed symlink order of libgna (#11267) 2022-03-28 19:48:08 +03:00
Ilya Lavrenov
19d0e5ba52 CMAKE: IE_VERSION => OpenVINO_VERSION (#11242)
* IE_VERSION => OpenVINO_VERSION

* Reverted installation of python unconditionally
2022-03-28 19:32:21 +03:00
Karol Blaszczak
8b591c141e Update installing-openvino-overview.md (#11271) 2022-03-28 15:15:32 +00:00
Dmitry Belyakin
27741d316e [OMZ]: update submodule (#11239)
* [OMZ]: update submodule

* bump omz ver
2022-03-28 15:45:35 +03:00
Valentin Dymchishin
52937967bb Add dynamism in memory tests (API 2) (#10589) 2022-03-28 12:51:53 +03:00
Irina Efode
76e2f2697f [CONFORMANCE] Fix run of Conformance tests (#11225) 2022-03-28 12:29:27 +03:00
Ilya Churaev
10698abc29 Revert vpu custom kernel master (#11228)
* Added original VPU custom kernel doc

* Moved to new API

* Added links from introduction

* Fixed intro
2022-03-28 12:18:29 +03:00
RavirajSitaram
30884a8161 Fix -Winfinite-recursion error reported by compiler (#11247)
Signed-off-by: Raviraj P Sitaram <raviraj.p.sitaram@intel.com>
2022-03-28 11:54:11 +03:00
Nikolay Tyukaev
aded1a2c70 a bunch of doc fixes (#11230) (#11237) 2022-03-25 20:22:43 +03:00
Evgenya Stepyreva
2d7f46b95a Update Divide_1.md (#11232) 2022-03-25 15:41:55 +00:00
Helena Kloosterman
05f97f2bb5 Update two paragraphs in performance hints docs (#11223) 2022-03-25 16:07:46 +03:00
Ilya Churaev
4dc0d6e711 Fixed comments after #11155 (#11202)
* Fixed comments after #11155

* Add information about plugin option
2022-03-25 16:02:18 +03:00
Smirnov Grigorii
a2705b1fed f64 to f32 and 0.0. to 0.1 (#11127) 2022-03-25 15:10:23 +03:00
Andrey Somsikov
3442e90144 Fix setupvars.bat patching (#11160)
* Fix setupvars.bat patching

setupvars.bat should not be patched for regular Debug and Release
configurations.

* Use SRTEQUAL for cmake string comparison
2022-03-25 14:15:04 +03:00
Eddy Kim
200026f28b Missing backslashes right after mo (#11216) 2022-03-25 13:39:33 +03:00
Nikita Malinin
5e3e8d6084 [POT] Update POT with the ResultRenaming flag (#10989)
* Update POT with the ResultRenaming flag

* Update flag

* Update gold
2022-03-25 13:22:20 +03:00
Mikhail Nosov
8bfde58fd9 [Core] Improve performance for 'ov::Model::add_output' (#11052)
* Improve performance for 'ov::Model::add_output'

On first call of `add_output(tensor_name)` all available tensor names are cached.
Next calls take nodes from cache which significantly reduces complexity.
Cache is invalidated if topological cache is not valid or cache points to incorrect output (no tensor name of this node anymore)

The same caching is done for 'add_output(op_name, output_index)'

Tests:
- Verifies that adding outputs to all nodes has linear complexity O(N), not O(N^2)
- Verifies cache invalidation scenarios

* Fix python tests

* Update topological cache after add_output(Output<Node>) by adding result to the end of cached ops

* Add 'm_shared_rt_info' to 'result node just for consistency (there is actually no scenario which may fail due to absence of this info for Result

* Added test cases to verify that names cache should be cleared on refresh of 'get_ordered_ops'
2022-03-25 12:17:41 +03:00
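(Editor's note: a rough, hypothetical sketch of the name-to-node memoization described in the entry above; generic types, not the actual ov::Model code. The cache is built once, reused in O(1) lookups, and dropped when the topology changes.)

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Node {
    std::vector<std::string> tensor_names;
};

class ModelLike {
public:
    // First lookup builds the cache in O(N); subsequent lookups are O(1).
    Node* find_by_tensor_name(const std::string& name) {
        if (!cache_valid_) {
            name_cache_.clear();
            for (auto& node : nodes_)
                for (const auto& n : node.tensor_names)
                    name_cache_[n] = &node;
            cache_valid_ = true;
        }
        auto it = name_cache_.find(name);
        return it == name_cache_.end() ? nullptr : it->second;
    }
    void invalidate() { cache_valid_ = false; }  // e.g. after a topology refresh
    std::vector<Node> nodes_;
private:
    std::unordered_map<std::string, Node*> name_cache_;
    bool cache_valid_ = false;
};

int main() {
    ModelLike m;
    m.nodes_.push_back(Node{{"relu_1"}});
    m.nodes_.push_back(Node{{"conv_2", "conv_2:0"}});
    return m.find_by_tensor_name("conv_2") ? 0 : 1;  // first call populates the cache
}
```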
Anastasia Popova
45a21bf902 Fixed setting of input name for case of multiple outputs from Parameter. (#11019)
* Fixed setting of input name for case of multiple outputs from Parameter.

* Small correction.
2022-03-25 12:11:07 +03:00
Sergey Lyubimtsev
fae1c27657 Add missed dependencies for OpenVINO Runtime wheel build (#11193)
* Disable IncrediBuild

* Add dependencies for frontends

* new line

* Add dependencies for frontends

* Enable IncrediBuild
2022-03-25 11:42:40 +03:00
Oleg Pipikin
aa0ab0e995 Move TF frontend tests (#11038) 2022-03-25 08:28:07 +03:00
Oleg Pipikin
5b18677f1b Add constant folding to hetero to avoid dynamism on GPU (#10572)
* Add constant folding to hetero to avoid dynamism on GPU

* Aplly comments

* Apply comments 2

* Fix1
2022-03-25 07:10:23 +03:00
Ilya Lavrenov
a883dc0b85 DOCS: ported changes from 2022.1 release branch (#11206)
* Extensibility guide with FE extensions and remove OV_FRAMEWORK_MAP from docs

* Rework of Extensibility Intro, adopted examples to missing OPENVINO_FRAMEWORK_MAP

* Removed OPENVINO_FRAMEWORK_MAP reference

* Frontend extension detailed documentation

* Fixed distributed snippets

* Fixed snippet inclusion in FE extension document and chapter headers

* Fixed wrong name in a snippet reference

* Fixed test for template extension due to changed number of loaded extensions

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Minor fixes in extension snippets

* Small grammar fix

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* DOCS: transition banner (#10973)

* transition banner

* minor fix

* update transition banner

* updates

* update custom.js

* updates

* updates

* Documentation fixes (#11044)

* Benchmark app usage

* Fixed link to the devices

* More fixes

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed several hardcoded links

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Updated documentation for compile_tool (#11049)

* Added deployment guide (#11060)

* Added deployment guide

* Added local distribution

* Updates

* Fixed more indentations

* Removed obsolete code snippets (#11061)

* Removed obsolete code snippets

* NCC style

* Fixed NCC for BA

* Add a troubleshooting issue for PRC installation (#11074)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

* add a troubleshooting issue

* update

* update

* fix CVS-71846

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* DOCS: fixed hardcoded links  (#11100)

* Fixes

* Use links

* applying reviewers comments to the Opt Guide (#11093)

* applying reviewers' comments

* fixed refs, more structuring (bold, bullets, etc)

* refactoring tput/latency sections

* next iteration (mostly latency), also brushed the auto-batching and other sections

* updates sync/async images

* common opts brushed

* WIP tput redesigned

* minor brushing of common and auto-batching

* Tput fully refactored

* fixed doc name in the link

* moved int8 perf counters to the right section

* fixed links

* fixed broken quotes

* fixed more links

* add ref to the internals to the TOC

* Added a note on the batch size

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* [80085] New images for docs (#11114)

* change doc structure

* fix manager tools

* fix manager tools 3 step

* fix manager tools 3 step

* new img

* new img for OV Runtime

* fix steps

* steps

* fix intendents

* change list

* fix space

* fix space

* code snippets fix

* change display

* Benchmarks 2022 1 (#11130)

* Minor fixes

* Updates for 2022.1

* Edits according to the review

* Edits according to review comments

* Edits according to review comments

* Edits according to review comments

* Fixed table

* Edits according to review comments

* Removed config for Intel® Core™ i7-11850HE

* Removed forward-tacotron-duration-prediction-241 graph

* Added resnet-18-pytorch

* Add info about Docker images in Deployment guide (#11136)

* Renamed user guides (#11137)

* fix screenshot (#11140)

* More conservative recommendations on dynamic shapes usage in docs (#11161)

* More conservative recommendations about using dynamic shapes

* Duplicated statement from C++ part to Python part of reshape doc (no semantical changes)

* Update ShapeInference.md (#11168)

* Benchmarks 2022 1 updates (#11180)

* Updated graphs

* Quick fix for TODO in Dynamic Shapes article

* Anchor link fixes

* Fixed DM config (#11199)

* DOCS: doxy sphinxtabs (#11027)

* initial implementation of doxy sphinxtabs

* fixes

* fixes

* fixes

* fixes

* fixes

* WA for ignored visibility attribute

* Fixes

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Ilya Naumov <ilya.naumov@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-24 22:27:29 +03:00
Anastasia Kuporosova
8118147a96 [Python API] Fix documentation for Core API (#11187)
* [Python API] Fix documentation for Core API

* fix style
2022-03-24 19:56:45 +03:00
Ilya Churaev
a8f9863f72 Add compiler requirements master (#11190)
* Added software tab for Linux installer

* Added information for apt and yum

* Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-03-24 14:25:49 +00:00
Sofya Balandina
0c6c3b0d74 [subgraphsDumper] Serialize the lightest operations from existing (#11097) 2022-03-24 16:43:46 +03:00
Irina Efode
f156429b11 [IE TESTS] API Conformance review (#11133)
* [IE TESTS] API Conformance review. Part 1

* properties

* properties
2022-03-24 16:34:05 +03:00
Maxim Shevtsov
7dc1d0935c Auto batching improved tests (#11179)
* wip remote tests2, fixed smoke_canInferOnUserContext

* completed the OV 1.0 tests for remote blobs

* updated OV 2.0 tests for remote blobs with auto-batching (using the ngraph func that is reshape-able by the batch)

* re-using the DetectionOutput-based ngraph func that is 100% batch-reshapable
2022-03-24 16:23:00 +03:00
Ilya Churaev
b5dbabe41d Fixed registration of template plugin (#11155)
* Fixed registration of template plugin

* Added option for template plugin

* Fixed static build
2022-03-24 14:39:20 +03:00
Mikhail Nosov
9d865a2133 [Model Caching] Enabling per-device cache dir (#10774)
* Initial commit

10 more caching tests

* Fix clang-format

* Added brief explanations to each test

* Fix review comments
2022-03-24 11:24:47 +03:00
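
A minimal sketch of how the per-device cache directory enabled in the PR above might be configured. It assumes the existing ov::cache_dir property can be applied both globally and per device, as the PR title suggests; paths and device names are examples only.

```cpp
// Hypothetical usage of per-device model caching (see "[Model Caching] Enabling
// per-device cache dir" above). Assumes ov::cache_dir is accepted as a
// device-specific property as well as a global one; paths are illustrative.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    core.set_property(ov::cache_dir("ov_cache"));             // global cache location
    core.set_property("GPU", ov::cache_dir("ov_cache/gpu"));  // per-device override (assumed)

    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "GPU");          // a second run would load from cache
    return 0;
}
```
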
Wilson Seok
0cc119cd86 Fix N/A reported ops in conformance test (#11165)
* fix N/A ops in OpImplCheck

* update RandomUniform with parameter

* modify Constant function
2022-03-24 09:59:37 +03:00
Vladislav Volkov
5e6fd8c721 Validation of invalid names in CC factory (#11170) 2022-03-24 09:47:10 +03:00
Gleb Kazantaev
714d7a79c7 Fix SimplifySecondInputOfReshape (#11128) 2022-03-23 23:00:39 +00:00
Vladimir Dudnik
3a8800cbd2 [Docs][IE Samples] fix hard links (#11144)
* fix hard links

* change encoding

* fix TM

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-03-23 23:48:08 +03:00
Irina Efode
001d098410 [IE TESTS][CONFORMANCE] Fix issue with checkmarks for op impl check (#11178) 2022-03-23 20:10:49 +03:00
Irina Efode
5847e3cb44 [IE TESTS][CONFORMANCE] Allow to serialize report to CSV (#11119)
* [IE TESTS][CONFORMANCE] Allow to serialize report to CSV

* Small improvement

* Move import
2022-03-23 17:36:07 +03:00
Wilson Seok
32886cf90f change to fp16 ov::Model (#11018) 2022-03-23 17:09:20 +03:00
Fedor Zharinov
fe3270c232 [benchmark_app]Exception handling in callback (#11123)
* Exception handling in callback

* stylefix
2022-03-23 16:55:41 +03:00
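
A minimal sketch of handling an exception raised inside an asynchronous-inference callback, in the spirit of the benchmark_app fix above; the model path and the error handling shown here are illustrative, not the actual benchmark_app code.

```cpp
#include <openvino/openvino.hpp>
#include <exception>
#include <iostream>

int main() {
    ov::Core core;
    auto compiled = core.compile_model("model.xml", "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // The OpenVINO 2.0 callback receives a std::exception_ptr; rethrow it to
    // surface errors instead of letting the request appear to hang.
    request.set_callback([](std::exception_ptr ex) {
        if (ex) {
            try {
                std::rethrow_exception(ex);
            } catch (const std::exception& e) {
                std::cerr << "Inference failed: " << e.what() << std::endl;
            }
        }
    });

    request.start_async();
    request.wait();
    return 0;
}
```
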
Fedor Zharinov
1315cfaa64 Add CHW/HWC heuristics for tensors with 3 dimensions (#10817) 2022-03-23 16:54:56 +03:00
Alexey Varyzgin
75bbfe336d [CPU] API2.0 input issues (#10222) 2022-03-23 16:41:59 +03:00
hyunback kim
fc2a8eb1a6 [GPU] Fix unused variable build issue (#11157)
* [GPU] Fix unused variable build issue

Signed-off-by: hyunback <hyunback.kim@intel.com>

* Update to remove unused variables.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-23 16:26:09 +03:00
Artur Kulikowski
3674ec403f Update ONNX to version 1.11.0 (#11046) 2022-03-23 13:52:33 +01:00
Ilya Churaev
a63e2080e1 Added group for transformation passes (#11102) 2022-03-23 14:08:04 +03:00
Chen Peter
066882579d [AUTO] Fix mess table in doc (#11158)
Signed-off-by: Peter Chen <peter.chen@intel.com>
2022-03-23 13:35:28 +03:00
Nikita Malinin
b50079143f Update IRReader with the ResultRenaming flag (#10988) 2022-03-23 13:16:40 +03:00
Ilya Churaev
eb80b28624 Added groups for core headers (#11080) 2022-03-23 12:19:07 +03:00
Alexey Lebedev
3de9189d50 [core][python] ov::serialize (#10945)
* add ov::serialize

* create python binding

* update python tools

* use ov::serialize in benchmark app

* remove serialize from python offline_transformations

* fix import

* revert pot

* update docs

* apply review comments

* add const

* make bin path optional

* Add docs

* add compare test
2022-03-23 11:44:00 +03:00
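
A short sketch of the ov::serialize flow added above; the file names are examples, the header layout is assumed, and making the bin path optional follows the PR's own bullet list.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.onnx");
    // Save the model as OpenVINO IR; per the PR, the .bin path can be omitted.
    ov::serialize(model, "model.xml", "model.bin");
    return 0;
}
```
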
Yuan Hu
af874e7754 update AUTO Debug doc with snippets (#11117)
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-23 16:31:03 +08:00
Nikita Malinin
eb74afe417 Fix FBC missed spaces (#11091) 2022-03-23 11:17:42 +03:00
Liubov Talamanova
99f0093615 [POT] Fixed bug with out port for nodes with multiple outputs (#10932)
* Fixed bug with out port

* Add test

* Fix test

* Change test model
2022-03-23 11:17:06 +03:00
Edward Shogulin
cd361ecae1 [CPU] Code generation: Floor & Ceiling & Round implementation (#10666) 2022-03-23 10:32:25 +03:00
Ilya Churaev
9bc7ebda7b Add couple words about tensor names master (#11116)
* Added more information about tensor names

* Fixed comment and added documentation for extensions

* Fixed typo
2022-03-23 10:04:21 +03:00
Ilya Churaev
1ad4a99478 Port api reference (#11152) 2022-03-23 10:03:16 +03:00
Oleg Pipikin
6710d78e63 Move shared frontend tests (#11036)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-23 09:00:34 +03:00
Mingyu Kim
ac58e8eaf1 [GPU] Temporal WA to fix accuracy issue in a network (#11016)
resample_opt generates wrong result when feature depth is two.
2022-03-23 14:40:20 +09:00
Wang, Yang
e259548530 Add logic test case for auto batching enable (#10626)
* Add test case for the loadNetwork with Auto Batching.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Enable logic test case for GPU.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Enable property for config key 'AUTO_BATCH_DEVICE_CONFIG'.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Omit {}.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add common test for the property ALLOW_AUTO_BATCHING.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add common test for AUTO Batching plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-03-23 09:30:19 +08:00
Alexey Suhov
4a47c95c04 Update release version in readme (#11145) 2022-03-23 01:06:08 +03:00
Yuan Xu
2693ff3e48 fix a reference link (#11050) 2022-03-22 22:51:56 +03:00
Egor Duplensky
090799a362 [Samples] Fix cpp benchmark_app help message (#11055) 2022-03-22 22:51:05 +03:00
Maksim Kutakov
3ae167b88e Updated note about latency, added note about mem usage with dynamic shapes (#11126) 2022-03-22 22:45:38 +03:00
Karol Blaszczak
8cac305b8f [DOCS]transition_guide_intro_language (#11134)
a few language suggestions and grammar issues
2022-03-22 22:27:19 +03:00
Irina Efode
eeea2de3eb Extend PDPD regex by legacy (#10994) 2022-03-22 15:09:33 +03:00
Ilya Lavrenov
2f46890444 Removed obsolete scripts (#11107) 2022-03-22 14:52:03 +03:00
Nikita Semaev
5dcb6c2cee Correcting a regular expression (#10822) 2022-03-22 13:41:23 +03:00
Dmitrii Khurtin
2ae7e45edb [GNA] Moving PWL op to GNA (#8207)
* Moving PWL to ngraph

* improving the running time of php_search; refactoring the pwl operation

* fixed errors & refactored code

* moved PWL op to GNA

* Update src/plugins/intel_gna/ops/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/reference/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* fixed compilation error

* Update inference-engine/tests/unit/gna/ngraph/transformations/gna_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* added some tests; changed algorithm of checking accuracy of pwl; refactoring

* added first and last segments; added fq and fixed errors

* fixed after review & rewrote some tests on ngraph

* removed debug logs & fixed code style check error

* s/ngraph_helper/ngraph_util

* removed TRANSFORMATIONS_API in PWLApproximation class declaration

* removed OPENVINO_API in Pwl class declaration

* replaced the deprecated version of evaluate() with a new one

* fixed some problems after reviewing

* fixed a problem when a value of function of left point of segment is less than minimum of function

* corrected a value of the right point of last segments

* s/OPENVINO_RTTI/OPENVINO_OP

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
2022-03-22 13:24:57 +03:00
Alexander Sesorov
fb7249d496 [bandit] Add nosec for subprocess (#10941) 2022-03-22 12:56:54 +03:00
Ekaterina Aidova
37adb6d8a9 Docs: update AC info in API 2.0 migration guide (master) (#11113) 2022-03-22 12:42:13 +03:00
Ekaterina Aidova
1b9fbf25fd [OMZ]: update submodule (#11069) 2022-03-22 12:41:59 +03:00
Anastasia Kazantaeva
46df794908 Add changes to contribution guide (#10675) 2022-03-22 11:51:53 +03:00
Andrey Zaytsev
e782cd18b7 Feature/azaytsev/img updates (#11110)
* Updated images

* Updated images
2022-03-22 11:13:17 +03:00
Ilya Churaev
2a43c64336 Removed indentation (#11112) 2022-03-22 10:09:24 +03:00
Oleg Pipikin
7ccc48110d Move common test utils (#11022)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-22 09:52:38 +03:00
Yury Gaydaychuk
1b9c58125c The check in the setShape() method for whether the blob was preallocated was moved to the case of increased shape (#11009)
* allow setShape for a preallocated blob in the case of non-increased memory size

* codestyle

* test cases added

* pytest aligned
2022-03-22 09:35:37 +03:00
Sofya Balandina
dfac195ffe Fix saving report in dir with more than one non-existing level (#11088) 2022-03-22 09:26:08 +03:00
Jade Cho
a7df1531db [GPU] Code refactoring to choose between binary_add and sum (#10724)
+ Fix colorization-sig accuracy issue when using oneDNN
  - Fix memory crash in the reuse_eltwise_sum_post case in oneDNN and memory_pool
  - Print node in/out gpu_usm_mem addresses at OV_GPU_Verbose >= 1
+ Check the size of the z spatial axis when checking for a full tensor.
+ Remove program_helpers functions.

Co-authored-by: hyunback <hyunback.kim@intel.com>
2022-03-22 14:58:36 +09:00
Aleksandr Korolev
e8288eb31d [VPU] removal deprecated test (#10597)
* [VPU] removal deprecated test

* Adding ie plugin cache reset to avoid myriad device is not opened issue

* Review changes

* Review changes
2022-03-21 21:14:41 +03:00
Mikhail Nosov
d84d00e2d6 Fix issue with output's friendly name after post-processing (#11095)
Scenario:
- Node "Split" with multiple outputs (e.g. 3). All outputs are connected to "Result"s
- Add post-processing step (e.g. convert element type, can be also implicit)

Issue: after post-processing, 3 new results will be created, each with the "Split" friendly name, which is inconsistent with IRv10 rules
Fix:
- For nodes with multiple outputs, add '.<idx>' suffix to new output's friendly name
- If no post-processing is applied, return immediately, keeping original results as is

Tests:
- Split with 3 outputs where 2 outputs have post-processing.
- Split with 3 outputs, post-processing doesn't create any nodes
2022-03-21 20:26:24 +03:00
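
An illustrative sketch of the scenario described above: a multi-output Split whose results get a post-processing step, after which each new result's friendly name is expected to carry the '.<idx>' suffix. The shapes, axis, and target precision are examples only.

```cpp
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

int main() {
    using namespace ov;
    auto param = std::make_shared<opset8::Parameter>(element::f32, Shape{1, 6});
    auto axis = opset8::Constant::create(element::i64, Shape{}, {1});
    auto split = std::make_shared<opset8::Split>(param, axis, 3);
    split->set_friendly_name("Split");
    auto model = std::make_shared<Model>(split->outputs(), ParameterVector{param});

    // Add an explicit post-processing step to every output.
    preprocess::PrePostProcessor ppp(model);
    for (size_t i = 0; i < model->outputs().size(); ++i)
        ppp.output(i).postprocess().convert_element_type(element::f16);
    model = ppp.build();  // new results expected as "Split.0", "Split.1", "Split.2"
    return 0;
}
```
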
Sergey Lyubimtsev
dacdf67c2c Update Benchmark guides (#11076) (#11085)
* - Update Benchmark Tool usage message
- Remove non-existent paths
- Fix examples
* remove reference on FPGA

(cherry picked from commit 3caa77eb30)

# Conflicts:
#	samples/cpp/benchmark_app/README.md
2022-03-21 19:31:17 +03:00
Maxim Gordeev
3ac6e95ead [IE Samples] Fixed hanging of samples if InferImpl() throws exception (#11075)
* [IE Samples] Fixed hanging of samples if InferImpl() throws exception

* improved re-throw an exception
2022-03-21 19:30:30 +03:00
Evgenya Stepyreva
2f0620600f Reshape documentation (#10901)
* Reshape documentation

* Converting Model: reshape mentioned, Supported Devices: no shape inference mentioned

* demos removed
2022-03-21 19:13:24 +03:00
Liubov Talamanova
bdc89b1571 Moved quantization templates to openvino/tools/pot (#10814) 2022-03-21 14:17:55 +03:00
Fedor Zharinov
0b444ab2db [benchmark_app]Show network original I/O info (#10694)
* Show network original I/O info

* additional no-name case check

* stylefix

* Update samples/cpp/benchmark_app/main.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

* Update samples/cpp/benchmark_app/main.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2022-03-21 13:57:56 +03:00
Alexey Lebedev
332d27ca82 Replace enforcebf16 and qb by infer_precision (#11007) 2022-03-21 13:36:00 +03:00
Daria Mityagina
51ee3f81cb [XLink] - run XLink tests in pre-commit (#10302)
* [XLink] - tests to smoke scope

* [XLink] - small change in XLink related file to trigger ie-tests-windows-myriadx

* [XLink] - azure windows and linux

* [XLink] - azure windows and linux

* [XLink] - azure windows and linux - change dir?

* [XLink] - azure windows and linux - change dir?

* [XLink] - azure windows and linux - install?

* [XLink] - azure windows and linux - xlink cmake

* [XLink] - azure windows and linux - XLinkTests because another target with the same name already exists

* [XLink] - azure windows and linux - XLinkTests because another target with the same name already exists

* [XLink] - azure windows and linux - install TARGETS given target XLinkTests which does not exist

* [XLink] - azure windows and linux - remove smoke
2022-03-21 13:28:44 +03:00
Yuan Xu
be9bbb676d sync the same updates with 22/1 (#11071)
* Add Overview page

* Revert "Add Overview page"

* updates

* update

* updates
2022-03-21 09:11:39 +00:00
Sergey Shlyapnikov
782ef6b42e [GPU] Add estimation of required memory for CAFFE_OPT_2 stage of GPU Detection Output (#11001) 2022-03-21 12:06:07 +03:00
Tomasz Dołbniak
b480a49d66 RandomUniform-8: shape inference fix (#11047)
* Shape inference fix

* Update src/core/src/op/random_uniform.cpp

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

* Update src/core/tests/type_prop/random_uniform.cpp

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-21 12:04:51 +03:00
Edward Shogulin
a887b41db6 [snippets] Tokenization fix: fused names (#10997) 2022-03-21 11:44:43 +03:00
hyunback kim
ad179980d9 [GPU] Fix implicit concat padding offset issue in OneDNN (#11062)
Inserting padding into a oneDNN primitive has an issue with the implicit concat behavior.
Deconv in oneDNN initialized the output buffer to 0 including the padding area, but the padding area should be reserved.
Use the oneDNN offset from program_node in/out lower_padding instead of the oneDNN memory desc.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-21 16:57:39 +09:00
Evgeny Kotov
d7005af4a5 [GNA] add 3D shape input support for StridedSlice (#10818)
* add 3D shape to test and rename crop4d to strided_slice

* remove ConvertStridedSliceToCropNegative2 since 3D is now supported

* add myriad functional tests to skip-list
2022-03-21 10:15:59 +03:00
Mateusz Tabaka
c18030207c [ONNX] Avoid allocating vector for constants if possible (#10860)
For FLOAT, DOUBLE, INT32, INT64, UINT64 we can get a pointer
to data from TensorProto and pass it to Constant constructor.
2022-03-20 14:07:50 +01:00
Ilya Lavrenov
5390aa7ebc Updated multi code snippets (#11037) 2022-03-20 15:44:33 +08:00
Yuan Hu
72e8661157 [Auto PLUGIN] update Auto docs (#10889)
* update Auto docs

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove vpu, fix a mistaken in python code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update MYRIAD device full name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update API name

the old API uses the name Inference Engine API;
the new API uses the name OpenVINO Runtime API 2.0

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update tab name, and code format

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix AUTO4 format issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update set_property code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* mv code into .cpp and .py

modify the devicelist part according to the review

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove priority list in code and document

modify the beginning of the document
remove performance data
remove old API
use compile_model instead of set_property
add an image about CPU acceleration

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix misprint and code not matching the document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix doc build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-19 18:25:35 +03:00
Maksim Derbasov
76fde1f7b0 Optimization for ie_memcpy (#10996) 2022-03-19 14:33:21 +03:00
Vladimir Paramuzov
b2110b352c [GPU] Update data structures for conv/pool/deconv params (#9641) 2022-03-19 11:11:07 +03:00
Irina Efode
3322b74bd9 [IE TESTS] Align Impl status (#11045) 2022-03-18 19:48:14 +03:00
Ilya Lavrenov
e3098ece7e DOCS: port changes from releases/2022/1 (#11040)
* Added migration for deployment (#10800)

* Added migration for deployment

* Addressed comments

* more info after the What's new Sessions' questions (#10803)

* more info after the What's new Sessions' questions

* generalizing the optimal_batch_size vs explicit value message

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Perf Hints docs and General Opt Guide refactoring (#10815)

* Brushed the general optimization page

* Opt GUIDE, WIP

* perf hints doc placeholder

* WIP

* WIP2

* WIP 3

* added streams and few other details

* fixed titles, misprints etc

* Perf hints

* movin the runtime optimizations intro

* fixed link

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* some details on the FIL and other means when pure inference time is not the only factor

* shuffled according to general->use-case->device-specifics flow, minor brushing

* next iter

* section on optimizing for tput and latency

* couple of links to the features support matrix

* Links, brushing, dedicated subsections for Latency/FIL/Tput

* had to make the link less specific (otherwise docs compilation fails)

* removing the Temp/Should be moved to the Opt Guide

* shuffled the tput/latency/etc info into separated documents. also the following docs moved from the temp into specific feature, general product desc or corresponding plugins

-   openvino_docs_IE_DG_Model_caching_overview
-   openvino_docs_IE_DG_Int8Inference
-   openvino_docs_IE_DG_Bfloat16Inference
-   openvino_docs_OV_UG_NoDynamicShapes

* fixed toc for ov_dynamic_shapes.md

* referring the openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors

* fixed main product TOC, removed ref from the second-level items

* reviewers remarks

* reverted the openvino_docs_OV_UG_NoDynamicShapes

* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference

* "No dynamic shapes" to the "Dynamic shapes" as TOC

* removed duplication

* minor brushing

* Caching to the next level in TOC

* brushing

* more on the perf counters ( for latency and dynamic cases)

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Updated common IE pipeline infer-request section (#10844)

* Updated common IE pipeline infer-reqest section

* Update ov_infer_request.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* DOCS: Removed useless 4 spaces in snippets (#10870)

* Updated snippets

* Added link to encryption

* [DOCS] ARM CPU plugin docs (#10885)

* initial commit

ARM_CPU.md added
ARM CPU is added to the list of supported devices

* Update the list of supported properties

* Update Device_Plugins.md

* Update CODEOWNERS

* Removed quotes in limitations section

* NVIDIA and Android are added to the list of supported devices

* Added See Also section and reg sign to arm

* Added Preprocessing acceleration section

* Update the list of supported layers

* updated list of supported layers

* fix typos

* Added support disclaimer

* update trade and reg symbols

* fixed typos

* fix typos

* reg fix

* add reg symbol back

Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>

* Try to fix visualization (#10896)

* Try to fix visualization

* New try

* Update Install&Deployment for migration guide to 22/1 (#10933)

* updates

* update

* Getting started improvements (#10948)

* Onnx updates (#10962)

* onnx changes

* onnx updates

* onnx updates

* fix broken anchors api reference (#10976)

* add ote repo (#10979)

* DOCS: Increase content width (#10995)

* fixes

* fix

* Fixed compilation

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Aleksandr Voron <aleksandr.voron@intel.com>
Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-03-18 17:48:45 +03:00
Irina Efode
2f5cb43cba Change CODEOWNERS file (#11041) 2022-03-18 17:28:53 +03:00
Taylor Yeonbok Lee
d539a04efb [GPU] Add multiple output support in kernel selector, helper, set_arguments (#10755) 2022-03-18 17:26:07 +03:00
Taylor Yeonbok Lee
e5ad30f194 [GPU] Added a new unittest to test the mapped external primitive of a loop is fused to other primitive (#10832) 2022-03-18 17:25:32 +03:00
Mateusz Bencer
fe406d1606 Cpp fix of python segfault, reverted pybind workaround (#10749)
* test fix of segfault

* styles applied

* added keep_alive to pybind

* remove redundant code

* fix json tests

* review remarks

* introduced correct path to dlls in CI

* removing passing path via env variable

* introduced cpp solution

* remove keep alive

* review remarks

* remove explicit removing model

* removed shared_objects from ir frontend

* core test updated

* unified approach to handle extensions by frontends

* added nullptr check

* Revert "added nullptr check"

This reverts commit 666f5e4489.

* Revert "unified approach to handle extensions by frontends"

This reverts commit bf85ac24a6.

* m_extensions declaration in Frontend

* added assert
2022-03-18 16:29:46 +03:00
Yuan Xu
a5362a4d58 Apply same changes from 22/1 to master (#11035)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments

* CVS-80775

* update support matrix with Python version

* fix formatting

* fix formatting

* CVS-71745

* update formatting

* fix formatting

* fix formatting

* fix links & errors

* fix formatting

* update bullet points

* update

* adjust the order

* update

* update

* updates

* update references

* update

* update

* apply same updates with 22/1

* minor fix
2022-03-18 12:07:07 +00:00
Maxim Vafin
c8f4f9b7db [MO] Fix swish value infer (#10802) 2022-03-18 14:56:37 +03:00
Maksim Kutakov
dfdbdb4601 [CPU] CPU plugin docs refactoring (#10970)
* CPU device documentation refresh

* Bfloat16 inference page aligned with the new API

* Bfloat16 inference section moved to CPU main

* First review comments applied

* Second review step comments applied

* OneDNN reference changed to the GitHub page

* AvgPool added to the oneDNN ops list
2022-03-18 14:56:22 +03:00
Ilya Churaev
a4d164eda4 Fixed macro for plugin registration. Register only required plugins (#11031) 2022-03-18 13:13:33 +03:00
Irina Efode
931313c03b Move Subgraph Dumper tool to RTTI declaration (#10862) 2022-03-18 13:01:53 +03:00
Anton Dudchenko
4cbdf9e737 [VPU] Fix MyriadPlugin build with enabled options of Conditional Compilation (#10811)
Added initialization for streamPacketDesc_t
CVS-80386
2022-03-18 12:20:15 +03:00
Yuan Hu
967f056761 [AUTOPLUGIN] update multi plugin document for ov2.0 (#10688)
* update multi document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update snippets ov::enableProfile

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use Anymap in snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix format and set property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix test document issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* removed NEW IE-CENTRIC API and upated set_property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update ov::optimal_number_of_infer_requests

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-18 11:31:36 +03:00
David Nam
000723acd0 Add ReadValue and Assign to template plugin tests (#9132)
* Add readvalue, assign to templte plugin test

* Fix clang error

* Fix clang error

* Remove unnecessary comment

* Fix type-casting error

* Fix ci issue regarding const value

* Change Function to Model

* Fix op scope

* Change way to get variable

* Fix type-casting error

* Set variable id to const

* Fix side-effect in ieFuncTests

* Implement Assign-3, ReadValue-3 in evaluates_map

* Correct setting attribute

* Correct setting attribute

* Remove unnecessarily added method

* Roll back v6

* Use member variable for variable_id in assign-3, read_value-3

* Get data pointer from host tensor

* Remove visitor API test for ReadValue-6, Assign-6

* Implement visitor api test for read_value-6, assign-6

* Fix clang error

* Split read_value and assign into each file for visitor test

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-18 07:13:40 +00:00
Mikhail Nosov
c4f5bce3b0 If CMAKE_BUILD_TYPE is not set - set it to 'Release' by default (#11026)
This behavior is already used by default because ONNX is enabled by default and thirdparty/onnx/onnx/CMakeLists.txt forces CMAKE_BUILD_TYPE to Release if it is not set.

It fixes the following issues:
- When the ONNX frontend is disabled, the source is built for Debug, which is very unexpected compared to Release with the ONNX frontend enabled
- When the ONNX frontend is disabled, even libopenvino.so could not be built due to some generated-makefile issues

It is set to 'Release' (not to 'Debug') to comply with the default behavior when ONNX is enabled (it is the default option working for most users)
2022-03-18 06:59:36 +03:00
Maxim Vafin
0be4bca954 Incremental improvement of MO user guide. (#11010)
* Incremental improvement of MO user guide.

* Apply feedback
2022-03-17 22:38:20 +03:00
Anton Romanov
73994d7c70 Fixed coverity in benchmark app (#11013) 2022-03-17 18:26:59 +03:00
Alina Kladieva
aedcf2cb9f Skip hanging Myriad canRun3AsyncRequestsConsistentlyFromThreads (#11011)
* Skip Myriad canRun3AsyncRequestsConsistentlyFromThreads

* Update skip_tests_config.cpp

Fix typo
2022-03-17 17:52:25 +03:00
Mikhail Nosov
4b3dd808df [First Inference] Read time improvements via using 'mmap/munmap' (#10907)
* Performance improvement for constant creation

The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially checks the whole buffer, which is O(N) complexity,
while it is needed only if the client uses 'get_all_data_elements_bitwise_identical'.

Solution:
- Defer calculation until first call of 'get_all_data_elements_bitwise_identical'
- Store calculated value in mutable class member to reuse it on next calls of 'get_all_data_elements_bitwise_identical'

The test verifies both cases:
a) that constant creation with shared memory data (now O(1)) is significantly faster than creation+bitwiseCheck O(N)
b) that once calculated, the value is taken from the cache, which is significantly faster than re-calculation

* fix clang-format

* Stash - Linux implementation

* Windows mmap implementation + unicode

* Clang for windows

* removed debug print

* Add handling of empty bin file

* fix windows includes

* Fix python test

* Unit tests
Fix for Constant with size > 4GB

* Fix review comments
2022-03-17 17:16:06 +03:00
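
A generic sketch of the deferral pattern described in the constant-creation notes above: an O(N) property check is computed lazily on first use and cached in a mutable member. The class and field names are illustrative and are not the actual ov::op::v0::Constant internals.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

class ConstantLike {
public:
    explicit ConstantLike(std::vector<float> data) : m_data(std::move(data)) {}

    // First call performs the O(N) scan; later calls return the cached result.
    bool get_all_data_elements_bitwise_identical() const {
        if (!m_identical_computed) {
            m_all_identical = compute_identical();
            m_identical_computed = true;
        }
        return m_all_identical;
    }

private:
    bool compute_identical() const {
        for (std::size_t i = 1; i < m_data.size(); ++i)
            if (m_data[i] != m_data[0])
                return false;
        return true;
    }

    std::vector<float> m_data;
    mutable bool m_identical_computed = false;
    mutable bool m_all_identical = false;
};
```
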
Indira Salyahova
3a8fd7135e [POT] Method change to get bias shape in bc and fbc algorithms (#10463)
* refactoring: get bias shape in bc and fbc algorithms

* use scipy to take most frequent shape

* pylint

* update reference

* pylint

* Update test_sanity.py

* update test_sanity.py

* Update test_sanity.py
2022-03-17 16:46:28 +03:00
Sergey Lyubimtsev
412f2190d1 Update for get started samples (#10975)
* Update for get started samples

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* formatting

* rewording

* fix links

* fix formatting

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* replace squeezenet1.1 with googlenet-v1

* GoogleNet v1 Caffe* model

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-03-17 16:01:42 +03:00
Vladimir Zinoviev
f12ab5c182 [LPT] Tests on reshape with channel dim reducing (#10999) 2022-03-17 16:00:45 +03:00
Mikhail Nosov
857a1ad2af Caching snippets try/catch to make coverity happy (#11005) 2022-03-17 09:58:18 +00:00
Artyom Anokhov
2b4ee3d937 Fix Deployment Manager configs for MacOS and Win-HDDL target (master) (#11000)
* DM configs: Updated path for MacOS. Removed MovidiusDriver for HDDL target for Windows

* DM config MacOS: Updated name for libov_runtime
2022-03-17 12:45:07 +03:00
Andrey Noskov
0091d52c78 [GNA] Added SW_FP32 mode w/o SF for BasicLSTM (#10115)
* [GNA] Added SW_FP32 mode w/o SF for BasicLSTM

* deleted additional test
 added sw_fp32 mode for existing test
 changed reference output for new mode

* [GNA] Fixed according to review

* [GNA] Parametrized weights range

* fixed after review

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2022-03-17 11:32:46 +03:00
Anton Romanov
bfa0e3e1a4 Fix samples tests on azure (#10892)
* Fix samples tests on azure

* fixed bmp reader
2022-03-17 11:17:25 +03:00
Dawid Kożykowski
cf80156fcc Expose Output object bindings to Python (#10743) 2022-03-16 18:31:38 +01:00
Dawid Kożykowski
6f64de4c27 DynamicQuantizeLinear op support (#10565) 2022-03-16 18:30:15 +01:00
Karol Blaszczak
33d90c5c77 [DOCS] update HETERO execution (#10758)
* [DOCS] update HETERO execution

a basic language review

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-16 18:37:30 +03:00
Yegor Kruglov
7cfd8698ce [Master] Cascade RCNN res101 document model support (#10902)
* cascade rcnn model support

* fix typo

* specify model directory

* comments resolving
2022-03-16 18:16:28 +03:00
Vladimir Zinoviev
25e1f6ac97 [CODEOWNERS] LPT folder owner change to LPT team (#10993) 2022-03-16 17:38:29 +03:00
Vladislav Volkov
ed8c9d6f9a CPU Plugin refactoring: class names (#10639) 2022-03-16 17:16:29 +03:00
Yegor Kruglov
f7b2e3a8ca SoftSign-9 specification (#10690)
* SoftSign-9 specification

* opset9 update

* fix formula

* comments resolving
2022-03-16 14:58:26 +03:00
Vladimir Gavrilov
f7875da083 nGraph shell of operations RDFT and IRDFT (#10353)
* Written header files for the nGraph operations RDFT and IRDFT.

* Written nGraph shell for the operation RDFT.

* Added missed include.

* Added RDFT to opset9 table.

* Code style fixes.

* Written the nGraph shell of the operation IRDFT.

* Added IRDFT to opset9 table.

* Started to write shape infer tests for RDFT.

* Refactoring: shape infer functions of RDFT and IRDFT moved into separate files.

* Written shape infer tests for RDFT.

* Written shape infer tests for IRDFT operation.

* Fixed code style.

* Fixes in the shape infer function of RDFT.

* Fixes in the shape infer function of RDFT.

* Fixes in the shape infer function of IRDFT.

* Deleted redundant includes in include/ngraph/op/irdft.hpp and include/ngraph/op/rdft.hpp

* Deleted redundant includes in include/openvino/op/rdft.hpp and include/openvino/op/irdft.hpp.

* Deleted redundant includes in cpp-files of nGraph shells of operations IRDFT and RDFT.

* Code style fixes.

* Shape inference functions of operations RDFT and IRDFT moved to the namespace ov::op::util.

* Deleted RDFT and IRDFT from docs/template_plugin/backend/opset_int_tbl.hpp.

* Deleted 'using namespace ngraph' from cpp-files of nGraph shells of operations RDFT and IRDFT.

* Fixed typos.

* Merged some loops in shape inference functions of RDFT and IRDFT.

* Written visitor tests for RDFT and IRDFT.

* Small change.

* Common part of RDFT and IRDFT shape validation moved into the separate file.

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-16 14:54:32 +03:00
Nadezhda Ageeva
097006d97a [GNA] Update documentation (cherry-pick from release) (#10974)
* [GNA] Update documentation (release) (#10873)

* parent 5f755d5e4a
author Nadezhda Ageeva <nadezhda.ageeva@intel.com> 1646919359 +0300
committer Nadezhda Ageeva <nadezhda.ageeva@intel.com> 1647270928 +0300

[GNA] Update documentation (release)

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Apply comments

Move snippets to separate file

Add notes about POT and 2d convolutions

* Add links to GNA setup

* cleanup after rebase

* [GNA] small docs fixes (#10959)

* [GNA] small docs fixes

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
2022-03-16 14:37:55 +03:00
Ilya Znamenskiy
848a824260 [GPU] Removed unneeded data from the code (#10987) 2022-03-16 14:22:54 +03:00
Chenhu Wang
3bb83b4ddd [CPU] Fixed undefined values in MVN and Interpolate nodes (#10829) 2022-03-16 14:11:07 +03:00
Alexandra Sidorova
68452e97e2 [CPU] Fixed default ellipsis in StridedSlice (#10713) 2022-03-16 14:09:42 +03:00
Chen Xu
49fb48e744 [CPU] Topk node extend to support horizontal sort for all layouts for top_k == 1 (#10700) 2022-03-16 14:09:14 +03:00
Luo Cheng
4d8adabcaa [CPU] Fusing in non-0 output port exception (#10837) 2022-03-16 14:08:28 +03:00
Alexander Zhogov
8a813d0da9 Update windows.yml
Azure CI: Enable IB again
2022-03-16 13:05:36 +03:00
Mikhail Nosov
7cea7dd4e6 Docs: model caching page update according to OpenVINO API 2.0 (#10981) 2022-03-16 12:22:33 +03:00
Ilya Znamenskiy
2687f6fb2e [GPU] Enabled IFM leftovers inside fully_connected_imad kernel (#10912) 2022-03-16 12:02:36 +03:00
Alexander Zhogov
06f55bd8e8 Azure CI: Disable IB 2022-03-16 08:57:45 +03:00
Yuan Hu
50adb2240c [AUTOPLUGIN] don't check dynamic shape when there is only one device (#10868)
* don't check dynamic shape when there is only one device

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove redundant if

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-16 09:56:23 +08:00
guozhong wang
8c9c592fcf Guozhong/add hint cumulative throughput (#10727)
* mod docs/_static/images/dataset.png and docs/_static/images/inputs.png

* add new hint cumulative_throughput

* clang format properties.hpp

* add set properties and get properties test case for CUMULATIVE_THROUGHPUT

* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png

* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png

* reset dataset.png and inputs.png

* reset dataset.png and inputs.png

* remove test value cumulative_throughput from gpuplugin and cpuplugin testcase

* rollback dataset.png and inputs.png to 41818a377
2022-03-16 09:56:05 +08:00
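
A minimal sketch of compiling a model with the new cumulative-throughput hint introduced above; the device, model path, and enum spelling are taken from the PR title and commit list and may differ slightly from the final API.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto compiled = core.compile_model(
        "model.xml",
        "AUTO",
        ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
    return 0;
}
```
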
Yuan Hu
4f6dcc1d32 [AUTOPLUGIN] add fps log (#10662)
* add fps log

add format '%lf' for log
add INFO_RUN and DEBUG_RUN; the code only runs when the log level is greater than the specified level
add fps log for device
print device config info with DEBUG_RUN
add mock test for DEBUG_RUN and INFO_RUN

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use n / (end - start) instead of (n-1) / ((n-th start) - (1st start))

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-16 09:55:41 +08:00
Andrey Zaytsev
ea8f1d0344 Cherry-picked Changes to the OpenVINO 2.0 Transition Guide (#10936) (#10978) 2022-03-15 21:09:33 +00:00
Nadezhda Ageeva
3c721b0f03 [nGraph] [MO] Add ngraph transformations for PRelu fusing (#10209)
* MO: add PRelu fusing pattern with Sub

* PRelu fusing

* Apply review comments

* Code style
2022-03-15 18:57:59 +03:00
Mikhail Nosov
93722fe101 Docs. Fix link in layout overview (#10968) 2022-03-15 15:35:02 +00:00
Sofya Balandina
499ffcaa59 [conformance] SetUp timeout per test (#10426) 2022-03-15 18:28:19 +03:00
azhogov
37e05afd12 GitHub org control: all ignored accounts are showed now 2022-03-15 17:27:08 +03:00
Nikita Malinin
e103af056e [POT] References & golds update (#10937)
* Update references

* Update golds & add stats dumping

* Statistics_data upd

* Enable densenet in nightly

* Pylint fixes

* Update try-except WA

* Update simplified gold
2022-03-15 16:55:48 +03:00
Vladimir Paramuzov
4e8e56e887 [GPU] Removed unused variable to fix android build (#10961) 2022-03-15 16:22:13 +03:00
Bartek Szmelczynski
aed3b59796 [DOCS] Python snippets for Query device page (#10789)
* create python snippets

* remove redundant space
2022-03-15 16:11:32 +03:00
Taylor Yeonbok Lee
13b6a3d86e [GPU] Fix batchability check of MAX_BATCH_SIZE (#10660)
* Fix batchability check of MAX_BATCH_SIZE

* Applied review comment
2022-03-15 21:31:03 +09:00
Vitaliy Urusovskij
2dbd60c1ae Mark get_type_info_static() op class methods as hidden (#10691)
* Mark `get_type_info_static()` as hidden

Each plugin linked with the openvino library contains `type_info_static` symbols. When one of the libraries is unloaded and the app tries to get an opset, it leads to a segfault. So mark `get_type_info_static()` as hidden to use only the single implementation from the openvino library

* Fix "'visibility' attribute ignored" issue by moving `TestPass` out of test scope

* Fix clang format

* Small update of `If` op

* Revert "fix 79520 (#10449)" to correctly compare DiscreteTypeInfo via `==`

This reverts commit 29883a152a.
2022-03-15 14:59:13 +03:00
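
A rough illustration of the visibility change described above: the static type-info accessor is given hidden visibility so plugins do not export duplicate copies of the symbol. The attribute spelling is GCC/Clang-specific, and the classes below are simplified stand-ins rather than the real DiscreteTypeInfo machinery.

```cpp
#if defined(__GNUC__)
#    define LOCAL_SYMBOL __attribute__((visibility("hidden")))
#else
#    define LOCAL_SYMBOL
#endif

// Simplified stand-in for ov::DiscreteTypeInfo.
struct TypeInfoLike {
    const char* name;
    const char* version_id;
};

class MyOp {
public:
    // Hidden visibility keeps this definition local to the library that builds it,
    // so it is not resolved across plugin boundaries (per the PR description above).
    LOCAL_SYMBOL static const TypeInfoLike& get_type_info_static() {
        static const TypeInfoLike info{"MyOp", "example_opset"};
        return info;
    }
};
```
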
Vladislav Golubev
060a149322 [nGraph] get_constant_from_source fix (#10787) 2022-03-15 14:50:21 +03:00
Jan Iwaszkiewicz
660c6a3e84 [DOCS] Python Exclusives overview (#10946)
* Add python docs

* Small fix

* Apply comments

* Fix style
2022-03-15 14:26:12 +03:00
Tomasz Dołbniak
29144d3a6b ONNX Expand extended support (#10833) 2022-03-15 12:21:30 +01:00
Bartek Szmelczynski
840e622da5 add snippets for automatic batching (#10910)
* add snippets for automatic batching

* Update docs/snippets/ov_auto_batching.py

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

* add missing bracket

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
2022-03-15 14:17:20 +03:00
Sergey Lyubimtsev
5f27c74d96 Add description for zsh: no matches found : openvino-dev[...] issue. (#10950)
* Add description for `zsh: no matches found : openvino-dev[...]` issue.

* spell check
2022-03-15 13:37:39 +03:00
Ilya Znamenskiy
fc5356cd3b [GPU] Spelling fixes (#10952) 2022-03-15 12:25:55 +03:00
Yuan Xu
e341cdf541 Add Python version for Docker installation (#10840)
* update support matrix with Python version

* fix formatting

* update formatting

* fix formatting

* fix formatting
2022-03-15 12:06:48 +03:00
Yuan Xu
e71a05d94d Minor updates to Linux/macOS/Windows installation guide (#10955)
* Add Overview page

* Revert "Add Overview page"

* CVS-71745

* fix links & errors

* fix formatting

* update bullet points
2022-03-15 12:06:30 +03:00
Mateusz Tabaka
c58ef365b5 Fuse FQ->Mul also if first FQ input can be constantfolded (#10712)
The change fixes FQ fusions for subgraphs like 'Const weights'->FQ->Transpose->Multiply.
After the PullTransposeThroughFQUp transformation, we end up with the following:
'Const weights'->Transpose->FQ->Multiply. Because of the Transpose on the first
FakeQuantize input, Multiply could not be fused, since FakeQuantizeMulFusion
expected the weights to be a Constant node.

Ticket: 77785
2022-03-15 09:54:41 +01:00
Nikita Semaev
758d12b9cb The correct namespace LayerTestsDefinitions (#10821)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-15 11:34:33 +03:00
Vladimir Zinoviev
4a2d0f39dd [LPT] Turn back checks in reshape transformation when subtract is absent (#10939) 2022-03-15 11:34:12 +03:00
Mikhail Nosov
ef5ad90dd7 [Read time improvement] Avoid calling of 'are_all_data_elements_bitwise_identical()' during Constant creation (#10858)
* Performance improvement for constant creation

The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially checks the whole buffer, which is O(N) complexity,
while it is needed only if the client uses 'get_all_data_elements_bitwise_identical'.

Solution:
- Defer calculation until first call of 'get_all_data_elements_bitwise_identical'
- Store calculated value in mutable class member to reuse it on next calls of 'get_all_data_elements_bitwise_identical'

The test verifies both cases:
a) that constant creation with shared memory data (now O(1)) is significantly faster than creation+bitwiseCheck O(N)
b) that once calculated, the value is taken from the cache, which is significantly faster than re-calculation

* fix clang-format

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-15 10:03:47 +03:00
Mingyu Kim
0a8f97f429 [GPU] Friendly exception msg for alloc failure (#10953) 2022-03-15 15:48:31 +09:00
Steve Yoo
a9ff10b365 Add SLT to Template Plugin: ROIAling-3 (#10625) 2022-03-15 07:31:27 +03:00
Mingyu Kim
0b556fd7a9 [GPU] More friendly exception message (#10721)
* [GPU] More friendly exception message

* Apply Ilya's comment

Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>

Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>
2022-03-15 09:57:22 +09:00
Karol Blaszczak
3b2b055bfd repair 2 png files (#10949) 2022-03-14 22:03:09 +03:00
Ilya Churaev
ad1c4a24c3 Deprecate version inside DiscreteTypeInfo (#10781)
* Deprecate version inside DiscreteTypeInfo

* Fixed code style

* Fixed openvino for macOS

* Fixed build for macOS

* Fixed errors for Windows build
2022-03-14 21:18:00 +03:00
Ryan Loney
575b2fad73 Update Binder URL on the tutorials landing page (#10877)
* Update Binder URL on the tutorials landing page

Binder URL was linking to a file. It should go to an actual Binder tutorial.

(Replaces https://github.com/openvinotoolkit/openvino/pull/10747)

* binder logo

* fixes

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-03-14 21:11:01 +03:00
Ilya Churaev
0d404633a9 Add OpenVINO as required component and remove clang-format from example (#10944) 2022-03-14 20:05:24 +03:00
Mateusz Tabaka
23eaa80325 Don't memset Constant's buffer if it's about to be filled with data (#10861)
* Don't memset Constant's buffer if it's about to be filled with data

* dont memset buffer in visit_attributes
2022-03-14 19:36:40 +03:00
Irina Efode
3aa525c003 SubgraphDumper: clone only not constant graph (#10867)
* SubgraphDumper: clone only not constant graph

* Apply comments
2022-03-14 18:49:11 +03:00
Maxim Vafin
4e27d936b5 Update Convert_YOLACT.md (#10942) 2022-03-14 15:21:43 +00:00
Mikhail Nosov
72fe6082ea [Preprocess] InputTensorInfo::set_from implementation (#10839)
* InputTensorInfo::from implementation

If the user's application already has an `ov::runtime::Tensor` object created,
it will be possible to reuse the input's basic characteristics (shape, precision) from the tensor using the InputTensorInfo::from method

* Rename 'from' to 'set_from' as in Python the 'from' keyword is used for importing modules
Python bindings: from ov.Tensor and from numpy array

* Style fix (quotes)

* Apply suggestions from code review

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fix code style

* Use set_from in hello_classification CPP sample

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2022-03-14 18:02:51 +03:00
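
A short sketch of the InputTensorInfo::set_from usage added above: the preprocessing input declaration reuses the shape and element type of a tensor the application already owns. The model path and tensor dimensions are examples, and the exact parameter type of set_from is assumed from the PR description.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // A tensor the application already has, e.g. wrapping a camera frame.
    ov::Tensor user_tensor(ov::element::u8, ov::Shape{1, 480, 640, 3});

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_from(user_tensor);  // reuse shape and precision from the tensor
    ppp.input().preprocess().convert_element_type(ov::element::f32);
    model = ppp.build();
    return 0;
}
```
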
Vladimir Paramuzov
4c4581940a [GPU] Use int64_t type for axis in concat (#9790) 2022-03-14 18:02:21 +03:00
Anastasia Kuporosova
d0b4cae2f8 [Python API] move util under utils (#10923)
* [Python API] move util under utils

* fix importing

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-14 14:40:28 +03:00
Mikhail Nosov
96f954c704 [Preprocessing] Crop preprocessing support (#10805)
* Crop preprocessing support

Note: instead of 'ov::Coordinate', a simple std::vector<int> is used because Coordinate doesn't support negative dimensions

Added unit tests, template reference tests, cpu and gpu tests

* Added python bindings
Fix review comments

* Fixed python code style

* Fix thresholds

* Fix python style

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-14 13:32:27 +03:00
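
An illustrative sketch of the crop pre-processing step added above; per the PR note, the begin/end coordinates are plain std::vector<int> values (with negative indices counting from the end of a dimension). The input shape and crop window are examples only.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_shape({1, 3, 480, 640});
    // Crop a 224x224 window starting at row 128, column 208 of the input tensor.
    ppp.input().preprocess().crop({0, 0, 128, 208}, {1, 3, 352, 432});
    model = ppp.build();
    return 0;
}
```
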
Roman Kazantsev
6ec8e53183 Update Model Optimizer User Guide (#10759)
* Remove install prerequisites steps, order FWs, and move pre-processing details

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Introduction: examples of MO CLIs, references to parameters description pages

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Setting Input Shape section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Optimizing Preprocessing Computation page

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Revert location of Additional_Optimizations.md

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Describe layout and FP16 support in MO

* Fix docs issue

* Apply feedback

* Apply review feedback

* Clean-up Resources

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Mention FP16 compression in MO Introduction

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the first portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the second portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply review feedback

* Apply review feedback

* Apply the third portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback for FP16 compression documentation

* Apply review for FP16 page

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Address feedback about tutorials, input_shape option

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Rework Setting Input Shapes section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update "See also" list

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Correct conversion documents for each FW

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Refactor TensorFlow converting document and expand Embedding Preprocessing document

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix a link to POT

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2022-03-14 13:08:58 +03:00
Mikhail Nosov
43cb3920fb Docs: preprocessing use case with saving model to IR (#10698)
* Docs: added preprocessing use case with saving resulting model to IR

* Enable Model Caching to 'application code' section

* Fix review comments
2022-03-14 12:12:20 +03:00
Alexey Suhov
ef00057c8e Change product version to 2022.2.0 (#10911)
* Change product version to 2022.2.0

* change OPENVINO_VERSION_MINOR
2022-03-14 11:42:03 +03:00
yanlan song
a6583965a5 try avoid timeout in batch plugin during transition in auto plugin (#10753)
* initial debug

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* remove debug msg

Signed-off-by: fishbell <bell.song@intel.com>
2022-03-14 11:05:26 +03:00
Ilya Churaev
0bc6196d96 Migrate to new RTTI for all transformations and graph structures (#10703)
* Migrate to new RTTI for all transformations and graph structures

* Fixed code style
2022-03-14 06:57:21 +03:00
Maxim Gordeev
4be227f5a1 [IE Samples] Fixed rights for file with image (#10924) 2022-03-12 01:51:34 +03:00
Alexander Zhogov
45d34e7885 CODEOWNERS: Fix 3d party dependencies 2022-03-11 21:43:09 +03:00
Dawid Kożykowski
3b5f3d1957 Snippets for preprocessing migration page (#10899)
* add placeholder for python version of first snippet

* fix problem with placeholder

* fix wrong file name

* fix fragment name

* update python snippets

* move imports to the top of the code fragments
2022-03-11 21:19:07 +03:00
Przemyslaw Wysocki
de3088adce [Docs] Add Python snippets for configure devices (#10913)
* Add Python docs for configure devices

* Bugfixes

* Minor changes

* Minor changes

* Format changes

* Minor changes
2022-03-11 21:17:22 +03:00
Anastasia Kuporosova
23604ca28c [Python API] Update doc style (#10708)
* [Python API] Update doc style

* apply comments

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-11 19:18:20 +03:00
Irina Efode
7790f2036f [IE TESTS][CONFORMANCE] Fix for IRs with dynamic shapes (#10880) 2022-03-11 19:17:53 +03:00
Vladimir Zinoviev
eebe8c70f9 [LPT] Fix out of bounds access in reshape (#10791) 2022-03-11 18:04:14 +03:00
Mikhail Nosov
86322c916b Fix loading time issues for POT models (with lots of results) (#10898)
* Fix loading time issues for POT models (with lots of results)

* Same for 'optimized_strided_slice'
2022-03-11 17:44:36 +03:00
Andrey Noskov
6fdd983750 [GNA] Added multi crop test (#10459) 2022-03-11 15:05:14 +03:00
Andrey Noskov
caaacb2db4 [GNA] Moved single Lstm-cell test from deprecated tests (#10472)
* [GNA] Single lstm-cell test added

* Added additional config for test

* one more input and hidden shape

* Added cell with ReLU
Deleted deprecated test

* test added as lstm_cell_basic

* Enabled gna_compact_mode

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>

* enabled compact_mode in all tests

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2022-03-11 15:03:16 +03:00
Ilya Churaev
d93ce1e246 Added intro to transformation guide (#10894) 2022-03-11 14:27:11 +03:00
Vladimir Dudnik
f48b233629 update omz intel models, fix docs (#10843) 2022-03-11 12:34:55 +03:00
Vladislav Volkov
9d74f5cd76 Export/import fixed for param->result and const->result models (#10838)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-11 11:10:56 +03:00
Nikolay Tyukaev
2940db0fb1 benchmark legal, snippet margin bottom (#10886) 2022-03-11 11:10:11 +03:00
Sergey Lyubimtsev
dd076264eb add pre-release description for wheels packages (2) (#10813)
* add pre-release description for wheels packages

* refactoring

* lines

* Revert "lines"

This reverts commit 01a74dc168.

* linters

* linters

* nightly revision of docs URL
2022-03-11 11:09:17 +03:00
Sergey Lyubimtsev
0dc2ab182b Update APT instructions according to repository configuration (#10869) 2022-03-11 10:45:31 +03:00
Alexey Lebedev
97efdb5020 [docs] python snippet for dynamic shapes (#10762)
* Create snipp

* link python snipp with doc

* fix docs

* Apply suggestions from code review

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Fix cpp comments

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-03-11 08:42:33 +03:00
Elizaveta Lobanova
4e0a740eb3 [GNA] Support of overload correction for MatMul with 2 non-constant layers (#10447) 2022-03-10 15:16:17 +03:00
Vladimir Paramuzov
09246e2db8 [GPU] GPU plugin docs (#10734) 2022-03-10 15:01:52 +03:00
Anton Pankratov
a8a2640fb7 Added callback and wait migration guide (#10775)
* Added callback and wait migration guide

* Added start async

* Simplified wait

* Added selector for sync async

* fixed doc

* fixed build

* fixed doc

* fixed doc
2022-03-10 14:00:42 +03:00
Irina Efode
5566b67238 Frontend support in Subgraph dumper (#10765)
* Init

* Enable frontends

* Update read_ir_compare_with_refs.cpp

* Remove extra line

* Update CMakeLists.txt
2022-03-10 13:34:47 +03:00
Nikita Malinin
4746d0881b [POT] Update BC with the Parameter nodes connection (#10848)
* Update BC with the Parameter nodes connection

* Update test_sanity with octave
2022-03-10 10:28:47 +03:00
Tatiana Savina
d7372d678c [DOCS] fixes for nightly (#10842)
* fixes for nightly

* modify xfile

* change launcher ref
2022-03-10 09:10:54 +03:00
Katarzyna Mitrus
531fa9018d [DOCS] Python snippets for Hetero execution page (#10769)
* Update docs ov hetero snippets

* Add missing space

* Update precision hint

* Update hetero docs snippets with GPU profiling
2022-03-09 19:34:42 +03:00
Karol Blaszczak
44ec4661a4 Update Auto plugin docs (#10623)
* Update Auto plugin docs

Revise auto plugin and auto plugin debugging articles. Include necessary image files.

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update AutoPlugin_Debugging.md

* include review corrections

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-03-09 18:09:37 +03:00
Serhii Pavlovskyi
948347f3dd ncc build fixes (#10367)
* fix .ncc_style target names

it was breaking configure on system with libclang-12-dev, clang-12,
ninja and cmake 3.17+(ninja complains about duplicate
target). with lower cmake version configure succeeds, but build exits
immediately with error. by replacing ninja with make error becomes
warning(it's still significant, make just skips duplicate rules, i.e.
doesn't check style of some source files, rule duplication is genuine
bug). without libclang-12-dev and clang-12 ENABLE_NCC_STYLE is OFF and
bug is not triggered

* silence uninitialized warning in core_integration

It was probably always initialized before use, but the compiler could not
prove it.

* fix function spelling to unbreak code style checks in benchmark_app

* include <thread> for std::this_thread

the existing code relied on namespace pollution by older libstdc++ versions

* replace is_pod with is_standard_layout && is_trivial

std::is_pod is deprecated and breaks the build with current GCC (see the sketch after this entry)

Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-09 13:42:06 +03:00
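Two of the fixes above are plain standard-C++ changes; a minimal sketch of both, where the struct T and the helper name are stand-ins for the real code:

#include <chrono>
#include <thread>       // needed explicitly for std::this_thread; older libstdc++ pulled it in transitively
#include <type_traits>

struct T { int x; };    // stand-in for the actual type being checked

// std::is_pod is deprecated (C++20); the equivalent check is the pair of traits below.
static_assert(std::is_standard_layout<T>::value && std::is_trivial<T>::value,
              "T must remain trivial and standard-layout");

void pause_briefly() {  // hypothetical helper, shows the std::this_thread usage
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}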
Vladimir Dudnik
d9976332b0 upd open-model-zoo, upd docs, upd ac cfgs (#10676) 2022-03-09 11:48:47 +03:00
Ilya Churaev
702f8cf223 Fixed duplicated words (#10827) 2022-03-09 08:06:12 +00:00
Taylor Yeonbok Lee
3e7e0d5651 [DRYRUN] Fix dryrun in partial build (#10761)
When a partial build is requested for dryrun, run constant propagation as well.
In the normal case, a partial build skips constant propagation to save build time for the internal program.
However, when a partial build is requested with dryrun, it fails at transfer_constants because of generic nodes that have no implementation.
2022-03-07 13:37:21 +09:00
Tatiana Savina
de47a3b4a4 POT documentation updates (#10578)
* POT changes

* change install

* change img size

* remove cli option
2022-03-06 09:14:39 +03:00
Nikita Malinin
41818a377f [POT] Update IEEngine with the Dynamic model support (#10717)
* Update IEEngine with the Dynamic models support

* Update with the batch

* Method naming fix

* Update image_loader & tests with dynamic models

* Update test_sanity.py

* Replace custom_mo_config from the model
2022-03-05 15:49:21 +03:00
Egor Duplensky
3b8e960b10 [CPU] Avoid using cache for constant inplace or multi-child edges (#10573) 2022-03-05 14:37:50 +03:00
Tatiana Savina
3b8ca9f0af [DOCS] Fixes for nightly (#10806)
* add img

* wb img for input

* dataset added

* add img

* wb img for input

* dataset added

* ov_fix

* more imgs

* new img

* new img

* nlp

* new img

* delete img
2022-03-05 13:03:46 +03:00
Maksim Kutakov
e87ea5d611 [CPU] Use raw pointer to share peer data for constants (#10744) 2022-03-05 12:32:11 +03:00
Andrey Zaytsev
0f8c599ce7 Re-structure Model Optimizer User Guide and Clean-up (#10801)
* Modified the workflow diagram

* Moved supported topology lists to separate topics

* Additional changes

* Removed Supported Topologies list and Deprecated pages

* Created the Model Conversion Tutorials section for instructions for specific models

* Topic names alignment, removed Default_Model_Optimizer_Optimizations.md

* Additional structural changes

* Fixed links

* heading fixes
2022-03-05 12:31:15 +03:00
Roman Kazantsev
0c20e7a3ca [MO] Remove IR frontend from available frontend list in MO (#10798)
* [MO] Remove IR frontend from available frontend list in MO

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue - forgot to pass FEM

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue for TF with new FE and default legacy

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-03-04 20:50:02 +03:00
Yuan Xu
3b24ed032a Yuan install guide 22/1 (#10786)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments
2022-03-04 19:32:10 +03:00
Ilya Churaev
cb9049076b Enabled clang-format for cc and itt libs (#10793) 2022-03-04 18:40:18 +03:00
Dmitry Pigasin
c28cebb2a6 [CPP Speech Sample] Fix result saving when batch size is not 1 (#10714)
* Fix result saving when batch size is not 1

* Remove useless if statement

* improved processing of scores for models with more than one output

* added a check on the number of model outputs

* improved if statements

* moved the fix for models with several outputs to a separate PR

Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>
2022-03-04 15:41:47 +03:00
Anuj Mittal
7e8bbf4968 installing-openvino-yocto.md: fix install instructions (#10785)
Change _ to : as per the new Yocto override syntax (for example, IMAGE_INSTALL_append becomes IMAGE_INSTALL:append).

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
2022-03-04 15:41:37 +03:00
Nikita Malinin
69ad9e80e1 [POT] Update OverflowCorrection algo for nodes without bias (#10687)
* Update OverflowCorrection algo for nodes without bias

* Pylint line fix

* Update OC with the last add name

* Pylint fix
2022-03-04 14:50:44 +03:00
Irina Efode
32edd596e3 [IE TESTS] Functional test review: Part 4 (#10772)
* [IE TESTS] Move specific import_export_tests to gna and myriad

* add
2022-03-04 14:42:16 +03:00
Ilya Churaev
ed702910bd Enable clang for transformations (#10778)
* Enable clang for transformations

* Fixed code style

* Fixed build

* Fixed macOS
2022-03-04 13:38:42 +03:00
Irina Efode
082ebbcbf8 [IE TESTS] Remove NgraphConversionTests (#10770) 2022-03-04 09:52:58 +00:00
Fedor Zharinov
043a773f61 [Benchmark_app]Check all I/O names (#10745)
* Check all I/O names

* stylefix
2022-03-04 09:49:03 +03:00
hyunback kim
5cee51e9c4 [GPU] update to check quantize fusing condition in oneDNN (#10680)
* [GPU] update the condition for minimize_local_reorders

* Update to check the needs-reorder condition in quantize.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-04 14:30:07 +09:00
yanlan song
8a2252b774 fix multi infer result corrupt issue (#10704)
* do not share blob

Signed-off-by: fishbell <bell.song@intel.com>

* build error

Signed-off-by: fishbell <bell.song@intel.com>

* remove comment codes

Signed-off-by: fishbell <bell.song@intel.com>
2022-03-04 08:13:12 +03:00
Mateusz Bencer
fd18632d89 Update --extensions MO doc (#10763) 2022-03-04 07:24:52 +03:00
Wang, Yang
78c9f5b0a2 Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0. (#10505)
* Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add common test case for config check.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Use the implemented property test case.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-03-04 10:04:48 +08:00
Alexander Kozlov
1bbd92a8f8 Revised Tuning For Performance and Model optimization docs (#10276)
* Revised Tuning for performance and Model optimization docs

* Fixed links

* Fixed link

* Applied comments

* Fixed one more comment
2022-03-03 18:58:58 +03:00
Ilya Churaev
554b50eb85 Remove redundant calls from set_argument (#10701)
* Remove redundant calls from set_argument

* Fixed tests
2022-03-03 18:01:59 +03:00
Vladimir Gavrilov
f8ce57319b Specifications of operations RDFT and IRDFT (#10242)
* Written the draft of the specification of the operation RFFT.

* Started to write the specification of the operation IRFFT.

* Small fix.

* Renamed RFFT operation as RDFT.

* Fix in Operations_specifications.md.

* Written the specification of the operation IRDFT.

* Fixes in examples.

* Fixes in opset9.md and Operations_specifications.md.

* Small fix.

* Replaced opset8 by opset9 in opset9.md.

* Deleted redundant sentences.

* Small fix.

* Replaced input_shape by data_shape.

* Fixed typos.

* Fixed more typos.

* Fixed typo.

* Fixed the RDFT specification so that the signal_size input behaves as in TF and PyTorch.

* Fixes in examples for RDFT.

* Fixed the output shape calculation of IRDFT; it now matches TF and PyTorch.
2022-03-03 13:47:23 +00:00
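For reference, the shape convention the spec aligns with TF and PyTorch on: a real-to-complex DFT over an axis of length n (or over signal_size, when it is provided) yields floor(n/2) + 1 complex bins, and IRDFT inverts that. A small illustration, not taken from the spec itself; the function name is hypothetical:

#include <cstddef>

// Length of the last transformed axis after RDFT, matching the rfft convention
// in TF and PyTorch: n real samples -> n/2 + 1 complex bins.
inline std::size_t rdft_axis_output_len(std::size_t n) {
    return n / 2 + 1;
}

// Example: rdft_axis_output_len(10) == 6, rdft_axis_output_len(9) == 5.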
Maxim Gordeev
f81f819ecd [IE Samples] Improved processing outputs for model with more than one output (#10737)
* Improved processing outputs for model with more than one output

* fixed condition

* added checking count of output/reference files
2022-03-03 16:35:41 +03:00
3305 changed files with 106149 additions and 48301 deletions

View File

@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
jobs:
- job: android_arm64

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: Lin
@@ -109,7 +111,8 @@ jobs:
set -e
$(REPO_DIR)/install_build_dependencies.sh
# Move jdk into contrib
sudo apt --assume-yes install openjdk-11-jdk
# 'clang' compiler is to check that samples can be built using it
sudo apt --assume-yes install openjdk-11-jdk clang
# For opencv-python: python3-setuptools and pip upgrade
python3 -m pip install --upgrade pip
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements.txt
@@ -223,6 +226,14 @@ jobs:
displayName: 'Build cpp samples'
continueOnError: false
- script: |
export CC=clang
export CXX=clang++
$(INSTALL_DIR)/samples/cpp/build_samples.sh -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build cpp samples - clang'
continueOnError: false
- script: $(INSTALL_DIR)/samples/c/build_samples.sh -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build c samples'
@@ -288,6 +299,10 @@ jobs:
displayName: 'VPU UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/XLinkTests --gtest_output=xml:TEST-XLinkTests.xml
displayName: 'XLink Tests'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
continueOnError: false
@@ -336,6 +351,7 @@ jobs:
- script: |
export PATH=$HOME/.local/bin:$PATH
export IE_APP_PATH=$(INSTALL_DIR)/samples_bin
export LD_LIBRARY_PATH=$IE_APP_PATH:$LD_LIBRARY_PATH
export IE_APP_PYTHON_PATH=$(INSTALL_DIR)/samples/python/
export SHARE=$(INSTALL_DIR)/tests/smoke_tests/samples_smoke_tests_data/
export WORKSPACE=$(INSTALL_DIR)

View File

@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
jobs:
- job: linux_arm64
@@ -34,13 +35,13 @@ jobs:
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_OPENCV: $(WORK_DIR)/build_opencv
BUILD_OPENVINO: $(WORK_DIR)/build
BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
BUILD_OPEN_MODEL_ZOO: $(WORK_DIR)/build_open_model_zoo
INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
INSTALL_OPEN_MODEL_ZOO: $(INSTALL_OPENVINO)/extras/open_model_zoo
WORK_DIR: $(Pipeline.Workspace)/_w
@@ -125,20 +126,20 @@ jobs:
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_OPENCV=OFF
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
-DPYTHON_MODULE_EXTENSION=".so"
-DENABLE_TESTS=ON
-DENABLE_FUNCTIONAL_TESTS=ON
-DENABLE_GAPI_TESTS=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DTHREADING=SEQ -DENABLE_LTO=ON
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_SAMPLES=ON
-DBUILD_java_api=OFF
@@ -173,19 +174,19 @@ jobs:
cmakeArgs: >
-GNinja
-DInferenceEngineDeveloperPackage_DIR=$(BUILD_OPENVINO)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=$(INSTALL_PYTHON)/bin/python3.8
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARIES=$(INSTALL_PYTHON)/lib
-DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.8/site-packages/numpy/core/include
-DPYTHON_MODULE_EXTENSION=".so"
-DPYBIND11_FINDPYTHON=OFF
-DPYBIND11_NOPYTHON=OFF
-DPYTHONLIBS_FOUND=TRUE
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
@@ -211,15 +212,15 @@ jobs:
inputs:
cmakeArgs: >
-GNinja
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/local/bin/python3.8
-DPYTHON_INCLUDE_DIR=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DOpenVINO_DIR=$(BUILD_OPENVINO)
-DInferenceEngine_DIR=$(BUILD_OPENVINO)
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-Dngraph_DIR=$(BUILD_OPENVINO)
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPEN_MODEL_ZOO)

View File

@@ -4,6 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
jobs:
- job: Lin

View File

@@ -4,6 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
jobs:
- job: Lin_lohika

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: Mac

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: Win
@@ -30,7 +32,7 @@ jobs:
maxParallel: 2
# About 150% of total time
timeoutInMinutes: 150
timeoutInMinutes: 180
pool:
name: WIN_VMSS_VENV_D8S_WU2
@@ -133,7 +135,7 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_GAPI_PREPROCESSING=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -167,13 +169,6 @@ jobs:
workingDirectory: $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Install Samples Tests'
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'
- script: $(INSTALL_DIR)\samples\cpp\build_samples_msvc.bat -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build cpp samples'
@@ -198,9 +193,15 @@ jobs:
python -m pytest $(INSTALL_DIR)\tests\smoke_tests\ --env_conf $(INSTALL_DIR)\tests\smoke_tests\env_config.yml -s --junitxml=TEST-SamplesSmokeTests.xml
workingDirectory: $(INSTALL_DIR)
displayName: 'Samples Smoke Tests'
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
continueOnError: false
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'
- script: rd /Q /S $(BUILD_DIR)
displayName: 'Clean build dir'
continueOnError: false
@@ -240,6 +241,10 @@ jobs:
displayName: 'VPU UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\XLinkTests --gtest_output=xml:TEST-XLinkTests.xml
displayName: 'XLink Tests'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false

View File

@@ -1,213 +0,0 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
DOCKER_CONTAINER_NAME= "openvino-onnx-ci-container"
DOCKER_IMAGE_TAG = "openvino-onnx-ci-image"
ONNX_MODEL_ZOO_SHA = "d58213534f2a4d1c4b19ba62b3bb5f544353256e"
BACKEND_CONFIGURATIONS = [
[ name: "Release", build_type: "Release" ],
[ name: "Debug", build_type: "Debug" ],
]
// workaround for aborting previous builds on PR update
@NonCPS
def stopPreviousRunningBuilds() {
def jobname = env.JOB_NAME
if (jobname.startsWith("onnx-ci/openvino onnx ci/openvino/PR")){
def buildnum = env.BUILD_NUMBER.toInteger()
def job = Jenkins.instance.getItemByFullName(jobname)
def job_newest = job.builds.first()
for (build in job.builds.reverse()[0..<-1]) {
if (build.isBuilding()){
echo "Stop task = ${build} because newest #${job_newest} is on the way"
build.doStop();
continue;
}
}
}
}
def getGitPrInfo(String project, String workdir) {
def gitPrInfo = [
prAuthorEmail : "",
commitAuthorEmail : "",
commitHash : "",
commitSubject : ""
]
try {
dir ("${workdir}/${project}") {
gitPrInfo.prAuthorEmail = sh (script: 'git log -1 --pretty="format:%ae" ', returnStdout: true).trim()
gitPrInfo.commitAuthorEmail = sh (script: 'git log -1 --pretty="format:%ce" ', returnStdout: true).trim()
gitPrInfo.commitSubject = sh (script: 'git log -1 --pretty="format:%s" ', returnStdout: true).trim()
gitPrInfo.commitHash = sh (script: 'git log -1 --pretty="format:%H" ', returnStdout: true).trim()
}
}
catch(e) {
echo "Failed to retrieve ${project} git repository information!"
echo "ERROR: ${e}"
}
return gitPrInfo
}
def notifyByEmail(def gitPrInfo) {
stage('Notify') {
String notifyPeople = "${gitPrInfo.prAuthorEmail}, ${gitPrInfo.commitAuthorEmail}"
emailext (
subject: "OpenVino CI: PR ${CHANGE_ID} ${currentBuild.result}!",
body: """
Status: ${currentBuild.result}
Pull Request Title: ${CHANGE_TITLE}
Pull Request: ${CHANGE_URL}
Branch: ${CHANGE_BRANCH}
Commit Hash: ${gitPrInfo.commitHash}
Commit Subject: ${gitPrInfo.commitSubject}
Jenkins Build: ${RUN_DISPLAY_URL}
""",
to: "${notifyPeople}"
)
}
}
def gitSubmoduleUpdate(String repository_name, String workdir) {
dir ("${workdir}/${repository_name}") {
sh label: "Init ${repository_name} submodules",
script:
"""
git submodule init && git submodule update \
--init \
--no-fetch \
--recursive
"""
}
}
def prepare_repository(String workdir) {
dir("${workdir}") {
println "Preparing repository in directory: ${workdir}"
checkout scm
gitSubmoduleUpdate(PROJECT_NAME, workdir)
}
}
def updateModels() {
sh """
./src/bindings/python/tests/test_onnx/model_zoo_preprocess.sh -d ${HOME}/ONNX_CI/models_data -o -s ${ONNX_MODEL_ZOO_SHA}
"""
}
def get_docker_container_name(Map configuration){
println "RUN get_docker_container_name for ${configuration.name}"
String docker_container_name = "${DOCKER_CONTAINER_NAME}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}"
return docker_container_name
}
def buildDockerImage(Map configuration, String workdir) {
String docker_image_tag = "${DOCKER_IMAGE_TAG}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}".toLowerCase()
println "docker_image_tag: ${docker_image_tag}"
updateModels()
sh """
docker build --tag=${docker_image_tag} \
--build-arg BUILD_TYPE=${configuration.build_type} \
--file=.ci/openvino-onnx/Dockerfile \
--build-arg http_proxy=${HTTP_PROXY} \
--build-arg https_proxy=${HTTPS_PROXY} .
"""
}
def runTests(Map configuration, String workdir) {
println "Run tests for ${configuration.name}"
String docker_image_tag = "${DOCKER_IMAGE_TAG}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}".toLowerCase()
String docker_container_name = get_docker_container_name(configuration)
// Run only basic unit tests in Debug configuration
if (configuration.build_type == "Debug") {
sh """
docker run --name ${docker_container_name} ${docker_image_tag}
"""
}
// Run unit-tests AND large model tests by default
else {
sh """
docker run --name ${docker_container_name} \
--volume ${HOME}/ONNX_CI/models_data/model_zoo/onnx_model_zoo_${ONNX_MODEL_ZOO_SHA}:/root/.onnx/model_zoo/onnx_model_zoo \
--volume ${HOME}/ONNX_CI/data/model_zoo/MSFT:/root/.onnx/model_zoo/MSFT \
${docker_image_tag} /bin/bash -c "tox && tox -e zoo_models"
"""
}
}
def getConfigurationsMap() {
def configurationsMap = [:]
for (backend in BACKEND_CONFIGURATIONS) {
def configuration = backend.clone()
configurationsMap[configuration.name] = {
stage(configuration.name) { CONFIGURATION_WORKFLOW(configuration) }
}
}
return configurationsMap
}
CONFIGURATION_WORKFLOW = { configuration ->
node("OpenVINO") {
String workdir = "${HOME}/workspace/${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}"
try {
PROJECT_NAME = "openvino"
stage("Clone repository") {
prepare_repository(workdir)
}
stage("Prepare Docker environment") {
dir("${workdir}") {
buildDockerImage(configuration, workdir)
}
}
stage("Run tests") {
timeout(time: 60, unit: 'MINUTES') {
runTests(configuration, workdir)
}
}
}
catch(e) {
// Set result to ABORTED if exception contains exit code of a process interrupted by SIGTERM
if ("$e".contains("143")) {
currentBuild.result = "ABORTED"
} else {
currentBuild.result = "FAILURE"
}
def gitPrInfo = getGitPrInfo(PROJECT_NAME, workdir)
notifyByEmail(gitPrInfo)
}
finally {
stage("Cleanup") {
String docker_container_name = get_docker_container_name(configuration)
sh """
docker rm -f ${docker_container_name}
rm -rf ${workdir}
"""
}
}
}
}
pipeline {
agent none
options {
skipDefaultCheckout true
timeout(activity: true, time: 120, unit: 'MINUTES')
}
stages {
stage('Parallel CI') {
steps {
stopPreviousRunningBuilds()
script {
parallelStagesMap = getConfigurationsMap()
parallel parallelStagesMap
}
}
}
}
}

View File

@@ -1,65 +0,0 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
timeout(30)
{
node(LABEL) {
BUILD_WORKSPACE = "$WORKSPACE/$BUILD_NUMBER"
WATCHDOG_ROOT = "$BUILD_WORKSPACE/.ci/openvino-onnx/watchdog"
VENV_PATH = "${BUILD_WORKSPACE}/.wdvenv"
try {
stage("Clone repository") {
dir ("$BUILD_WORKSPACE") {
checkout([$class: 'GitSCM', branches: [[name: "*/$BRANCH"]],
doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'CloneOption', timeout: 30]], submoduleCfg: [],
userRemoteConfigs: [[credentialsId: "${GITHUB_KEY}", url: "${OPEN_VINO_URL}"]]])
}
}
stage("Prepare environment") {
sh """#!/bin/bash
if [ ! -d ${VENV_PATH} ]; then
python3 -m venv ${VENV_PATH}
source ${VENV_PATH}/bin/activate
pip install -r ${WATCHDOG_ROOT}/requirements.txt
fi
"""
}
stage("Run script") {
withCredentials([
usernamePassword(credentialsId: '7157091e-bc04-42f0-99fd-dc4da2922a55',
usernameVariable: 'username',
passwordVariable: 'password')])
{
dir ("$BUILD_WORKSPACE") {
sh """#!/bin/bash
source ${VENV_PATH}/bin/activate
export PYTHONHTTPSVERIFY=0
python ${WATCHDOG_ROOT}/src/main.py \
--msteams-url=${MSTEAMS_URL_FILE} \
--github-credentials '${username}' '${password}' \
--github-org=${GITHUB_ORG} \
--github-project=${GITHUB_PROJECT} \
--jenkins-token=${JENKINS_TOKEN_FILE} \
--jenkins-server=${JENKINS_SERVER} \
--jenkins-user=${JENKINS_USER} \
--ci-job=${CI_JOB_NAME} \
--watchdog-job=${WATCHDOG_JOB_NAME}
"""
}
}
}
} catch (e) {
echo "$e"
currentBuild.result = "FAILURE"
} finally {
stage("Cleanup") {
sh """
cd $BUILD_WORKSPACE
rm -rf ..?* .[!.]* *
"""
}
}
}
}

View File

@@ -1,6 +0,0 @@
python-jenkins==1.7.0
retrying==1.3.3
pygithub==1.51
timeout-decorator==0.4.1
requests==2.23.0
wheel

View File

@@ -1,108 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging
import timeout_decorator
from datetime import datetime
from retrying import retry
from github import Github, GithubException
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 2000
_REQUEST_TIMEOUT_S = 10
class GitWrapper:
"""Class wrapping PyGithub API.
The purpose of this class is to wrap methods from PyGithub API used in Watchdog, for less error-prone and
more convenient use. Docs for used API, including wrapped methods can be found at:
https://pygithub.readthedocs.io/en/latest/introduction.html
:param github_credentials: Credentials used for GitHub
:param repository: GitHub repository name
:param project: GitHub project name
:type github_credentials: String
:type repository: String
:type project: String
"""
def __init__(self, github_credentials, repository, project):
self.git = Github(*github_credentials)
self.repository = repository
self.project = project
self.github_credentials = github_credentials
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_git_time(self):
"""Retrieve time from GitHub.
Used to reliably determine time during Watchdog run.
:return: Datetime object describing current time
:rtype: datetime
"""
try:
datetime_object = self._get_git_time()
except ValueError as e:
raise GitWrapperError(str(e))
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
except timeout_decorator.TimeoutError:
message = 'GitHub Exception during API status retrieval. Timeout during API request.'
raise GitWrapperError(message)
return datetime_object
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_pull_requests(self):
"""Retrieve paginated list of pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
try:
prs = self._get_pull_requests()
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
return prs
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_git_time(self):
"""Private method retrieving time from GitHub.
:return: Datetime object describing current time
:rtype: datetime
"""
datetime_string = self.git.get_api_status().raw_headers.get('date', '')
datetime_format = '%a, %d %b %Y %H:%M:%S %Z'
datetime_object = datetime.strptime(datetime_string, datetime_format)
return datetime_object
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_pull_requests(self):
"""Private method retrieving pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
return self.git.get_organization(self.repository).get_repo(self.project).get_pulls()
class GitWrapperError(Exception):
"""Base class for exceptions raised in GitWrapper.
:param message Explanation of the error
"""
def __init__(self, message):
self.message = message
log.exception(message)

View File

@@ -1,91 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
import jenkins
import logging
from retrying import retry
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 5000
class JenkinsWrapper:
"""Class wrapping Python-Jenkins API.
The purpose of this class is to wrap methods from Python-Jenkins API used in Watchdog, for less error-prone and
more convenient use. Docs for used API, including wrapped methods can be found at:
https://python-jenkins.readthedocs.io/en/latest/
:param jenkins_token: Token used for Jenkins
:param jenkins_user: Username used to connect to Jenkins
:param jenkins_server: Jenkins server address
:type jenkins_token: String
:type jenkins_user: String
:type jenkins_server: String
"""
def __init__(self, jenkins_token, jenkins_user, jenkins_server):
self.jenkins_server = jenkins_server
self.jenkins = jenkins.Jenkins(jenkins_server, username=jenkins_user,
password=jenkins_token)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_console_output(self, job_name, build_number):
return self.jenkins.get_build_console_output(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_job_info(self, job_name):
return self.jenkins.get_job_info(job_name)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_info(self, job_name, build_number):
return self.jenkins.get_build_info(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_queue_item(self, queue_id):
"""Attempt to retrieve Jenkins job queue item.
An exception indicating the queue doesn't exist is expected;
in that case the method returns an empty dict.
:param queue_id: Jenkins job queue ID number
:type queue_id: int
:return: Dictionary representing Jenkins job queue item
:rtype: dict
"""
try:
return self.jenkins.get_queue_item(queue_id)
except Exception as e:
# Exception 'queue does not exist' is expected behaviour when job is running
if 'queue' in str(e) and 'does not exist' in str(e):
return {}
else:
raise
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_idle_ci_hosts(self):
"""Query Jenkins for idle servers.
Send GET request to Jenkins server, querying for idle servers labeled
for OpenVino-ONNX CI job.
:return: Number of idle hosts delegated to OpenVino-ONNX CI
:rtype: int
"""
jenkins_request_url = self.jenkins_server + 'label/ci&&onnx/api/json?pretty=true'
try:
log.info('Sending request to Jenkins: %s', jenkins_request_url)
r = requests.Request(method='GET', url=jenkins_request_url, verify=False)
response = self.jenkins.jenkins_request(r).json()
return int(response['totalExecutors']) - int(response['busyExecutors'])
except Exception as e:
log.exception('Failed to send request to Jenkins!\nException message: %s', str(e))
raise

View File

@@ -1,89 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import sys
from watchdog import Watchdog
DEFAULT_MSTEAMS_URL_FILE = '/home/lab_nerval/tokens/msteams_url'
DEFAULT_GITHUB_ORGANIZATION = 'openvinotoolkit'
DEFAULT_GITHUB_PROJECT = 'openvino'
DEFAULT_JENKINS_TOKEN_FILE = '/home/lab_nerval/tokens/crackerjack'
DEFAULT_JENKINS_SERVER = 'https://crackerjack.intel.com/'
DEFAULT_JENKINS_USER = 'lab_nerval'
DEFAULT_CI_JOB_NAME = 'onnx/OpenVino_CI'
DEFAULT_WATCHDOG_JOB_NAME = 'onnx/ci_watchdog'
def main(args):
"""
Read args passed to script, load tokens and run watchdog.
Keyword arguments:
:param args: arguments parsed by argparse ArgumentParser
:return: returns status code 0 on successful completion
"""
jenkins_server = args.jenkins_server.strip()
jenkins_user = args.jenkins_user.strip()
jenkins_token = open(args.jenkins_token).read().replace('\n', '').strip()
msteams_url = open(args.msteams_url).read().replace('\n', '').strip()
github_credentials = args.github_credentials
github_org = args.github_org
github_project = args.github_project
ci_job = args.ci_job.strip()
watchdog_job = args.watchdog_job.strip()
quiet = args.quiet
wd = Watchdog(jenkins_token=jenkins_token,
jenkins_server=jenkins_server,
jenkins_user=jenkins_user,
github_credentials=github_credentials,
git_org=github_org,
git_project=github_project,
msteams_url=msteams_url,
ci_job_name=ci_job,
watchdog_job_name=watchdog_job)
wd.run(quiet=quiet)
return 0
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--msteams-url', help='Path to MS Teams channel url to communicate messages.',
default=DEFAULT_MSTEAMS_URL_FILE, action='store', required=False)
parser.add_argument('--github-credentials', help='GitHub user credentials to access repo.',
nargs="+", required=True)
parser.add_argument('--github-org', help='Name of organization on GitHub.',
default=DEFAULT_GITHUB_ORGANIZATION, action='store', required=False)
parser.add_argument('--github-project', help='Name of project on GitHub.',
default=DEFAULT_GITHUB_PROJECT, action='store', required=False)
parser.add_argument('--jenkins-token', help='Path to Jenkins user token to access build info.',
default=DEFAULT_JENKINS_TOKEN_FILE, action='store', required=False)
parser.add_argument('--jenkins-server', help='Jenkins server address.',
default=DEFAULT_JENKINS_SERVER, action='store', required=False)
parser.add_argument('--jenkins-user', help='Jenkins user used to log in.',
default=DEFAULT_JENKINS_USER, action='store', required=False)
parser.add_argument('--ci-job', help='Jenkins CI job name.',
default=DEFAULT_CI_JOB_NAME, action='store', required=False)
parser.add_argument('--watchdog-job', help='Jenkins CI Watchdog job name.',
default=DEFAULT_WATCHDOG_JOB_NAME, action='store', required=False)
parser.add_argument('--quiet', help="Quiet mode - doesn\'t send message to communicator.",
action='store_true', required=False)
args = parser.parse_args()
sys.exit(main(args))

View File

@@ -1,128 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
class MSTeamsCommunicator:
"""Class communicating with MSTeams using Incoming Webhook.
The purpose of this class is to use MSTeams API to send message.
Docs for used API, including wrapped methods can be found at:
https://docs.microsoft.com/en-us/outlook/actionable-messages/send-via-connectors
"""
def __init__(self, _ci_alerts_channel_url):
self._ci_alerts_channel_url = _ci_alerts_channel_url
self._queued_messages = {
self._ci_alerts_channel_url: [],
}
@property
def messages(self):
"""
Get list of queued messages.
:return: List of queued messages
:return type: List[String]
"""
return self._queued_messages.values()
def queue_message(self, message):
"""
Queue message to be sent later.
:param message: Message content
:type message: String
"""
self._queued_messages[self._ci_alerts_channel_url].append(message)
def _parse_text(self, watchdog_log, message):
"""
Parse text to display as alert.
:param watchdog_log: Watchdog log content
:param message: Unparsed message content
:type watchdog_log: String
:type message: String
"""
message_split = message.split('\n')
log_url = None
if len(message_split) == 3:
log_url = message_split[-1]
title = message_split[0]
text = message_split[1]
header = watchdog_log.split(' - ')
header_formatted = '{} - [Watchdog Log]({})'.format(header[0], header[1])
return title, log_url, '{}\n\n{}'.format(header_formatted, text)
def _json_request_content(self, title, log_url, text_formatted):
"""
Create final json request to send message to MS Teams channel.
:param title: Title of alert
:param log_url: URL to PR
:param text_formatted: General content of alert - finally formatted
:type title: String
:type log_url: String
:type text_formatted: String
"""
data = {
'@context': 'https://schema.org/extensions',
'@type': 'MessageCard',
'themeColor': '0072C6',
'title': title,
'text': text_formatted,
'potentialAction':
[
{
'@type': 'OpenUri',
'name': 'Open PR',
'targets':
[
{
'os': 'default',
'uri': log_url,
},
],
},
],
}
return data
def _send_to_channel(self, watchdog_log, message_queue, channel_url):
"""
Send MSTeams message to specified channel.
:param watchdog_log: Watchdog log content
:param message_queue: Queued messages to send
:param channel_url: Channel url
:type watchdog_log: String
:type message_queue: String
:type channel_url: String
"""
for message in message_queue:
title, log_url, text_formatted = self._parse_text(watchdog_log, message)
data = self._json_request_content(title, log_url, text_formatted)
try:
requests.post(url=channel_url, json=data)
except Exception as ex:
raise Exception('!!CRITICAL!! MSTeamsCommunicator: Could not send message '
'due to {}'.format(ex))
def send_message(self, watchdog_log, quiet=False):
"""
Send queued messages as single communication.
:param watchdog_log: Watchdog log content
:param quiet: Flag for disabling sending report through MS Teams
:type watchdog_log: String
:type quiet: Boolean
"""
for channel, message_queue in self._queued_messages.items():
if not quiet and message_queue:
self._send_to_channel(watchdog_log, message_queue, channel)

View File

@@ -1,505 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import datetime
import time
import re
import logging
import requests
from ms_teams_communicator import MSTeamsCommunicator
from jenkins_wrapper import JenkinsWrapper
from jenkins import NotFoundException
from git_wrapper import GitWrapper, GitWrapperError
import os
import json
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
# Watchdog static constant variables
_SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
_BUILD_DURATION_THRESHOLD = datetime.timedelta(minutes=60)
_CI_START_THRESHOLD = datetime.timedelta(minutes=30)
_AWAITING_JENKINS_THRESHOLD = datetime.timedelta(minutes=5)
_WATCHDOG_DIR = os.path.expanduser('~')
_PR_REPORTS_CONFIG_KEY = 'pr_reports'
_CI_BUILD_FAIL_MESSAGE = 'ERROR: py3: commands failed'
_CI_BUILD_SUCCESS_MESSAGE = 'py3: commands succeeded'
_GITHUB_CI_CHECK_NAME = 'OpenVINO-ONNX'
INTERNAL_ERROR_MESSAGE_HEADER = '!!! --- !!! INTERNAL WATCHDOG ERROR !!! --- !!!'
ERROR_MESSAGE_HEADER = '!!! OpenVino-ONNX CI Error !!!'
WARNING_MESSAGE_HEADER = 'OpenVino-ONNX CI WARNING'
INFO_MESSAGE_HEADER = 'OpenVino-ONNX CI INFO'
class Watchdog:
"""Class describing OpenVino-ONNX-CI Watchdog.
Watchdog connects to GitHub and retrieves the list of current pull requests (PRs) in
OpenVino repository. Then it connects to specified Jenkins server to
check CI jobs associated with every PR. Watchdog verifies time durations for Jenkins
initial response, job queue and execution against time threshold constants. Every failure
is logged and reported through MS Teams communicators.
:param jenkins_token: Token used for Jenkins
:param jenkins_server: Jenkins server address
:param jenkins_user: Username used to connect to Jenkins
:param github_credentials: Credentials used to connect to GitHub
:param msteams_url: URL used to connect to MS Teams channel
:param ci_job_name: OpenVino-ONNX CI job name used in Jenkins
:param watchdog_job_name: Watchdog job name used in Jenkins
:type jenkins_token: String
:type jenkins_server: String
:type jenkins_user: String
:type github_credentials: String
:type msteams_url: String
:type ci_job_name: String
:type watchdog_job_name: String
.. note::
Watchdog and OpenVino-ONNX CI job must be placed on the same Jenkins server.
"""
def __init__(self, jenkins_token, jenkins_server, jenkins_user, github_credentials, git_org,
git_project, msteams_url, ci_job_name, watchdog_job_name):
self._config_path = os.path.join(_WATCHDOG_DIR, '{}/.{}_ci_watchdog.json'.format(_WATCHDOG_DIR, git_project))
# Jenkins Wrapper object for CI job
self._jenkins = JenkinsWrapper(jenkins_token,
jenkins_user=jenkins_user,
jenkins_server=jenkins_server)
# Load GitHub token and log in, retrieve pull requests
self._git = GitWrapper(github_credentials, repository=git_org, project=git_project)
# Create MS Teams api object
self._msteams_hook = MSTeamsCommunicator(msteams_url)
self._ci_job_name = ci_job_name.lower()
self._watchdog_job_name = watchdog_job_name
# Read config file
self._config = self._read_config_file()
# Time at Watchdog initiation
self._now_time = datetime.datetime.now()
self._current_prs = {}
self._ms_teams_enabled = True
def run(self, quiet=False):
"""Run main watchdog logic.
Retrieve list of pull requests and pass it to the method responsible for checking them.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
try:
pull_requests = self._git.get_pull_requests()
except GitWrapperError:
message = 'Failed to retrieve Pull Requests!'
log.exception(message)
self._queue_message(message, message_severity='internal')
# Check all pull requests
for pr in pull_requests:
try:
self._check_pr(pr)
except Exception as e:
log.exception(str(e))
self._queue_message(str(e), message_severity='internal', pr=pr)
self._update_config()
self._send_message(quiet=quiet)
def _read_config_file(self):
"""Read Watchdog config file stored on the system.
The file stores every fail already reported along with timestamp. This
mechanism is used to prevent Watchdog from reporting same failure
multiple times. In case there's no config under the expected path,
appropriate data structure is created and returned.
:return: Returns dict of dicts with reported fails with their timestamps
:rtype: dict of dicts
"""
if os.path.isfile(self._config_path):
log.info('Reading config file in: {}'.format(self._config_path))
file = open(self._config_path, 'r')
data = json.load(file)
else:
log.info('No config file found in: {}'.format(self._config_path))
data = {_PR_REPORTS_CONFIG_KEY: {}}
return data
def _check_pr(self, pr):
"""Check pull request (if there's no reason to skip).
Retrieve list of statuses for every PR's last commit and interpret them. Filters out statuses
unrelated to OpenVino-ONNX Jenkins CI and passes relevant statuses to method that interprets them.
If no commit statuses related to Jenkins are available after time defined by
**_AWAITING_JENKINS_THRESHOLD** calls appropriate method to check for builds waiting in queue.
:param pr: GitHub Pull Requests
:type pr: github.PullRequest.PullRequest
"""
log.info('===============================================')
log.info('Checking PR#{}'.format(pr.number))
# Get last Jenkins status
last_status = self._get_last_status(pr)
# Append PR checked in current run for Watchdog config
self._current_prs[str(pr.number)] = self._get_pr_timestamps(pr, last_status)
if self._should_ignore(pr) or self._updated_since_last_run(pr):
log.info('Ignoring PR#{}'.format(pr.number))
return
# Calculate time passed since PR update (any commit, merge or comment)
pr_time_delta = self._now_time - pr.updated_at
if last_status:
# Interpret found CI statuses
log.info('Last status: {} at {}'.format(last_status.description, last_status.updated_at))
self._interpret_status(last_status, pr)
elif pr_time_delta > _CI_START_THRESHOLD:
# If there's no status after assumed time - check if build is waiting in queue
log.info('CI for PR {}: NO JENKINS STATUS YET'.format(pr.number))
self._check_missing_status(pr)
@staticmethod
def _get_pr_timestamps(pr, last_status):
"""Get dict containing PR timestamp and last status timestamp.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Dictionary with PR and last status update timestamps
:rtype: dict
"""
pr_timestamp = time.mktime(pr.updated_at.timetuple())
if last_status:
status_timestamp = time.mktime(last_status.updated_at.timetuple())
else:
status_timestamp = None
pr_dict = {'pr_timestamp': pr_timestamp,
'status_timestamp': status_timestamp}
return pr_dict
@staticmethod
def _get_last_status(pr):
"""Get last commit status posted from Jenkins.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Either last PR status posted from Jenkins or None
:rtype: github.CommitStatus.CommitStatus
"""
# Find last commit in PR
last_commit = pr.get_commits().reversed[0]
# Get statuses and filter them to contain only those related to Jenkins CI
# and check if CI in Jenkins started
statuses = last_commit.get_statuses()
jenk_statuses = [stat for stat in statuses if
_GITHUB_CI_CHECK_NAME in stat.context]
try:
last_status = jenk_statuses[0]
except IndexError:
last_status = None
return last_status
@staticmethod
def _should_ignore(pr):
"""Determine if PR should be ignored.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns True if PR should be ignored
:rtype: Bool
"""
# Ignore PR if it has WIP label or WIP in title
if 'WIP' in pr.title:
log.info('PR#{} should be ignored. WIP tag in title.'.format(pr.number))
return True
label_names = [label.name for label in pr.labels]
if 'WIP' in label_names:
log.info('PR#{} should be ignored. WIP label present.'.format(pr.number))
return True
# Ignore PR if base ref is not master
if 'master' not in pr.base.ref:
log.info('PR#{} should be ignored. Base ref is not master'.format(pr.number))
return True
# Ignore PR if mergeable state is 'dirty' or 'behind'.
# Practically this ignores PR in case of merge conflicts
ignored_mergeable_states = ['behind', 'dirty', 'draft']
if pr.mergeable_state in ignored_mergeable_states:
log.info('PR#{} should be ignored. Mergeable state is {}. '.format(pr.number, pr.mergeable_state))
return True
# If no criteria for ignoring PR are met - return false
return False
def _updated_since_last_run(self, pr):
# Ignore if PR was already checked and there was no update in meantime
pr_number = str(pr.number)
current_pr_timestamps = self._current_prs.get(pr_number)
last_pr_timestamps = self._config[_PR_REPORTS_CONFIG_KEY].get(pr_number)
if current_pr_timestamps == last_pr_timestamps:
log.info('PR#{} - No update since last check'.format(pr.number))
return True
else:
return False
def _check_missing_status(self, pr):
"""Verify if missing status is expected.
This method checks whether the CI build for the last commit was scheduled and is still waiting in the queue for an
executor.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
"""
pr_time_delta = self._now_time - pr.updated_at
try:
build_number = self._build_scheduled(pr)
if self._build_in_queue(pr, build_number):
message = ('PR# {}: build waiting in queue after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'warning'
else:
message = ('PR# {}: missing status on GitHub after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'error'
self._queue_message(message, message_severity=severity, pr=pr)
except TypeError:
log.info('Committer outside of OpenVino organization')
def _build_scheduled(self, pr):
"""Check if Jenkins build corresponding to PR was scheduled.
This method takes last Jenkins build for given PR and compares hash from Jenkins console output
and sha from PR object to determine if CI build for appropriate commit was scheduled.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns build number or -1 if no build found
:rtype: int
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
try:
# Retrieve console output from last Jenkins build for job corresponding to this PR
last_build_number = self._jenkins.get_job_info(project_name_full)['lastBuild']['number']
console_output = self._jenkins.get_build_console_output(project_name_full, last_build_number)
# Check if CI build was scheduled - commit hash on GH must match hash in last Jenkins build console output
# Retrieve hash from Jenkins output
match_string = '(?:Obtained .ci/[a-zA-Z/]+Jenkinsfile from ([a-z0-9]{40}))'
retrieved_sha = re.search(match_string, console_output).group(1)
if retrieved_sha == pr.get_commits().reversed[0].sha:
return last_build_number
else:
return -1
except (NotFoundException, AttributeError, requests.exceptions.HTTPError):
message = ('PR #{}: Jenkins build corresponding to commit {} not found!'
.format(pr_number, pr.get_commits().reversed[0].sha))
self._queue_message(message, message_severity='error', pr=pr)
return -1
def _build_in_queue(self, pr, build_number):
"""Check if Jenkins build waits in queue.
This method verifies if CI build is waiting in queue based on console output.
:param pr: Single PR being currently checked
:param build_number: Jenkins build number to retrieve console output from
:type pr: github.PullRequest.PullRequest
:type build_number: int
:return: Returns True if CI build is waiting in queue
:rtype: Bool
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
# Retrieve console output
try:
console_output = self._jenkins.get_build_console_output(project_name_full, build_number)
except NotFoundException:
return False
# Check if build is waiting in queue (and not already running on an executor)
if 'Waiting for next available executor on' in console_output \
and 'Running on' not in console_output:
log.info('CI for PR %s: WAITING IN QUEUE', pr_number)
return True
else:
return False
def _interpret_status(self, status, pr):
"""
Verify GitHub status passed to the method.
This method verifies last commit status for given PR, calling appropriate methods
to further validate the status.
:param status: GitHub commit status
:param pr: Single PR being currently checked
:type status: github.CommitStatus.CommitStatus
:type pr: github.PullRequest.PullRequest
"""
try:
# Retrieve build number for Jenkins build related to this PR
build_number = self._retrieve_build_number(status.target_url)
# CI build finished - verify if expected output is present
finished_statuses = ['Build finished', 'This commit cannot be built', 'This commit looks good']
pending_statuses = ['This commit is being built', 'Testing in progress',
'This commit is scheduled to be built']
if any(phrase in status.description for phrase in finished_statuses):
self._check_finished(pr, build_number)
# CI build in progress - verify timeouts for build queue and duration
elif any(phrase in status.description for phrase in pending_statuses):
self._check_in_progress(pr, build_number)
else:
message = 'ONNX CI job for PR# {}: unrecognized status: {}'.format(pr.number, status.description)
self._queue_message(message, message_severity='error', pr=pr)
except Exception:
# Log Watchdog internal error in case any status can't be properly verified
message = 'Failed to verify status "{}" for PR# {}'.format(status.description, pr.number)
log.exception(message)
self._queue_message(message, message_severity='internal', pr=pr)
def _retrieve_build_number(self, url):
"""Retrieve Jenkins CI job build number from URL address coming from GitHub commit status.
:param url: URL address from GitHub commit status
:type url: String
:return: Returns build number
:rtype: int
"""
# Retrieve the build number from url string
match_obj = re.search('(?:/PR-[0-9]+/)([0-9]+)', url)
try:
number = int(match_obj.group(1))
return number
except Exception:
log.exception('Failed to retrieve build number from url link: %s', url)
raise
def _queue_message(self, message, message_severity='info', pr=None):
"""Add a message to message queue in communicator object.
The queued message is constructed based on message string passed as
a method argument and message header. Message header is mapped to message severity
also passed as an argument.
:param message: Message content
:param message_severity: Message severity level
:type message: String
:type message_severity: int
"""
log.info(message)
internal = False
if 'internal' in message_severity:
message_header = INTERNAL_ERROR_MESSAGE_HEADER
internal = True
elif 'error' in message_severity:
message_header = ERROR_MESSAGE_HEADER
elif 'warning' in message_severity:
message_header = WARNING_MESSAGE_HEADER
else:
message_header = INFO_MESSAGE_HEADER
# If the message is related to a PR, attach its url
if pr:
message = message + '\n' + pr.html_url
send = message_header + '\n' + message
if self._ms_teams_enabled:
self._msteams_hook.queue_message(send)
def _check_finished(self, pr, build_number):
"""Verify if finished build output contains expected string for either fail or success.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: FINISHED', pr_number)
# Check if FINISH was valid FAIL / SUCCESS
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_output = self._jenkins.get_build_console_output(project_name_full, build_number)
if _CI_BUILD_FAIL_MESSAGE not in build_output \
and _CI_BUILD_SUCCESS_MESSAGE not in build_output:
message = ('ONNX CI job for PR #{}: finished but no tests success or fail '
'confirmation is present in console output!'.format(pr_number))
self._queue_message(message, message_severity='error', pr=pr)
def _send_message(self, quiet=False):
"""Send messages queued in MS Teams objects to designated channel.
Queued messages are being sent as a single communication.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
if any(messages for messages in self._msteams_hook.messages):
try:
watchdog_build = self._jenkins.get_job_info(self._watchdog_job_name)['lastBuild']
watchdog_build_number = watchdog_build['number']
watchdog_build_link = watchdog_build['url']
except Exception:
watchdog_build_number = 'UNKNOWN'
watchdog_build_link = self._jenkins.jenkins_server
send = self._watchdog_job_name + '- build ' + str(
watchdog_build_number) + ' - ' + watchdog_build_link
if self._ms_teams_enabled:
self._msteams_hook.send_message(send, quiet=quiet)
else:
log.info('Nothing to report.')
def _check_in_progress(self, pr, build_number):
"""Check if CI build succesfully started.
Checks if build started within designated time threshold, and job is
currently running - it didn't cross the time threshold.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: TESTING IN PROGRESS', pr_number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_info = self._jenkins.get_build_info(project_name_full, build_number)
build_datetime = datetime.datetime.fromtimestamp(build_info['timestamp'] / 1000.0)
build_delta = self._now_time - build_datetime
log.info('Build %s: IN PROGRESS, started: %s minutes ago', str(build_number),
str(build_delta))
# If build still waiting in queue
if build_delta > _CI_START_THRESHOLD and self._build_in_queue(pr, build_number):
message = ('ONNX CI job build #{}, for PR #{} waiting in queue after {} '
'minutes'.format(build_number, pr_number, str(build_delta.seconds / 60)))
self._queue_message(message, message_severity='warning', pr=pr)
elif build_delta > _BUILD_DURATION_THRESHOLD:
# CI job took too long, possibly froze - communicate failure
message = ('ONNX CI job build #{}, for PR #{} started, '
'but did not finish in designated time of {} '
'minutes!'.format(build_number, pr_number,
str(_BUILD_DURATION_THRESHOLD.seconds / 60)))
self._queue_message(message, message_severity='error', pr=pr)
def _update_config(self):
"""Update Watchdog config file with PRs checked in current Watchdog run, remove old entries.
:param current_prs: List of PR numbers checked during current Watchdog run
:type current_prs: list of ints
"""
# Cleanup config of old reports
log.info('Writing to config file at: {}'.format(self._config_path))
new_config = {_PR_REPORTS_CONFIG_KEY: self._current_prs}
file = open(self._config_path, 'w+')
json.dump(new_config, file)

View File

@@ -7,7 +7,7 @@ updates:
directory: "/src/bindings/python"
schedule:
interval: weekly
day: monday
day: sunday
time: "13:00"
open-pull-requests-limit: 0
reviewers:
@@ -15,4 +15,3 @@ updates:
- akuporos
labels:
- "category: dependencies"

View File

@@ -5,11 +5,10 @@
"IGNORE_LOGINS": [
"openvino-ci",
"openvino-pushbot",
"lab-nerval",
"lab-nerval-onnx-ci",
"onnx-watchdog-agent",
"workbench-ci-bot",
"openvino-pot-ci"
"openvino-pot-ci",
"sysicvvpux",
"ote-ci-bot"
],
"MAX_MEMBERS_TO_REMOVE": 15,
"EMAILS_FILE_PATH": "dev_emails-test.txt",
@@ -28,7 +27,7 @@
"openvino-ie-gna-maintainers": "category: GNA",
"openvino-ie-gpu-maintainers": "category: GPU",
"openvino-ie-lpt-maintainers": "category: LP transformations",
"openvino-ie-multi-maintainers": "category: MULTI",
"openvino-ie-auto-multi-maintainers": "category: MULTI",
"openvino-ie-python-api-maintainers": "category: python api",
"openvino-ie-template-maintainers": "category: TEMPLATE",
"openvino-ie-tests-maintainers": "category: IE Tests",

View File

@@ -157,7 +157,7 @@ class GithubOrgApi:
self.github_users_by_email[email] = org_member
if not is_valid_name(org_member.name):
self.members_to_fix_name.add(org_member)
elif not is_user_ignored(org_member):
else:
self.members_to_remove.add(org_member)
print("\nOrg members - no Intel emails:")

View File

@@ -1,4 +1,4 @@
name: IE Python Checks
name: Python API Checks
on:
workflow_dispatch:
@@ -23,8 +23,9 @@ jobs:
with:
python-version: '3.6'
- name: Install dependencies
run: python -m pip install -r src/bindings/python/src/compatibility/openvino/requirements_dev.txt
- name: Run Flake on samples
run: python -m pip install -r src/bindings/python/requirements_test.txt
# samples code-style
- name: Run flake8 on samples
run: python -m flake8 ./ --config=setup.cfg
working-directory: samples/python
- name: Create code style diff for samples
@@ -38,21 +39,53 @@ jobs:
with:
name: samples_diff
path: samples_diff.diff
- name: Run Flake on src
# IE Python API Flake code-style
- name: Run flake8 on IE Python API
run: python -m flake8 ./ --config=setup.cfg
working-directory: src/bindings/python/src/compatibility/openvino
- name: Create code style diff for Python src
- name: Create code style diff for IE Python API
if: failure()
run: |
python -m black -l 160 -S ./
git diff > src_diff.diff
git diff > ie_python_diff.diff
working-directory: src/bindings/python/src/compatibility/openvino
- uses: actions/upload-artifact@v2
if: failure()
with:
name: src_diff
path: src_diff.diff
- name: Run Flake on wheel
name: ie_python_diff
path: ie_python_diff.diff
# nGraph Python API Flake code-style
- name: Run flake8 on nGraph Python API
run: python -m flake8 ./src/compatibility/ngraph --config=setup.cfg
working-directory: src/bindings/python
- name: Create code style diff for nGraph Python API
if: failure()
run: |
python -m black -l 160 -S ./
git diff > pyngraph_diff.diff
working-directory: src/bindings/python/src/compatibility/ngraph
- uses: actions/upload-artifact@v2
if: failure()
with:
name: pyngraph_diff
path: pyngraph_diff.diff
# Python API 2.0 Flake code-style
- name: Run flake8 on Python API 2.0
run: python -m flake8 ./src/openvino --config=setup.cfg
working-directory: src/bindings/python
- name: Create code style diff for Python API 2.0
if: failure()
run: |
python -m black -l 160 -S ./
git diff > pyopenvino_diff.diff
working-directory: src/bindings/python/src/openvino
- uses: actions/upload-artifact@v2
if: failure()
with:
name: pyopenvino_diff
path: pyopenvino_diff.diff
# wheel Flake code-style
- name: Run flake8 on wheel
run: python -m flake8 ./ --config=../setup.cfg
working-directory: src/bindings/python/wheel
- name: Create code style diff for wheel
@@ -66,10 +99,24 @@ jobs:
with:
name: wheel_diff
path: wheel_diff.diff
- name: Run MyPy
# Python API 2.0 tests Flake code-style
- name: Run flake8 on python tests
# ignore lack of docs in tests
run: python -m flake8 tests/ --config=setup.cfg
working-directory: src/bindings/python
# IE Python API mypy check
- name: Run mypy on IE Python API
run: python -m mypy ./ --config-file ./setup.cfg
working-directory: src/bindings/python/src/compatibility/openvino
# nGraph Python API mypy check
- name: Run mypy on nGraph Python API
run: python -m mypy ./src/compatibility/ngraph --config-file ./setup.cfg
working-directory: src/bindings/python
# Python API 2.0 mypy check
- name: Run mypy on Python API 2.0
run: python -m mypy ./src/openvino --config-file ./setup.cfg
working-directory: src/bindings/python
- name: Run Bandit
run: python -m bandit -r ./ -f screen
working-directory: src/bindings/python/src/compatibility/openvino

.gitmodules
View File

@@ -1,6 +1,6 @@
[submodule "src/plugins/intel_cpu/thirdparty/mkl-dnn"]
path = src/plugins/intel_cpu/thirdparty/mkl-dnn
url = https://github.com/openvinotoolkit/oneDNN.git
[submodule "src/plugins/intel_cpu/thirdparty/onednn"]
path = src/plugins/intel_cpu/thirdparty/onednn
url = https://github.com/luo-cheng2021/oneDNN.git
ignore = dirty
[submodule "thirdparty/xbyak"]
path = thirdparty/xbyak

View File

@@ -4,7 +4,7 @@
if(DEFINED BUILD_SHARED_LIBS AND NOT BUILD_SHARED_LIBS)
# 'target_link_libraries' does not work correctly when called from
# different directly where 'add_library' is called: CMake generates
# different directory where 'add_library' is called: CMake generates
# incorrect OpenVINOConfig.cmake in this case
cmake_minimum_required(VERSION 3.17)
else()
@@ -13,7 +13,9 @@ endif()
project(OpenVINO DESCRIPTION "OpenVINO toolkit")
set(IE_MAIN_SOURCE_DIR ${OpenVINO_SOURCE_DIR}/inference-engine)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
endif()
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"
@@ -49,6 +51,7 @@ file(REMOVE "${CMAKE_BINARY_DIR}/InferenceEngineTargets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
foreach(component IN LISTS openvino_export_components)
file(REMOVE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
unset(${component} CACHE)
endforeach()
unset(openvino_export_components CACHE)

View File

@@ -39,14 +39,19 @@ Jenkinsfile @openvinotoolkit/openvino-admins
# IE CPU:
/src/plugins/intel_cpu/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/common/low_precision_transformations/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/plugins/intel_cpu/thirdparty/mkl-dnn/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/plugins/intel_cpu/thirdparty/onednn/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
#IE LPT
/src/common/low_precision_transformations/ @openvinotoolkit/openvino-ie-lpt-maintainers
# IE GPU:
/src/inference/include/ie/gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/src/inference/include/ie/cldnn/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/src/inference/include/openvino/runtime/intel_gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/src/plugins/intel_gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/snippets/gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/OV_Runtime_UG/supported_plugins/GPU.md @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
# IE VPU:
/src/plugins/intel_myriad @openvinotoolkit/openvino-ie-vpu-maintainers
@@ -63,6 +68,9 @@ Jenkinsfile @openvinotoolkit/openvino-admins
/src/plugins/intel_gna/ @openvinotoolkit/openvino-ie-gna-maintainers
/src/inference/include/ie/gna/ @openvinotoolkit/openvino-ie-gna-maintainers
# IE ARM CPU:
/docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md @openvinotoolkit/openvino_contrib-arm_plugin-maintainers
# IE Auto (MULTI) plugin:
/src/plugins/auto/ @openvinotoolkit/openvino-ie-auto-multi-maintainers
/src/inference/include/ie/multi-device/ @openvinotoolkit/openvino-ie-auto-multi-maintainers
@@ -71,8 +79,8 @@ Jenkinsfile @openvinotoolkit/openvino-admins
/src/frontends/paddle/ @openvinotoolkit/openvino-ie-paddle-maintainers
# IE Tests:
/src/tests/ @openvinotoolkit/openvino-ie-tests-maintainers
/src/tests_deprecated/ @openvinotoolkit/openvino-ie-tests-maintainers
/src/tests/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ie-test-developers
/src/tests_deprecated/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ie-test-developers
/src/tests/functional/inference_engine/ngraph_reader/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
/src/tests/functional/inference_engine/transformations/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
@@ -82,6 +90,6 @@ Jenkinsfile @openvinotoolkit/openvino-admins
*.md @openvinotoolkit/openvino-docs-maintainers
# Control 3d party dependencies
**/*requirements*.* @openvino-configuration-mgmt
**/setup.py @openvino-configuration-mgmt
/scripts/install_dependencies/ @openvino-configuration-mgmt
**/*requirements*.* @openvinotoolkit/openvino-configuration-mgmt
**/setup.py @openvinotoolkit/openvino-configuration-mgmt
/scripts/install_dependencies/ @openvinotoolkit/openvino-configuration-mgmt

View File

@@ -1,68 +1,55 @@
# How to contribute to the OpenVINO repository
We suppose that you are an enthusiastic coder who wants to contribute some code. For that purpose, the OpenVINO project has a repository on GitHub to simplify everybody's life! All bug fixes, new functionality, new tutorials, etc. should be submitted via GitHub's pull request mechanism.
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, practices for good pull requests, checking your changes with our tests and more.
If you are not familiar with the mechanism - do not worry, it's very simple. Keep reading.
## Before you start contributing you should
- Make sure you agree to contribute your code under [OpenVINO (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- If you are submitting a new module, you should go into [openvino_contrib](https://github.com/openvinotoolkit/openvino_contrib) repository by default.
- If you are going to fix a bug, check that it still exists. This can be done by building the latest [releases/2020/3](https://github.com/openvinotoolkit/openvino/tree/releases/2020/3) branch (LTS release) or the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, like 2020.2 for example (more details about the [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches))
- Make sure that nobody has beaten you to fixing or reporting the issue by searching the [Github OpenVINO issues](https://github.com/openvinotoolkit/openvino/issues) page and making sure that there isn't someone already working on it. In the latter case you might provide support or suggestions in the issue or in the linked pull request.
- If you have a question about the software, then this is **NOT** the right place. You should open up a question at the [OpenVINO forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit). In order to post a decent question from the start, feel free to read the official forum guidelines.
- Make sure you agree to contribute your code under [OpenVINO (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- Figure out what you're going to contribute. If you don't know what you are going to work on, navigate to the [Github "Issues" tab](https://github.com/openvinotoolkit/openvino/issues). Make sure that there isn't someone already working on it. In the latter case you might provide support or suggestions in the issue or in the linked pull request.
- If you are going to fix a bug, check that it still exists in the latest release. This can be done by building the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, like 2020.2 for example (more details about the [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
Before you open up anything on the OpenVINO GitHub page, be sure that you are at the right place with your problem.
## "Fork & Pull Request model" for code contribution
### [](https://github.com/openvinotoolkit/openvino/wiki/Contribute#the-instruction-in-brief)The instruction in brief
### [](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md#the-instruction-in-brief)The instruction in brief
- Register at GitHub. Create your fork of OpenVINO repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Register at GitHub. Create your fork of OpenVINO repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in a Git configuration according to GitHub account (see [https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It could be a bugfix or some new code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (with a meaningful name) from the base branch you chose.
- Modify / add the code following our [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) and [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation).
- Modify / add the code following our [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines).
- If you want to add a new sample, please look at this [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
- If you want to contribute to the documentation and want to add a new guide, follow that instruction [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
- Run testsuite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- If you contribute to the documentation and want to add a new guide:
- Create a new markdown file in an appropriate folder.
- **REQUIRED:** The document title must contain a document label in a form: `{#openvino_docs_<name>}`. For example: `Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ {#openvino_docs_MO_DG_IR_and_opsets}`.
- Add your file to the documentation structure. Open the documentation structure file [`docs/doxygen/ie_docs.xml`](https://github.com/openvinotoolkit/openvino/blob/master/docs/doxygen/ie_docs.xml) and add your file path to the appropriate section.
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`), push your branch to your GitHub fork; then create a pull request from your branch to the base branch (see [https://help.github.com/articles/using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- Before pushing your PR to the repository, make sure that it builds perfectly fine on your local system.
- Add enough information, like a meaningful title, the reason why you made the commit and a link to the issue page if you opened one for this PR.
- Scope your PR to one issue. Before submitting, make sure the diff contains no unrelated changes. If you want to cover more than one issue, submit your changes for each as separate pull requests.
- If you have added new functionality, you should update/create the relevant documentation, as well as add tests for it to the testsuite.
- Try not to include "oops" commits - ones that just fix an error in the previous commit. If you have those, then before submitting [squash](https://github.com/openvinotoolkit/openvino/wiki/Contribute#https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#Squashing-Commits) those fixes directly into the commits where they belong.
- Make sure to choose the right base branch and to follow the [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) for your code, or the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) if you are changing documentation files.
- Make sure to add a test for new functionality, or a test that reproduces the fixed bug with related test data. Please do not add extra images or videos if some of the existing media files are suitable.
- One PR, one issue.
- Make sure the PR builds cleanly on your local system.
- Choose the right base branch (see [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
- Follow the [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) for your code.
- Update documentation using [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) if needed.
- Cover your changes with tests.
- Add license at the top of new files [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add enough information: a meaningful title, the reason why you made the commit and a link to the issue page if one exists.
- Remove changes unrelated to the PR.
- If it is still WIP and you want to check CI test results early then use _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
## Testing and merging pull requests
- Your pull request will be automatically tested by OpenVINO's precommit (testing status are automatically reported as "green" or "red" circles in precommit steps on PR's page). If any builders have failed, you should fix the issue. To rerun the automatic builds just push changes to your branch on GitHub. No need to close pull request and open a new one!
- Once all the builders are "green", one of the OpenVINO developers will review your code. The reviewer could ask you to modify your pull request. Please provide a timely response to reviewers (within weeks, not months), otherwise your submission could be postponed or even rejected.
Your pull request will be automatically tested by OpenVINO's precommit (testing statuses are automatically reported as "green" or "red" circles in the precommit steps on the PR's page). If any builders have failed, you need to fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!
## PR review good practices
- Originator is responsible for driving the review of changes and should ping reviewers periodically.
- Originator should close comments from the Reviewer when they are resolved. The Reviewer may re-open the comment if they do not agree with the resolution.
- Originator should request re-review from the Reviewer when all comments are resolved by pushing the button in the “Reviewers” section.
- If it is still WIP and you want to check CI test results early then use _Draft_ PR.
- Do **NOT** rewrite history (push -f) once you converted draft PR into regular one, add new commits instead. Looking at diffs makes review easier.
- Write meaningful descriptions for commits resulting from review. _"Addressing review comments"_ is **NOT** a good description! A quick look at good descriptions can tell you much of what is going on in the PR without the need to go through all of the resolved comments.
## Merging PR
As soon as the reviewer is fine with the pull request and Precommit likes your code and shows "green" status, the "Approved" review status is put, which signals OpenVINO maintainers that they can merge your pull request.
© Copyright 2018-2022, OpenVINO team
As soon as the reviewer is fine with the pull request and precommit shows "green" status, the "Approved" review status is set, which signals to OpenVINO maintainers that they can merge your pull request.

README.md
View File

@@ -1,50 +1,208 @@
# OpenVINO™ Toolkit
[![Stable release](https://img.shields.io/badge/version-2021.4.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.4.2)
<div align="center">
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
</div>
This toolkit allows developers to deploy pre-trained deep learning models
through high-level OpenVINO™ Runtime C++ and Python APIs integrated with application logic.
## Contents:
This open source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
- [What is OpenVINO?](#what-is-openvino-toolkit)
- [Components](#components)
- [Supported Hardware matrix](#supported-hardware-matrix)
- [License](#license)
- [Documentation](#documentation)
- [Tutorials](#tutorials)
- [Products which use OpenVINO](#products-which-use-openvino)
- [System requirements](#system-requirements)
- [How to build](#how-to-build)
- [How to contribute](#how-to-contribute)
- [Get a support](#get-a-support)
- [See also](#see-also)
## What is OpenVINO toolkit?
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
- Use models trained with popular frameworks like TensorFlow, PyTorch and more
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
## Repository components
* [OpenVINO™ Runtime]
* [Model Optimizer]
* [Post-Training Optimization Tool]
* [Samples]
### Components
* [OpenVINO™ Runtime] - a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice (a minimal Python usage sketch is shown after this list).
* [core](https://github.com/openvinotoolkit/openvino/tree/master/src/core) - provides the base API for model representation and modification.
* [inference](https://github.com/openvinotoolkit/openvino/tree/master/src/inference) - provides an API to infer models on device.
* [transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/low_precision_transformations) - contains the set of transformations which are used in low precision models
* [bindings](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings) - contains all available OpenVINO bindings, which are maintained by the OpenVINO team.
* [c](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/c) - provides C API for OpenVINO™ Runtime
* [python](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](https://github.com/openvinotoolkit/openvino/tree/master/src/plugins) - contains OpenVINO plugins which are maintained in open source by the OpenVINO team. For more information, please take a look at the [list of supported devices](#supported-hardware-matrix).
* [Frontends](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends) - contains available OpenVINO frontends, which allow reading models from their native framework formats.
* [Model Optimizer] - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
* [Post-Training Optimization Tool] - is designed to accelerate the inference of deep learning models by applying special methods without model retraining or fine-tuning, for example, post-training 8-bit quantization.
* [Samples] - applications in C, C++ and Python that show basic OpenVINO use cases.
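As a quick illustration of the Runtime component above, here is a minimal sketch, assuming the `openvino` Python package built from this repository (or installed from PyPI) is available; the model path and the zero-filled input are placeholders.

```python
import numpy as np
from openvino.runtime import Core  # Python API 2.0 entry point

core = Core()
model = core.read_model("model.xml")               # placeholder path to an IR model
compiled_model = core.compile_model(model, "CPU")  # any device from the tables below works here

# Placeholder input shaped like the model's first input (assumes a static shape).
input_tensor = np.zeros(tuple(compiled_model.input(0).shape), dtype=np.float32)

infer_request = compiled_model.create_infer_request()
results = infer_request.infer([input_tensor])      # synchronous inference
print(next(iter(results.values())).shape)
```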
## Supported Hardware matrix
The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.
<table>
<thead>
<tr>
<th>Device</th>
<th>Plugin</th>
<th>Library</th>
<th>ShortDescription</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices</td>
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
<tr>
<td>VPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td>Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X</td>
</tr>
</tbody>
</table>
The OpenVINO™ Toolkit also contains several plugins that simplify loading a model on several hardware devices; a short usage sketch follows the table below:
<table>
<thead>
<tr>
<th>Plugin</th>
<th>Library</th>
<th>ShortDescription</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>The Auto plugin enables selecting an Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
</tbody>
</table>
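To complement the table above, here is a hedged sketch of selecting these virtual devices from the Python API; the device strings follow the documented `AUTO`, `MULTI:<dev1>,<dev2>` and `HETERO:<dev1>,<dev2>` conventions, and the model path is a placeholder.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path to an IR model

# AUTO: let OpenVINO pick the best available device for this model.
compiled_auto = core.compile_model(model, "AUTO")

# MULTI: run inference requests on several devices in parallel.
compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")

# HETERO: split the model between devices, preferring GPU and falling back to CPU.
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")
```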
## License
OpenVINO™ Toolkit is licensed under [Apache License Version 2.0](LICENSE).
By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
## Resources
* Docs: https://docs.openvino.ai/
* Wiki: https://github.com/openvinotoolkit/openvino/wiki
* Issue tracking: https://github.com/openvinotoolkit/openvino/issues
* Storage: https://storage.openvinotoolkit.org/
* Additional OpenVINO™ toolkit modules: https://github.com/openvinotoolkit/openvino_contrib
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
## Documentation
### User documentation
The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides everything needed to create an application based on a binary OpenVINO distribution or on your own OpenVINO build without source code modification.
### Developer documentation
[Developer documentation](#todo-add) contains information about the architectural decisions applied inside the OpenVINO components. This documentation has all the information needed in order to contribute to OpenVINO.
## Tutorials
The list of OpenVINO tutorials:
- [Jupyter notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
## Products which use OpenVINO
- [OpenCV](https://opencv.org/)
- [ONNX Runtime](https://onnxruntime.ai/)
- [OpenVINO™ Integration with TensorFlow](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html)
- [TNN](https://github.com/Tencent/TNN/tree/master)
## System requirements
Full information about system requirements depends on the platform and is available in the `System requirements` section of the dedicated pages:
- [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html)
- [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows.html)
- [macOS](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_macos.html)
- [Raspbian](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
Please take a look at the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) for more information about the OpenVINO build process.
## How to contribute
See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Get a support
## Support
Please report questions, issues and suggestions using:
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
## See also
* [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
* [OpenVINO Storage](https://storage.openvinotoolkit.org/)
* Additional OpenVINO™ toolkit modules:
* [openvino_contrib](https://github.com/openvinotoolkit/openvino_contrib)
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - An alternative, web-based version of OpenVINO designed to make production of pretrained deep learning models significantly easier.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
---
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_Runtime_User_Guide.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_README.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino

View File

@@ -7,6 +7,7 @@ set(CMAKE_SYSTEM_PROCESSOR armv7l)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(PKG_CONFIG_EXECUTABLE arm-linux-gnueabihf-pkg-config CACHE PATH "Path to ARM pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)

View File

@@ -7,6 +7,7 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(PKG_CONFIG_EXECUTABLE aarch64-linux-gnu-pkg-config CACHE PATH "Path to ARM64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)

View File

@@ -23,9 +23,7 @@ message(STATUS "MODELS_PATH=" ${MODELS_PATH})
fetch_models_and_validation_set()
if(COMMAND get_linux_name)
get_linux_name(LINUX_OS_NAME)
endif()
get_linux_name(LINUX_OS_NAME)
if(CMAKE_CROSSCOMPILING AND CMAKE_HOST_SYSTEM_NAME MATCHES Linux AND CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(protoc_version "3.18.2")
@@ -93,7 +91,19 @@ if(THREADING STREQUAL "OMP")
endif()
## TBB package
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
unset(_ov_download_tbb_done CACHE)
#
# The function downloads prebuilt TBB package
# NOTE: the function should be used if system TBB is not found
# or ENABLE_SYSTEM_TBB is OFF
#
function(ov_download_tbb)
if(_ov_download_tbb_done OR NOT THREADING MATCHES "^(TBB|TBB_AUTO)$")
return()
endif()
set(_ov_download_tbb_done ON CACHE BOOL "Whether prebuilt TBB is already downloaded")
reset_deps_cache(TBBROOT TBB_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
@@ -109,16 +119,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f1c9b9e2861efdaa01552bd25312ccbc5feeb45551e5f91ae61e29221c5c1479")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(ANDROID) # Should be before LINUX due LINUX is detected as well
RESOLVE_DEPENDENCY(TBB
ARCHIVE_ANDROID "tbb2020_20200404_android.tgz"
@@ -131,16 +131,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "95b2f3b0b70c7376a0c7de351a355c2c514b42c4966e77e3e34271a599501008")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(LINUX AND AARCH64)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "keembay/tbb2020_38404_kmb_lic.tgz"
@@ -160,18 +150,68 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
update_deps_cache(TBBROOT "${TBB}" "Path to TBB root folder")
if(EXISTS "${TBBROOT}/lib/cmake/TBB/TBBConfig.cmake")
# oneTBB case
update_deps_cache(TBB_DIR "${TBB}/lib/cmake/TBB" "Path to TBB cmake folder")
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib64/cmake/TBB/TBBConfig.cmake")
# 64-bits oneTBB case
update_deps_cache(TBB_DIR "${TBBROOT}/lib64/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/cmake/TBBConfig.cmake")
# custom downloaded or user provided TBB
update_deps_cache(TBB_DIR "${TBBROOT}/cmake" "Path to TBB cmake folder")
else()
update_deps_cache(TBB_DIR "${TBB}/cmake" "Path to TBB cmake folder")
message(WARNING "Failed to find TBBConfig.cmake in ${TBBROOT} tree. Custom TBBConfig.cmake will be used")
endif()
debug_message(STATUS "tbb=" ${TBB})
debug_message(STATUS "tbb_dir=" ${TBB_DIR})
debug_message(STATUS "tbbroot=" ${TBBROOT})
set(TBB "${TBB}" PARENT_SCOPE)
endfunction()
## TBBBind_2_5 package
unset(_ov_download_tbbbind_2_5_done CACHE)
#
# The function downloads a static prebuilt TBBBind_2_5 package
# NOTE: the function should be called only when we have TBB with a version less than 2021
#
function(ov_download_tbbbind_2_5)
if(_ov_download_tbbbind_2_5_done OR NOT ENABLE_TBBBIND_2_5)
return()
endif()
set(_ov_download_tbbbind_2_5_done ON CACHE BOOL "Whether prebuilt TBBBind_2_5 is already downloaded")
reset_deps_cache(TBBBIND_2_5_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
set(IE_PATH_TO_DEPS "$ENV{THIRDPARTY_SERVER_PATH}")
elseif(DEFINED THIRDPARTY_SERVER_PATH)
set(IE_PATH_TO_DEPS "${THIRDPARTY_SERVER_PATH}")
endif()
if(WIN32 AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
elseif(ANDROID)
# don't have TBBBIND_2_5
elseif(LINUX AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
update_deps_cache(TBBBIND_2_5_DIR "${TBBBIND_2_5}/cmake" "Path to TBBBIND_2_5 cmake folder")
debug_message(STATUS "tbb=" ${TBB})
if(DEFINED IE_PATH_TO_DEPS)
unset(IE_PATH_TO_DEPS)
endif()
endif()
set(TBBBIND_2_5 "${TBBBIND_2_5}" PARENT_SCOPE)
endfunction()
## OpenCV
if(ENABLE_OPENCV)
@@ -265,8 +305,6 @@ else()
reset_deps_cache(OpenCV_DIR)
endif()
include(${OpenVINO_SOURCE_DIR}/src/cmake/ie_parallel.cmake)
if(ENABLE_INTEL_GNA)
reset_deps_cache(
GNA_EXT_DIR
@@ -276,8 +314,8 @@ if(ENABLE_INTEL_GNA)
GNA_LIB_DIR
libGNA_INCLUDE_DIRS
libGNA_LIBRARIES_BASE_PATH)
set(GNA_VERSION "03.00.00.1455.0")
set(GNA_HASH "99891696269d8fa10116c96e6b7bda4362736881f0df8df8b56c751ee18e5820")
set(GNA_VERSION "03.00.00.1455.2")
set(GNA_HASH "e52785d3f730fefb4e794bb7ab40c8676537ef2f7c69c5b4bb89a5d3cc0bbe60")
set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
if(WIN32)

View File

@@ -14,8 +14,8 @@ set(CMAKE_MODULE_PATH "${IEDevScripts_DIR}")
function(set_ci_build_number)
set(repo_root "${CMAKE_SOURCE_DIR}")
include(version)
foreach(var CI_BUILD_NUMBER IE_VERSION IE_VERSION_BUILD
IE_VERSION_MAJOR IE_VERSION_MINOR IE_VERSION_PATCH)
foreach(var CI_BUILD_NUMBER OpenVINO_VERSION OpenVINO_VERSION_BUILD
OpenVINO_VERSION_MAJOR OpenVINO_VERSION_MINOR OpenVINO_VERSION_PATCH)
if(NOT DEFINED ${var})
message(FATAL_ERROR "${var} version component is not defined")
endif()
@@ -186,6 +186,8 @@ endif()
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
# cmake_dependent_option() supports full Condition Syntax
set(CMAKE_POLICY_DEFAULT_CMP0127 NEW)
# Enable CMAKE_<LANG>_COMPILER_ID AppleClang
set(CMAKE_POLICY_DEFAULT_CMP0025 NEW)

View File

@@ -23,13 +23,13 @@ else()
unset(IE_OWN_TBB_CONFIG)
endif()
unset(TBB_DIR)
unset(TBB_DIR CACHE)
find_package(TBB
CONFIG
PATHS ${TBBROOT}/cmake
${TBBROOT}/lib/cmake/TBB # oneTBB case
${IEDevScripts_DIR}/${IE_OWN_TBB_CONFIG}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH
)
NO_DEFAULT_PATH)
find_package_handle_standard_args(TBB CONFIG_MODE)

View File

@@ -76,8 +76,8 @@ function(addIeTarget)
# remove unnecessary directories
foreach(excludedDir ${ARG_EXCLUDED_SOURCE_PATHS})
list(FILTER includes EXCLUDE REGEX "${excludedDir}*")
list(FILTER sources EXCLUDE REGEX "${excludedDir}*")
list(FILTER includes EXCLUDE REGEX "${excludedDir}.*")
list(FILTER sources EXCLUDE REGEX "${excludedDir}.*")
endforeach()
source_group("include" FILES ${includes})

View File

@@ -7,7 +7,7 @@ include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING; CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
@@ -79,6 +79,4 @@ if(ENABLE_AVX512F)
endif()
endif()
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()
set(CMAKE_VERBOSE_MAKEFILE ${VERBOSE_BUILD} CACHE BOOL "" FORCE)

View File

@@ -82,10 +82,11 @@ unset(protobuf_installed CACHE)
#
# ov_add_frontend(NAME <IR|ONNX|...>
# FILEDESCRIPTION <description>
# [LINKABLE_FRONTEND]
# [SKIP_INSTALL]
# [PROTOBUF_LITE]
# FILEDESCRIPTION <description> # used on Windows to describe DLL file
# [LINKABLE_FRONTEND] # whether we can use FE API directly or via FEM only
# [SKIP_INSTALL] # private frontend, not for end users
# [PROTOBUF_LITE] # requires only libprotobuf-lite
# [SKIP_NCC_STYLE] # use custom NCC rules
# [LINK_LIBRARIES <lib1 lib2 ...>])
#
macro(ov_add_frontend)
@@ -106,6 +107,17 @@ macro(ov_add_frontend)
set(FRONTEND_NAMES "${FRONTEND_NAMES}" CACHE INTERNAL "" FORCE)
file(GLOB_RECURSE LIBRARY_SRC ${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp)
if (WIN32)
# Remove linux specific files
file(GLOB_RECURSE LIN_FILES ${CMAKE_CURRENT_SOURCE_DIR}/src/os/lin/*.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/os/lin/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${LIN_FILES}")
else()
# Remove windows specific files
file(GLOB_RECURSE WIN_FILES ${CMAKE_CURRENT_SOURCE_DIR}/src/os/win/*.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/os/win/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${WIN_FILES}")
endif()
file(GLOB_RECURSE LIBRARY_HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/src/*.hpp)
file(GLOB_RECURSE LIBRARY_PUBLIC_HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/include/*.hpp)
@@ -231,7 +243,7 @@ macro(ov_add_frontend)
endif()
if(OV_FRONTEND_LINKABLE_FRONTEND)
# install -dev part
# install library development files
install(DIRECTORY ${${TARGET_NAME}_INCLUDE_DIR}/openvino
DESTINATION ${FRONTEND_INSTALL_INCLUDE}/
COMPONENT core_dev

View File

@@ -31,4 +31,8 @@ if (LINUX)
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()

View File

@@ -23,7 +23,7 @@ execute_process(
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
@@ -106,9 +106,11 @@ function(ov_ncc_naming_style)
"${NCC_STYLE_SOURCE_DIRECTORY}/*.cpp")
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it sources with same name from different directories will map to same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
foreach(source IN LISTS sources)
set(output_file "${ncc_style_bin_dir}/${source}.ncc_style")
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
add_custom_command(

View File

@@ -11,6 +11,11 @@ macro (ie_option variable description value)
list(APPEND IE_OPTIONS ${variable})
endmacro()
# Usage: ov_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
macro (ov_option variable description value)
ie_option(${variable} "${description}" ${value})
endmacro()
macro (ie_dependent_option variable description def_value condition fallback_value)
cmake_dependent_option(${variable} "${description}" ${def_value} "${condition}" ${fallback_value})
list(APPEND IE_OPTIONS ${variable})

View File

@@ -69,8 +69,8 @@ macro(ie_cpack)
endif()
foreach(ver IN LISTS MAJOR MINOR PATCH)
if(DEFINED IE_VERSION_${ver})
set(CPACK_PACKAGE_VERSION_${ver} ${IE_VERSION_${ver}})
if(DEFINED OpenVINO_VERSION_${ver})
set(CPACK_PACKAGE_VERSION_${ver} ${OpenVINO_VERSION_${ver}})
endif()
endforeach()

View File

@@ -13,8 +13,8 @@ function(ie_plugin_get_file_name target_name library_name)
set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()
if(NOT TARGET ie_plugins)
add_custom_target(ie_plugins)
if(NOT TARGET ov_plugins)
add_custom_target(ov_plugins)
endif()
#
@@ -27,11 +27,12 @@ endif()
# [OBJECT_LIBRARIES <object_libs>]
# [VERSION_DEFINES_FOR <source>]
# [SKIP_INSTALL]
# [SKIP_REGISTRATION] Skip creation of <device>.xml
# [ADD_CLANG_FORMAT]
# )
#
function(ie_add_plugin)
set(options SKIP_INSTALL ADD_CLANG_FORMAT AS_EXTENSION)
set(options SKIP_INSTALL ADD_CLANG_FORMAT AS_EXTENSION SKIP_REGISTRATION)
set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR PSEUDO_PLUGIN_FOR)
set(multiValueArgs DEFAULT_CONFIG SOURCES OBJECT_LIBRARIES CPPLINT_FILTERS)
cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
@@ -101,7 +102,7 @@ function(ie_add_plugin)
add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
add_dependencies(ie_plugins ${IE_PLUGIN_NAME})
add_dependencies(ov_plugins ${IE_PLUGIN_NAME})
if(TARGET openvino_gapi_preproc)
if(BUILD_SHARED_LIBS)
add_dependencies(${IE_PLUGIN_NAME} openvino_gapi_preproc)
@@ -146,25 +147,27 @@ function(ie_add_plugin)
endif()
endif()
# check that plugin with such name is not registered
# Enable for static build to generate correct plugins.hpp
if(NOT IE_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
# check that plugin with such name is not registered
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
# append plugin to the list to register
# append plugin to the list to register
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
endif()
endfunction()
function(ov_add_plugin)
@@ -172,13 +175,12 @@ function(ov_add_plugin)
endfunction()
#
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>
# POSSIBLE_PLUGINS <list of plugins which can be build by this repo>)
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>)
#
macro(ie_register_plugins_dynamic)
set(options)
set(oneValueArgs MAIN_TARGET)
set(multiValueArgs POSSIBLE_PLUGINS)
set(multiValueArgs)
cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_REGISTER_MAIN_TARGET)
@@ -261,6 +263,15 @@ macro(ie_register_plugins)
endif()
endmacro()
#
# ov_register_plugins()
#
macro(ov_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
endif()
endmacro()
#
# ie_target_link_plugins(<TARGET_NAME>)
#

View File

@@ -26,20 +26,22 @@ function (commitHash VAR)
set (${VAR} ${GIT_COMMIT_HASH} PARENT_SCOPE)
endfunction()
macro(ie_parse_ci_build_number)
set(IE_VERSION_BUILD 000)
macro(ov_parse_ci_build_number)
set(OpenVINO_VERSION_BUILD 000)
set(IE_VERSION_BUILD ${OpenVINO_VERSION_BUILD})
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*")
set(IE_VERSION_MAJOR ${CMAKE_MATCH_1})
set(IE_VERSION_MINOR ${CMAKE_MATCH_2})
set(IE_VERSION_PATCH ${CMAKE_MATCH_3})
set(IE_VERSION_BUILD ${CMAKE_MATCH_4})
set(OpenVINO_VERSION_MAJOR ${CMAKE_MATCH_1})
set(OpenVINO_VERSION_MINOR ${CMAKE_MATCH_2})
set(OpenVINO_VERSION_PATCH ${CMAKE_MATCH_3})
set(OpenVINO_VERSION_BUILD ${CMAKE_MATCH_4})
endif()
if(NOT DEFINED repo_root)
message(FATAL_ERROR "repo_root is not defined")
endif()
macro(ie_get_hpp_version)
macro(ov_get_hpp_version)
if(NOT DEFINED OpenVINO_SOURCE_DIR)
return()
endif()
@@ -59,11 +61,12 @@ macro(ie_parse_ci_build_number)
foreach(suffix MAJOR MINOR PATCH)
set(ie_version_name "IE_VERSION_${suffix}")
set(ov_version_name "OPENVINO_VERSION_${suffix}")
set(ov_version_name "OpenVINO_VERSION_${suffix}")
set(ov_version_name_hpp "OPENVINO_VERSION_${suffix}")
string(REGEX REPLACE ".+${ie_version_name}[ ]+([0-9]+).*" "\\1"
${ie_version_name}_HPP "${IE_VERSION_PARTS}")
string(REGEX REPLACE ".+${ov_version_name}[ ]+([0-9]+).*" "\\1"
string(REGEX REPLACE ".+${ov_version_name_hpp}[ ]+([0-9]+).*" "\\1"
${ov_version_name}_HPP "${OV_VERSION_PARTS}")
if(NOT ${ie_version_name}_HPP EQUAL ${ov_version_name}_HPP)
@@ -72,26 +75,26 @@ macro(ie_parse_ci_build_number)
endif()
endforeach()
set(ie_hpp_version_is_found ON)
set(ov_hpp_version_is_found ON)
endmacro()
# detect OpenVINO version via ie_version.hpp
ie_get_hpp_version()
# detect OpenVINO version via openvino/core/version.hpp and ie_version.hpp
ov_get_hpp_version()
if(ie_hpp_version_is_found)
foreach(var IE_VERSION_MAJOR IE_VERSION_MINOR IE_VERSION_PATCH)
if(ov_hpp_version_is_found)
foreach(var OpenVINO_VERSION_MAJOR OpenVINO_VERSION_MINOR OpenVINO_VERSION_PATCH)
if(DEFINED ${var} AND NOT ${var} EQUAL ${var}_HPP)
message(FATAL_ERROR "${var} parsed from CI_BUILD_NUMBER (${${var}}) \
and from ie_version.hpp (${${var}_HPP}) are different")
and from openvino/core/version.hpp (${${var}_HPP}) are different")
else()
# CI_BUILD_NUMBER is not defined well, take info from ie_verison.hpp as a baseline
# CI_BUILD_NUMBER is not defined well, take info from openvino/core/version.hpp as a baseline
set(${var} ${${var}_HPP})
endif()
endforeach()
endif()
set(IE_VERSION "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${IE_VERSION}")
set(OpenVINO_VERSION "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${OpenVINO_VERSION} (Build ${OpenVINO_VERSION_BUILD})")
endmacro()
if (DEFINED ENV{CI_BUILD_NUMBER})
@@ -104,10 +107,10 @@ else()
set(CI_BUILD_NUMBER "${custom_build}")
endif()
# provides Inference Engine version
# provides OpenVINO version
# 1. If CI_BUILD_NUMBER is defined, parses this information
# 2. Otherwise, parses ie_version.hpp
ie_parse_ci_build_number()
# 2. Otherwise, parses openvino/core/version.hpp
ov_parse_ci_build_number()
macro (addVersionDefines FILE)
set(__version_file ${FILE})

View File

@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(IE_VS_VER_FILEVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_FILEVERSION_STR "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}.0")
set(IE_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")


@@ -6,15 +6,19 @@ function(ie_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
set(all_dev_targets gflags ov_runtime_libraries)
foreach(component IN LISTS openvino_export_components)
# export all targets with prefix and use them during extra modules build
export(TARGETS ${${component}} NAMESPACE IE::
APPEND FILE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
APPEND FILE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
list(APPEND all_dev_targets ${${component}})
endforeach()
add_custom_target(ie_dev_targets DEPENDS ${all_dev_targets})
# if we've found system gflags
if(gflags_DIR)
set(gflags_BINARY_DIR "${gflags_DIR}")
endif()
configure_package_config_file("${OpenVINO_SOURCE_DIR}/cmake/templates/InferenceEngineDeveloperPackageConfig.cmake.in"
"${CMAKE_BINARY_DIR}/InferenceEngineDeveloperPackageConfig.cmake"
INSTALL_DESTINATION share # not used
@@ -30,18 +34,22 @@ function(ov_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
set(all_dev_targets gflags ov_runtime_libraries)
foreach(component IN LISTS openvino_export_components)
string(FIND "${component}" "_legacy" index)
if (index EQUAL -1)
if(index EQUAL -1)
# export all targets with prefix and use them during extra modules build
export(TARGETS ${${component}} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
APPEND FILE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
list(APPEND all_dev_targets ${${component}})
endif()
endforeach()
add_custom_target(ov_dev_targets DEPENDS ${all_dev_targets})
# if we've found system gflags
if(gflags_DIR)
set(gflags_BINARY_DIR "${gflags_DIR}")
endif()
configure_package_config_file("${OpenVINO_SOURCE_DIR}/cmake/templates/OpenVINODeveloperPackageConfig.cmake.in"
"${CMAKE_BINARY_DIR}/OpenVINODeveloperPackageConfig.cmake"
INSTALL_DESTINATION share # not used
@@ -59,14 +67,14 @@ endfunction()
function(register_extra_modules)
# post export
openvino_developer_export_targets(COMPONENT core TARGETS inference_engine)
openvino_developer_export_targets(COMPONENT core TARGETS ngraph)
openvino_developer_export_targets(COMPONENT core_legacy TARGETS inference_engine)
openvino_developer_export_targets(COMPONENT core_legacy TARGETS ngraph)
set(InferenceEngineDeveloperPackage_DIR "${CMAKE_CURRENT_BINARY_DIR}/runtime")
set(OpenVINODeveloperPackage_DIR "${CMAKE_BINARY_DIR}/runtime")
function(generate_fake_dev_package NS)
if (NS STREQUAL "openvino")
if(NS STREQUAL "openvino")
set(devconfig_file "${OpenVINODeveloperPackage_DIR}/OpenVINODeveloperPackageConfig.cmake")
else()
set(devconfig_file "${InferenceEngineDeveloperPackage_DIR}/InferenceEngineDeveloperPackageConfig.cmake")
@@ -81,10 +89,6 @@ function(register_extra_modules)
file(APPEND "${devconfig_file}" "add_library(${NS}::${target} ALIAS ${target})\n")
endif()
endforeach()
if ("${NS}" STREQUAL "openvino")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime ALIAS openvino)\n")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime::dev ALIAS openvino_dev)\n")
endif()
endfunction()
generate_fake_dev_package("openvino")
@@ -137,7 +141,7 @@ ie_generate_dev_package_config()
ov_generate_dev_package_config()
# extra modules must be registered after inference_engine library
# and all other IE common libraries (ov_runtime_libraries) are created
# and all other OpenVINO Core libraries are created
# because 'register_extra_modules' creates fake InferenceEngineDeveloperPackageConfig.cmake
# with all imported developer targets
register_extra_modules()


@@ -59,7 +59,7 @@ cmake_dependent_option (ENABLE_WHEEL "Build wheel packages for PyPi" OFF
# Inference Engine specific options
#
# "MKL-DNN library based on OMP or TBB or Sequential implementation: TBB|OMP|SEQ"
# "OneDNN library based on OMP or TBB or Sequential implementation: TBB|OMP|SEQ"
if(X86 OR ARM OR (MSVC AND (ARM OR AARCH64)) )
set(THREADING_DEFAULT "SEQ")
else()
@@ -82,7 +82,7 @@ else()
set(ENABLE_TBBBIND_2_5_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ON "ENABLE_TBBBIND_2_5_DEFAULT" OFF)
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for inference engine" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
@@ -136,6 +136,17 @@ ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are link
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
get_linux_name(LINUX_OS_NAME)
if(LINUX_OS_NAME MATCHES "^Ubuntu [0-9]+\.[0-9]+$" AND NOT DEFINED ENV{TBBROOT})
# Debian packages are enabled on Ubuntu systems
# so, system TBB can be tried for usage
set(ENABLE_SYSTEM_TBB_DEFAULT ON)
else()
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB;LINUX" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)


@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(PACKAGE_VERSION_MAJOR @IE_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @IE_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @IE_VERSION_PATCH@)
set(PACKAGE_VERSION_MAJOR @OpenVINO_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @OpenVINO_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @OpenVINO_VERSION_PATCH@)
set(PACKAGE_VERSION "${PACKAGE_VERSION_MAJOR}.${PACKAGE_VERSION_MINOR}.${PACKAGE_VERSION_PATCH}")
set(PACKAGE_VERSION_EXACT False)


@@ -12,19 +12,20 @@ set_and_check(OpenVINO_MAIN_SOURCE_DIR "@OpenVINO_SOURCE_DIR@") # KMB
# Variables to export in plugin's projects
set(ie_options "@IE_OPTIONS@;CMAKE_BUILD_TYPE;CMAKE_SKIP_RPATH")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER)
set(ie_options "@IE_OPTIONS@")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from Inference Engine Developer package")
message("")
message(" ")
foreach(option IN LISTS ie_options)
if(NOT DEFINED "${option}")
load_cache("${cache_path}" READ_WITH_PREFIX "" ${option})
endif()
message(" ${option}: ${${option}}")
endforeach()
message("")
message(" ")
# for samples in 3rd party projects
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
@@ -48,11 +49,6 @@ find_dependency(ngraph
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
find_dependency(OpenVINODeveloperPackage
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
if(TARGET openvino::runtime AND NOT TARGET IE::runtime)
add_library(IE::runtime INTERFACE IMPORTED)
set_target_properties(IE::runtime PROPERTIES
@@ -70,6 +66,18 @@ foreach(component @openvino_export_components@)
include("${CMAKE_CURRENT_LIST_DIR}/${component}_dev_targets.cmake")
endforeach()
if(TARGET IE::ov_core_dev AND NOT TARGET openvino::core::dev)
add_library(openvino::core::dev INTERFACE IMPORTED)
set_target_properties(openvino::core::dev PROPERTIES
INTERFACE_LINK_LIBRARIES IE::ov_core_dev)
endif()
if(TARGET IE::runtime::dev AND NOT TARGET openvino::runtime::dev)
add_library(openvino::runtime::dev INTERFACE IMPORTED)
set_target_properties(openvino::runtime::dev PROPERTIES
INTERFACE_LINK_LIBRARIES IE::runtime::dev)
endif()
if(ENABLE_SYSTEM_PUGIXML)
find_dependency(PugiXML)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
@@ -86,13 +94,11 @@ endif()
# Extra Compile Flags
#
if(NOT MSVC)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
endif()


@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(PACKAGE_VERSION_MAJOR @IE_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @IE_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @IE_VERSION_PATCH@)
set(PACKAGE_VERSION_MAJOR @OpenVINO_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @OpenVINO_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @OpenVINO_VERSION_PATCH@)
set(PACKAGE_VERSION "${PACKAGE_VERSION_MAJOR}.${PACKAGE_VERSION_MINOR}.${PACKAGE_VERSION_PATCH}")
set(PACKAGE_VERSION_EXACT False)


@@ -12,6 +12,7 @@
# * `Runtime`: OpenVINO C++ and C Core & Inference Runtime, frontend common
# * `ONNX`: OpenVINO ONNX frontend
# * `Paddle`: OpenVINO Paddle frontend
# * `TensorFlow`: OpenVINO TensorFlow frontend
#
# If no components are specified, `Runtime` component is provided:
#
@@ -146,14 +147,29 @@ set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
if(NOT enable_system_tbb)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
set(find_package_tbb_extra_args
CONFIG
PATHS
# oneTBB case exposed via export TBBROOT=<custom TBB root>
"$ENV{TBBROOT}/lib64/cmake/TBB"
"$ENV{TBBROOT}/lib/cmake/TBB"
# "$ENV{TBB_DIR}"
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
"${TBBROOT}/cmake"
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(_tbb_dir)
endif()
unset(enable_system_tbb)
_ov_find_dependency(TBB
COMPONENTS tbb tbbmalloc
CONFIG
PATHS ${TBBROOT}/cmake
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
${find_package_tbb_extra_args})
set(install_tbbbind "@install_tbbbind@")
if(install_tbbbind)
@@ -164,6 +180,7 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
endif()
unset(install_tbbbind)
endif()
_ov_find_dependency(Threads)
@@ -175,7 +192,7 @@ if(ENABLE_INTEL_GNA AND NOT ENABLE_INTEL_GNA_SHARED AND NOT libGNA_FOUND)
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS ${CMAKE_CURRENT_LIST_DIR}
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
endif()
@@ -184,7 +201,8 @@ if(NOT TARGET openvino)
set(_ov_as_external_package ON)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
# TODO: WA for cmake version < 3.16
# WA for cmake version < 3.16 which does not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
foreach (type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
set_property(TARGET openvino::runtime APPEND PROPERTY IMPORTED_LINK_DEPENDENT_LIBRARIES_${type} "TBB::tbb;TBB::tbbmalloc")


@@ -10,19 +10,20 @@ set_and_check(OpenVINO_SOURCE_DIR "@OpenVINO_SOURCE_DIR@")
# Variables to export in plugin's projects
set(ie_options "@IE_OPTIONS@;CMAKE_BUILD_TYPE;CMAKE_SKIP_RPATH")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER)
set(ov_options "@IE_OPTIONS@")
list(APPEND ov_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from OpenVINO Developer package")
message("")
foreach(option IN LISTS ie_options)
message(" ")
foreach(option IN LISTS ov_options)
if(NOT DEFINED "${option}")
load_cache("${cache_path}" READ_WITH_PREFIX "" ${option})
endif()
message(" ${option}: ${${option}}")
endforeach()
message("")
message(" ")
# for samples in 3rd party projects
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
@@ -51,10 +52,10 @@ endforeach()
if(ENABLE_SYSTEM_PUGIXML)
find_dependency(PugiXML)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(IE::pugixml ALIAS pugixml)
add_library(openvino::pugixml ALIAS pugixml)
endif()
# inherit OpenCV from main IE project if enabled
# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)
find_dependency(OpenCV)
@@ -64,13 +65,11 @@ endif()
# Extra Compile Flags
#
if(NOT MSVC)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
endif()


@@ -46,6 +46,7 @@ endif()
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check dir.")
set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(OTE_DOCS_DIR "" CACHE PATH "Path to training_extensions documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")
@@ -159,6 +160,15 @@ function(build_docs)
--output_dir=${DOCS_BUILD_DIR}/workbench)
endif()
# ote doc files
if(EXISTS "${OTE_DOCS_DIR}")
get_filename_component(OTE_DOCS_DIR "${OTE_DOCS_DIR}" ABSOLUTE)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OTE_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/ote)
endif()
# ovms doc files
if(EXISTS "${OVMS_DOCS_DIR}")
get_filename_component(OVMS_DOCS_DIR "${OVMS_DOCS_DIR}" ABSOLUTE)


@@ -264,6 +264,10 @@ TAB_SIZE = 4
ALIASES = "ref_ie{1}=@ref InferenceEngine::\1 \"\1\""
ALIASES += sphinxdirective="\n\xmlonly<sphinxdirective>"
ALIASES += endsphinxdirective="</sphinxdirective>\endxmlonly"
ALIASES += sphinxtabset="\n\xmlonly<sphinxtabset></sphinxtabset>\endxmlonly\n"
ALIASES += endsphinxtabset="\n\xmlonly<endsphinxtabset></endsphinxtabset>\endxmlonly\n"
ALIASES += sphinxtab{1}="\n\xmlonly<sphinxtab>\1</sphinxtab>\endxmlonly\n"
ALIASES += endsphinxtab="\n\xmlonly<endsphinxtab></endsphinxtab>\endxmlonly\n"
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For


@@ -0,0 +1,239 @@
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
To enable operations not supported by OpenVINO out of the box, you may need an extension for an OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/gpu/custom_kernels_api.cpp part0
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/gpu/custom_kernels_api.py part0
@endsphinxtab
@endsphinxtabset
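For readers viewing this page without the rendered snippets, here is a minimal C++ sketch of the call described above. It is not the verbatim `custom_kernels_api.cpp` snippet; the file paths mirror the placeholders used elsewhere on this page.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Point the GPU plugin at the custom kernel configuration file before the
    // model that uses the custom operations is read and compiled.
    core.set_property("GPU", {{"CONFIG_FILE", "<absolute_path_to_config>/custom_layer_example.xml"}});
    auto model = core.read_model("<path_to_model>/model_with_custom_ops.xml");
    auto compiled_model = core.compile_model(model, "GPU");
    return 0;
}
```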
All OpenVINO samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU \
 -c <absolute_path_to_config>/custom_layer_example.xml
```
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the `CustomLayer` type for every custom operation you provide.
The definitions described in the sections below use the following notations:
Notation | Description
---|---
(0/1) | Can have zero or one instance of this node or attribute
(1) | Must have only one instance of this node or attribute
(0+) | Can have any number of instances of this node or attribute
(1+) | Can have one or more instances of this node or attribute
### CustomLayer Node and Sub-Node Structure
`CustomLayer` node contains the entire configuration for a single custom operation.
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
### Kernel Node and Sub-Node Structure
`Kernel` node contains all kernel source code configuration.
**Sub-nodes**: `Source` (1+), `Define` (0+)
### Source Node and Sub-Node Structure
`Source` node points to a single OpenCL source file.
| Attribute Name | \# |Description|
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. Note that the path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
**Sub-nodes**: None
### Define Node and Sub-Node Structure
`Define` node configures a single `#&zwj;define` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the IR. |
**Sub-nodes:** None
The resulting JIT has the following form:
`#&zwj;define [name] [type] [value/default]`.
### Buffers Node and Sub-Node Structure
`Buffers` node configures all input/output buffers for the OpenCL entry
function. No buffers node structure exists.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
### Data Node and Sub-Node Structure
`Data` node configures a single input with static data, for example,
weights or biases.
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to an operation in the IR |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |
**Sub-nodes**: None
### Tensor Node and Sub-Node Structure
`Tensor` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB`, and same values in all lowercase. Default value: `BFYX` |
### CompilerOptions Node and Sub-Node Structure
`CompilerOptions` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
|--------|-----|------|
| `options` | (1) | Options string to be passed to the OpenCL compiler |
**Sub-nodes**: None
### WorkSizes Node and Sub-Node Structure
`WorkSizes` node configures the global/local work sizes to be used when
queuing an OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global="B*F*Y*X" local=""` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. Default value: `output` |
**Sub-nodes**: None
## Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">
<Source filename="custom_layer_kernel.cl"/>
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
</Kernel>
<Buffers>
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
</Buffers>
<CompilerOptions options="-cl-mad-enable"/>
<WorkSizes global="X,Y,B*F"/>
</CustomLayer>
```
## Built-In Definitions for Custom Layers
The following table includes definitions that are attached before
user sources.
For an example, see [Example Kernel](#example-kernel).
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX` |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`|
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor, BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#&zwj;ifdef/#&zwj;endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.|
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
All `<TENSOR>` values are automatically defined for every tensor
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
in the following example:
```c
#define INPUT0_DIMS_SIZE 4
#define INPUT0_DIMS (int []){ 1,96,55,55, }
```
## Example Kernel<a name="example-kernel"></a>
```c
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
__kernel void example_relu_kernel(
const __global INPUT0_TYPE* input0,
__global OUTPUT0_TYPE* output)
{
const uint idx = get_global_id(0);
const uint idy = get_global_id(1);
const uint idbf = get_global_id(2); // batches*features, as OpenCL supports 3D nd-ranges only
const uint feature = idbf % OUTPUT0_DIMS[1];
const uint batch = idbf / OUTPUT0_DIMS[1];
//notice that pitches are in elements, not in bytes!
const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;
INPUT0_TYPE value = input0[in_id];
// neg_slope (which is non-zero for leaky ReLU) is put automatically as #define, refer to the config xml
output[out_id] = value < 0 ? value * neg_slope : value;
}
```
> **NOTE**: As described in the previous section, all items like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> OpenVINO for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
## Debugging Tips<a name="debugging-tips"></a>
* **Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, you can use `printf` in your kernels.
However, be careful not to output excessively, which
could generate too much data. The `printf` output is buffered, so
your output can be truncated to fit the buffer. Also, because of
buffering, you actually get the entire buffer of output only when the
execution ends.<br>
For more information, refer to the [printf
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
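As an illustration only (not part of the original sample), here is a minimal sketch of guarded `printf` debugging; it assumes the same JIT definitions (`INPUT0_TYPE`, `OUTPUT0_TYPE`, `INPUT0_OFFSET`) that OpenVINO injects for the kernel above, and the kernel body is a placeholder.
```c
__kernel void example_relu_kernel_dbg(
    const __global INPUT0_TYPE* input0,
    __global OUTPUT0_TYPE* output)
{
    const uint idx = get_global_id(0);
    // Print from a single work item only, so the limited printf buffer is not flooded.
    if (get_global_id(0) == 0 && get_global_id(1) == 0 && get_global_id(2) == 0) {
        printf("first input element: %f\n", (float)input0[INPUT0_OFFSET]);
    }
    output[idx] = input0[idx]; // placeholder computation; a real kernel keeps its normal body
}
```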


@@ -1,4 +1,4 @@
# OpenVINO Extensibility Mechanism {#openvino_docs_Extensibility_UG_Intro}
@sphinxdirective
@@ -7,36 +7,68 @@
:hidden:
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
@endsphinxdirective
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks including
TensorFlow, Caffe, MXNet, Kaldi, PaddlePaddle, and ONNX. The list of supported operations (layers) is different for
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, that is those not included in the list, are not recognized by OpenVINO™ out-of-the-box. Therefore, creating Intermediate Representation (IR) for a model using them requires additional steps. This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to plug in your own implementation for existing or completely new operations.
Custom operations, that is those not included in the list, are not recognized by OpenVINO™ out-of-the-box. The need for a custom operation may appear in two main cases:
If your model contains operations not normally supported by OpenVINO™, the OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
1. A regular framework operation that is new or rarely used, which is why it hasn't been implemented in OpenVINO yet.
There are two steps to support inference of a model with custom operation(s):
1. Add support for a [custom operation in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) so
the Model Optimizer can generate the IR with the operation.
2. Create a custom operation in it as described in the [Custom Operation](add_openvino_ops.md).
2. A new user operation that was created for some specific model topology by a model author using framework extension capabilities.
## OpenVINO™ Extensions
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
OpenVINO™ provides extensions for:
Defining a new custom operation basically consists of two parts:
* [Custom OpenVINO™ Operation](add_openvino_ops.md):
- Enables the creation of unsupported operations
- Enables the use of `ov::Core::read_model` to read models with unsupported operations
- Provides a shape inference mechanism for custom operations
- Provides an evaluate method which allows supporting the operation on CPU or performing constant folding
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). How to implement execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details. You can review the complete code, which is fully compilable and up-to-date, to see how it works.
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
## Load extensions to OpenVINO™ Runtime
The first part is required for inference, while the second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part; the next sections describe them in detail.
## Definition of Operation Semantics
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such a decomposition gives the desired performance, then a low-level operation implementation is not required. When deciding on the feasibility of such a decomposition, refer to the latest OpenVINO operation set. You can use any valid combination of existing operations. How to map a custom operation is described in the next section of this document.
If such a decomposition is not possible, or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented as described in the [Custom Operation Guide](add_openvino_ops.md).
Prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first as described above, and then, after verifying correctness of inference and the resulting performance, optionally invest in a bare-metal C++ implementation.
## Mapping from Framework Operation
Depending on the model format used for import, mapping of a custom operation is implemented differently; choose one of the following:
1. If the model is represented in the ONNX (including models exported from PyTorch to ONNX) or PaddlePaddle format, then one of the classes from the [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the Model Optimizer `--extensions` option or when the model is imported directly to OpenVINO runtime using the `read_model` method. A Python API is also available for runtime model importing.
2. If the model is represented in the TensorFlow, Caffe, Kaldi, or MXNet format, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
The existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi, and MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on the framework frontend.
If you are implementing extensions for the ONNX or PaddlePaddle new frontends and plan to use the Model Optimizer `--extension` option for model conversion, then the extensions should be:
1. Implemented in C++ only.
2. Compiled as a separate shared library (see details on how to do that later in this guide).
You cannot write new frontend extensions using the Python API if you plan to use them with Model Optimizer.
The remaining part of this guide uses the Frontend Extension API applicable for new frontends.
## Registering Extensions
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on the minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
To load the extensions to the `ov::Core` object, use the `ov::Core::add_extension` method; it can load a library with extensions or register extensions defined in code.
@@ -44,27 +76,50 @@ To load the extensions to the `ov::Core` object, use the `ov::Core::add_extensio
Extensions can be loaded from code with `ov::Core::add_extension` method:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_extensions.cpp add_extension
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_extensions.py add_extension
@endsphinxtab
@endsphinxtabset
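Since the snippet above is included by reference, here is a minimal C++ sketch of registering the operation from code. It assumes the `TemplateExtension::Identity` class (and its header) from the Template extension referenced in the note above; it is not the verbatim `ov_extensions.cpp` snippet.

```cpp
#include <openvino/openvino.hpp>
// #include "identity.hpp"  // header that declares TemplateExtension::Identity (path depends on your project)

int main() {
    ov::Core core;
    // Register the custom operation type so that ov::Core::read_model can resolve it in the IR.
    core.add_extension<TemplateExtension::Identity>();
    return 0;
}
```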
`Identity` is the custom operation class defined in the [Custom Operation Guide](add_openvino_ops.md). This is enough to enable reading an IR which uses the `Identity` extension operation emitted by Model Optimizer. To be able to load the original model directly to the runtime, you also need to add a mapping extension:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: add_extension
:fragment: add_frontend_extension
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: add_extension
:fragment: add_frontend_extension
@endsphinxdirective
When the Python API is used, there is no way to implement a custom OpenVINO operation. Also, even if a custom OpenVINO operation is implemented in C++ and loaded to the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. Use the C++ shared library approach to implement both operation semantics and framework mapping in this case.
You can still use Python for operation mapping and decomposition when only operations from the standard OpenVINO operation set are used.
### Create library with extensions
You need to create extension library in following cases:
- Load extensions to Model Optimizer
- Load extensions to Python application
You need to create an extension library in the following cases:
- Convert a model with custom operations in Model Optimizer
- Load a model with custom operations in a Python application. This is applicable for both the framework model and the IR.
- Load models with custom operations in tools that support loading extensions from a library, for example `benchmark_app`.
If you want to create an extension library, for example in order to load these extensions to the Model Optimizer, you need to do the following steps:
Create an entry point for the extension library. OpenVINO™ provides an `OPENVINO_CREATE_EXTENSIONS()` macro, which allows you to define an entry point to a library with OpenVINO™ Extensions.
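As a hedged illustration (not the verbatim Template extension code), an entry point defined with this macro might look like the following; the include paths and the `TemplateExtension::Identity` class are assumptions based on the Template extension mentioned above.

```cpp
#include <openvino/core/extension.hpp>
#include <openvino/core/op_extension.hpp>
#include <openvino/frontend/extension.hpp>
// #include "identity.hpp"  // declares TemplateExtension::Identity

// The macro defines the entry point that tools loading the shared library look up.
OPENVINO_CREATE_EXTENSIONS(
    std::vector<ov::Extension::Ptr>({
        // expose the custom operation itself
        std::make_shared<ov::OpExtension<TemplateExtension::Identity>>(),
        // and the framework mapping for it
        std::make_shared<ov::frontend::OpExtension<TemplateExtension::Identity>>()
    }));
```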
@@ -92,24 +147,25 @@ $ cmake --build .
After the build you can use path to your extension library to load your extensions to OpenVINO™ Runtime:
@sphinxdirective
@sphinxtabset
.. tab:: C++
@sphinxtab{C++}
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: add_extension_lib
@snippet docs/snippets/ov_extensions.cpp add_extension_lib
.. tab:: Python
@endsphinxtab
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: add_extension_lib
@sphinxtab{Python}
@endsphinxdirective
@snippet docs/snippets/ov_extensions.py add_extension_lib
@endsphinxtab
@endsphinxtabset
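A minimal C++ sketch of that call, assuming a library built as described above; the library file name and model path are illustrative only.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Load the compiled extension library; the custom operations it registers become
    // visible to read_model for both IR and original framework formats.
    core.add_extension("libopenvino_template_extension.so");
    auto model = core.read_model("model_with_custom_ops.xml");
    return 0;
}
```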
## See Also
* [OpenVINO Transformations](./ov_transformations.md)
* [Using Inference Engine Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)


@@ -0,0 +1,619 @@
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTES:**
> * OpenCL\* custom layer support is available in the preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, carry out the tasks described on this page:
1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
2. Write a configuration file to bind the OpenCL kernel to the topology file (`.xml`) of the model IR.
3. Pass the configuration file to the OpenVINO™ Runtime with the model IR.
## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)
> **NOTE**: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta* and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written assuming OpenCL version 1.2. It also supports half float extension and is optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
* `SHAVE_MA2X8XLIBS_DIR=<INSTALL_DIR>/tools/cl_compiler/lib/`
* `SHAVE_LDSCRIPT_DIR=<INSTALL_DIR>/tools/cl_compiler/ldscripts/`
* `SHAVE_MYRIAD_LD_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
* `SHAVE_MOVIASM_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
2. Run the compilation with the command below. You should use `--strip-binary-header` to make an OpenCL runtime-agnostic binary runnable with the OpenVINO™ Runtime.
```bash
cd <INSTALL_DIR>/tools/cl_compiler/bin
./clc --strip-binary-header custom_layer.cl -o custom_layer.bin
```
## Write a Configuration File
To tie your kernel to the topology IR for the layer you customize, prepare a configuration file so that the OpenVINO™ Runtime can find the parameters for your kernel and the description of the execution work grid.
For example, consider the following OpenCL kernel signature:
```cpp
__kernel void reorg_nhwc(__global const half *src, __global half *out, int w, int h, int c, int stride);
```
A configuration file for this kernel might be the following:
```xml
<CustomLayer name="ReorgYolo" type="MVCL" version="1">
<Kernel entry="reorg_nhwc">
<Source filename="reorg.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BYXF"/>
<Tensor arg-name="out" type="output" port-index="0" format="BYXF"/>
<Scalar arg-name="w" type="int" port-index="0" source="I.X" />
<Scalar arg-name="h" type="int" port-index="0" source="I.Y" />
<Scalar arg-name="c" type="int" port-index="0" source="I.F" />
<Scalar arg-name="stride" type="int" source="stride" />
</Parameters>
<WorkSizes dim="input,0" global="(Y+7)/8*8,1,1" local="8,1,1"/>
</CustomLayer>
```
Each custom layer is described with the `CustomLayer` node. It has the following nodes and attributes:
- Root node `CustomLayer` contains the following attributes:
- `name` (Required) The name of the OpenVINO™ Runtime layer to bind the kernel with.
- `type` and `version` (Required) Reserved for future use. Set them to `MVCL` and `1` respectively.
- `max-shaves` (Optional) The maximum number of SHAVE cores that should be dedicated for the layer. It is useful for debugging concurrency issues, or for saving resources when a memory-bound kernel does not scale well with the number of cores, so that more resources can be left for the rest of the topology.
- Sub-node `Kernel` must contain the following attributes:
- `entry` The name of your kernel function as you defined it in a source file. In the example above, it is `reorg_nhwc`.
- Node `Source` must contain the following attributes:
- `filename` The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows to customize bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding xml.
Parameter description supports `Tensor` of one of tensor types such as `input`, `output`, `input_buffer`, `output_buffer` or `data`, `Scalar`, or `Data` nodes and has the following format:
- Each `Tensor` node of `input` or `output` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input` or `output` as specified in the IR.
- `port-index` A number of input/output ports as specified in the IR.
- `format` The channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers. `BFYX`, `BYXF`, and `ANY` formats are supported currently.
- Each `Tensor` node of `input_buffer` or `output_buffer` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants and may be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
<CustomLayer name="MVN" stage="0" type="MVCL" version="1">
<Kernel entry="reduction_mean">
<Source filename="mvn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="mean" type="output_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="variance" type="output_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
<CustomLayer name="MVN" stage="1" type="MVCL" version="1">
<Kernel entry="mvn_scale">
<Source filename="mvn_scale_changed_orded.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Tensor arg-name="mean_part" type="input_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="power_mean" type="input_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the type `data` must contain the following attributes:
- `source` A name of the blob as it is in the IR. Typical example is `weights` for convolution.
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers.
```xml
<CustomLayer name="BinaryConvolution" type="MVCL" version="1">
<Kernel entry="binary_convolution">
<Source filename="binary_layers.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Data arg-name="weights_data" type="data" source="weights" format="ANY"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="X,Y,F" local="1,1,1"/>
</CustomLayer>
```
- Each `Scalar` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` `int` or `float` value. It is used for correct argument extraction from IR parameters.
- `source` Contains the name of the parameter in the IR file or input/output (`I`/`O`, `In`/`On`, where `n` is a port number)
followed by dimension `B`(batch), `Y`(height), `X`(width), or `F`(channels).
- Each `Data` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines buffer allocated in fast local on-chip memory. It is limited to 100KB for all `__local` and
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. Note that a manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants and may be extended in the future.
The example binding below illustrates a kernel with two local buffers passed to the kernel.
```xml
<CustomLayer name="GRN" type="MVCL" version="1">
<Kernel entry="grn_NCHW">
<Source filename="grn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Data arg-name="src" type="local_data" dim="input,0" size="X*F*2" />
<Data arg-name="dst" type="local_data" dim="input,0" size="X*F*2" />
<Scalar arg-name="C" type="int" port-index="0" source="I.F" />
<Scalar arg-name="bias" type="float" source="bias" />
</Parameters>
<WorkSizes dim="input,0" global="X,Y,1" local="X,1,1"/>
</CustomLayer>
```
## Pass Configuration File to OpenVINO™ Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it into the plugin using the ov::Core::set_property() method with the "CONFIG_KEY" key and the configuration file name as the value:
@snippet docs/snippets/vpu/custom_op.cpp part0
## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge of the general OpenCL
programming model and the OpenCL kernel language is assumed and is not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
| Device code | Executed on SHAVE cores |
| Private memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Local memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
Note that by the OpenCL specification, the work group execution order is not specified. This means that it is your
responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime splits the
work grid evenly among available compute resources and executes the work groups in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for Deep Learning kernels. The following guidelines are recommended for work group partitioning:
1. Split work evenly across work groups.
2. Adjust work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of the topology. It is also useful if the kernel scalability has reached its limits, which may happen while optimizing memory-bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFYX`/`BYXF`) for the kernel if it improves work group partitioning or data access patterns.
Consider not just the boost to a specific layer, but the full topology performance, because data conversion layers would be automatically inserted
as appropriate.
Offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage, if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bias)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism
(SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code
patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting `restrict` where possible.
- This can give a performance boost, especially for kernels with unrolling, like `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized codes. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the most optimal one, which combines unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` to your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to
annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay
attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that
still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unrolling factor of 4 is the optimum. For Intel® Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
int x = get_global_id(0);
int W = get_global_size(0);
int y = get_global_id(1);
int H = get_global_size(1);
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);
variance = 1.f / native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, you can compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
int y = get_global_id(1);
int H = get_global_size(1);
for (int x = 0; x < W/8; x++)
{
float8 variance = (float8)(bias+1e-9f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
half8 sh = src_line[x];
variance += convert_float8(sh*sh);
}
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
__global half8* restrict dst_line = ((__global half8 * restrict)(dst_data + c*H*W + y*W));
dst_line[x] = convert_half8(convert_float8(src_line[x])*variance);
}
}
for (int x = W/8*8; x < W; x++)
{
float variance = bias+1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x]*src_data[c*H*W + y*W + x]);
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (float)src_data[c*H*W + y*W + x]*variance;
}
}
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, you can also use the `reqd_work_group_size` kernel attribute to ask the compiler
to unroll the code up to the local size of the work group. Note that if the kernel is actually executed with a
different work group configuration, the result is undefined.
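For illustration, a minimal sketch of the attribute syntax; the kernel and the `(8, 1, 1)` local size are assumptions, not part of the original samples.
```cpp
__attribute__((reqd_work_group_size(8, 1, 1)))  // must match the local size the kernel is launched with
__kernel void copy_line(__global const half* restrict src, __global half* restrict dst)
{
    const int i = get_global_id(0);
    dst[i] = src[i];
}
```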
4. Prefer to use `half` compute if it keeps reasonable accuracy. 16-bit float is a native type for Intel® Neural Compute Stick 2; most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` functions for the rest of the types.
5. Prefer to use the `convert_half` function over `vstore_half` if conversion to 32-bit float is required. `convert_half` is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bias);` is eight times slower than the code with `vstore_half`.
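A small hedged sketch contrasting the two forms of the conversion; the kernel and parameter names are illustrative and not taken from the original samples.
```cpp
__kernel void f32_to_f16(__global const float* restrict in, __global half* restrict out,
                         float scale, float bias)
{
    const int idx = get_global_id(0);
    // store-with-conversion form:
    // vstore_half(in[idx] * scale + bias, idx, out);
    // convert_half form, which maps to a single hardware instruction on this device:
    out[idx] = convert_half(in[idx] * scale + bias);
}
```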
6. Mind early exits. Early exit can be extremely costly for the current version of the `clc` compiler due to conflicts with the
auto-vectorizer. The generic advice is to set up the local size along the `x` dimension equal to the input and/or output width.
If it is impossible to define a work grid that exactly matches the inputs and/or outputs to eliminate checks, for example,
`if (get_global_id(0) >= width) return`, use a line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int stride)
{
int w = get_global_id(0);
int W = get_global_size(0);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This `reorg` kernel is auto-vectorizable, but the input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (`8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare performance of the auto-vectorized and scalar versions of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about a 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width, for example, 32, and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimensions, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
{
int w = get_global_id(0);
w = min(w, W-1);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial (scalar) kernel above due to branching overhead. If you replace the min/max expression `w = min(w, W-1);` with `if (w >= W) return;`, the runtime increases up to 2x compared with the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c = get_global_id(1);
int C = get_global_size(1);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
for (int w = 0; w < W; ++w)
{
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
}
```
This decreases the execution time by up to 40% compared with the best-performing vectorized kernel without early exits (initial version).
7. Reuse computations among work items by using line-based kernels or by sharing values through `__local` memory.
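A minimal sketch of sharing a value through `__local` memory (hypothetical kernel, assuming one work group per output row):
```cpp
// One work item loads the per-row scale into __local memory; the whole work
// group reuses it after the barrier instead of re-reading __global memory.
__kernel void scale_rows_shared(__global const half* restrict src,
                                __global half* restrict dst,
                                __global const half* restrict row_scale)
{
    __local half scale;
    if (get_local_id(0) == 0)
        scale = row_scale[get_global_id(1)];
    barrier(CLK_LOCAL_MEM_FENCE);
    int idx = get_global_id(1) * get_global_size(0) + get_global_id(0);
    dst[idx] = src[idx] * scale;
}
```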
8. Improve data access locality. Most custom kernels are memory bound, while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c2 = get_global_id(1);
int C2 = get_global_size(1);
int C = C2*stride*stride;
int H2 = H*stride;
int W2 = W*stride;
for (int stride_y = 0; stride_y < stride; stride_y++)
for (int stride_x = 0; stride_x < stride; stride_x++)
for (int w2 = 0, w = 0; w < W; w2 += stride, w++)
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops by up to 45% compared with the line-wise version.
9. Copy data from `__global` to `__local` or `__private` memory if the data is accessed more than once. Access to
`__global` memory is orders of magnitude slower than access to `__local`/`__private` memory due to the statically scheduled pipeline, which
stalls completely on memory access without any prefetch. The same recommendation applies to scalar loads/stores
from/to a `__global` pointer, since work-group copying can be done in a vector fashion.
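A minimal sketch of staging data in `__private` memory (hypothetical kernel; `MAX_W` is an assumed upper bound on the line width):
```cpp
// Each work item copies its line from __global memory once, then the repeated
// accesses of the 3-tap smoothing hit fast __private memory instead of DDR.
#define MAX_W 512  // assumed upper bound on W
__kernel void smooth_line(__global const half* restrict src,
                          __global half* restrict dst,
                          int W)
{
    __private half line[MAX_W];
    int y = get_global_id(0);
    for (int x = 0; x < W; x++)
        line[x] = src[y*W + x];
    for (int x = 1; x < W - 1; x++)
        dst[y*W + x] = (line[x-1] + line[x] + line[x+1]) * (half)(1.0f/3.0f);
}
```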
10. Use a manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting from OpenVINO™ 2020.1, VPU OpenCL features a manual-DMA kernel extension to copy the sub-tensor used by a work group into local memory and perform the compute without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size has the form (width of the input tensor, 1, 1) to define a work group large enough to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
{
float val = (float) src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
dst_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)]
= src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)] * hvariance;
}
}
```
This kernel can be rewritten to introduce the special data-binding `__dma_preload` and `__dma_postwrite` intrinsics. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. You can define either of these functions to copy data between `__global` and `__local` memory. The syntax requires an exact function signature match. The example below illustrates how to prepare your kernel for manual DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the required piece of the src tensor into local_src
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the computed piece of local_dst back into dst
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
// same as the example above
}
```
The GRN kernel operates on channel-major tensors: it computes an average over the full channel range and then normalizes input elements to produce the output.
As part of the manual DMA extension, a group of work-group copy functions is introduced in addition to `async_work_group_copy`, which is also mapped to a DMA call.
Here is the list of supported functions:
```cpp
// 2D sub-tensor copy
event_t WorkGroupDmaCreateStrideTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreateStrideTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
// 3D sub-tensor copy
event_t WorkGroupDmaCreate3DTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreate3DTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
A modified version of the GRN kernel could look as follows:
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
src + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // src
local_src, // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_global_size(0) * sizeof(half), // src stride
get_local_size(0) * sizeof(half), // dst stride
C, // num planes
get_global_size(0) * get_global_size(1) * sizeof(half), // src plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
local_dst, // src
dst + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_local_size(0) * sizeof(half), // src stride
get_global_size(0) * sizeof(half), // dst stride
C, // num planes
get_local_size(0) * get_local_size(1) * sizeof(half), // src plane stride
get_global_size(0) * get_global_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 8
for (int c = 0; c < C; c++)
{
float val = (float) src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 8
for (int c = 0; c < C; c++)
{
dst[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)]
= src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)] * hvariance;
}
}
```
Note the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for this kernel on the enet-curbs setup because it was completely memory bound.
An alternative method to using work-group DMA is the work item copy extension. These functions are executed inside a kernel and require work groups equal to a single work item.
Here is the list of supported work item functions:
```cpp
item_dma_event_t WorkItemDmaCreateTransaction(
const global T *src,
private T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateTransaction(
const private T *src,
global T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
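A call-shape sketch only (hypothetical kernel, assuming a single work item per work group as required above). Event synchronization is deliberately omitted because the corresponding wait primitive is not part of the list above; real code must ensure each transfer completes before the data is used:
```cpp
#define MAX_LINE 512  // assumed upper bound on the line width W
__kernel void scale_line_dma(__global const half* restrict src,
                             __global half* restrict dst,
                             int W,
                             float scale)
{
    __private half line[MAX_LINE];
    int y = get_global_id(0);
    // DDR -> __private
    item_dma_event_t e_in = WorkItemDmaCreateTransaction(src + y*W, line, W * sizeof(half), 0);
    for (int x = 0; x < W; x++)
        line[x] = line[x] * (half)scale;
    // __private -> DDR
    item_dma_event_t e_out = WorkItemDmaCreateTransaction(line, dst + y*W, W * sizeof(half), 0);
}
```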

View File

@@ -1,4 +1,4 @@
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box.
@@ -20,14 +20,10 @@ Follow the steps below to add a custom operation:
5. Override the `visit_attributes` method, which enables serialization and deserialization of operation attributes. An `AttributeVisitor` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware `on_attribute` helper. Helpers are already implemented for standard C++ types like `int64_t`, `float`, `bool`, `vector`, and for existing OpenVINO defined types.
6. Override `evaluate`, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains `evaluate` method you also need to override the `has_evaluate` method, this method allow to get information about availability of `evaluate` method for the operation.
7. Add the `OPENVINO_FRAMEWORK_MAP` macro if you want to map custom operation to framework operation with the same name. It is an optional macro which can be used for one to one mapping. In order to use this macro please include frontend specific headers:
@snippet template_extension/new/identity.hpp op:frontend_include
6. Override `evaluate`, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains `evaluate` method you also need to override the `has_evaluate` method, this method allows to get information about availability of `evaluate` method for the operation.
Based on that, declaration of an operation class can look as follows:
@snippet template_extension/new/identity.hpp op:header
### Operation Constructors
@@ -55,8 +51,9 @@ OpenVINO™ operation contains two constructors:
@snippet template_extension/new/identity.cpp op:visit_attributes
### `evaluate()` and `has_evaluate()`
### evaluate() and has_evaluate()
`ov::Node::evaluate` method enables you to apply constant folding to an operation.
@snippet template_extension/new/identity.cpp op:evaluate

View File

@@ -0,0 +1,105 @@
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from the framework model representation to the OpenVINO representation. Refer to [Introduction to OpenVINO Extension](Intro.md) to understand the entire flow.
This API is applicable for new frontends only, which exist for ONNX and PaddlePaddle. If a different model format is used, follow legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on a minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
## Single Operation Mapping with OpExtension
This section covers the case when a single operation in the framework representation is mapped to a single operation in the OpenVINO representation. This is called *one-to-one mapping*. There is an `OpExtension` class that works well if all of the following conditions are satisfied:
1. Number of inputs to operation in the Framework representation is the same as in the OpenVINO representation.
2. Number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order correspondingly, e.g. input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.
4. The same for outputs.
5. Each attribute in the OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is, so its type should be compatible.
> **NOTE**: The `OpExtension` class is currently available for the ONNX frontend only. The PaddlePaddle frontend has named (not indexed) inputs and outputs for operations, therefore `OpExtension` mapping is not applicable in this case.
The next example maps ONNX operation with type [“Identity”]( https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) to OpenVINO template extension `Identity` class.
@snippet ov_extensions.cpp frontend_extension_Identity_header
@snippet ov_extensions.cpp frontend_extension_Identity
The mapping doesn't involve any attributes, as the operation Identity doesn't have them.
Extension objects, like the just-constructed `extension`, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
@snippet ov_extensions.cpp frontend_extension_read_model
Alternatively, extensions can be built into a separately compiled shared library, which can be used in Model Optimizer or `benchmark_app`. Read how to build and load such a library in the chapter “Create library with extensions” in [Introduction to OpenVINO Extension](Intro.md).
If an operation has multiple inputs and/or outputs, they are mapped in order. The element types of input/output tensors should match the types expected by the surrounding operations. For example, if a custom operation produces the `f32` data type, then the operation that consumes this output should also support `f32`. Otherwise, model conversion fails with an error; no automatic type conversion happens.
### Converting to Standard OpenVINO Operation
The `OpExtension` class can also be used when you need to map to one of the operations from the standard OpenVINO operation set and there is no class like `TemplateExtension::Identity` implemented.
Here is an example for a custom framework operation “MyRelu”. Suppose it is mathematically equivalent to standard `Relu` that exists in OpenVINO operation set, but for some reason has type name “MyRelu”. In this case you can directly say that “MyRelu” -> `Relu` mapping should be used:
@snippet ov_extensions.cpp frontend_extension_MyRelu
In the resulting converted OpenVINO model, the “MyRelu” operation will be replaced by the standard operation `Relu` from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified using just a type string (“Relu”) instead of an `ov::opset8::Relu` class name as a template parameter for `OpExtension`. This method is available for operations from the standard operation set only. For a user's custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as demonstrated with `TemplateExtension::Identity`.
### Attributes Mapping
As described above, `OpExtension` is useful when attributes can be mapped one by one or initialized by a constant. If the set of attributes in framework representation and OpenVINO representation completely match by their names and types, nothing should be specified in OpExtension constructor parameters. The attributes are discovered and mapped automatically based on `visit_attributes` method that should be defined for any OpenVINO operation.
Imagine you have a CustomOperation class implementation that has two attributes named `attr1` and `attr2`:
@snippet ov_extensions.cpp frontend_extension_CustomOperation
And the original model in framework representation also has an operation named “CustomOperation” with the same `attr1` and `attr2` attributes. Then, with the following code:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_as_is
both `attr1` and `attr2` are copied from the framework representation to the OpenVINO representation automatically. If for some reason the attribute names differ but the values can still be copied “as-is”, you can pass an attribute name mapping in the `OpExtension` constructor:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename
Where `fw_attr1` and `fw_attr2` are names for corresponding attributes in framework operation representation.
If copying an attribute is not what you need, `OpExtension` can also set an attribute to a predefined constant value. For the same `CustomOperation`, imagine you want to set `attr2` to the value 5 instead of copying it from `fw_attr2`. To achieve that, do the following:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename_set
In summary, each attribute of the target OpenVINO operation should be initialized in one of three ways:
1. Set automatically by name matching
2. Mapped by attribute name
3. Set to a constant value
This is achieved by specifying maps as arguments for the `OpExtension` constructor.
## Mapping to Multiple Operations with ConversionExtension
Previous sections cover the case when a single operation is mapped to a single operation with optional adjustment in names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case, your framework representation and OpenVINO representation of the operation are under your control, and inputs/outputs/attributes can be aligned to make `OpExtension` usable.
If one-to-one mapping is not possible, consider *decomposition to multiple operations*. It is achieved by using the more verbose and less automated `ConversionExtension` class, which enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO operations constructing a dependency graph of any complexity.
`ConversionExtension` maps a single operation to a function which builds a graph using OpenVINO operation classes. Follow the chapter [Build a Model in OpenVINO Runtime](@ref ov_ug_build_model) to learn how to use OpenVINO operation classes to build a model fragment for replacement.
The next example illustrates using `ConversionExtension` for conversion of “ThresholdedRelu” from ONNX according to the formula: `ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))`.
> **NOTE**: `ThresholdedRelu` is one of the standard ONNX operators which is supported by ONNX frontend natively out-of-the-box. Here we are re-implementing it to illustrate how you can add a similar support for your custom operation instead of `ThresholdedRelu`.
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU_header
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU
To access the original framework operation attribute values and connect to its inputs, the `node` object of type `NodeContext` is used. It has two main methods:
* `NodeContext::get_input` to get input with a given index,
* `NodeContext::get_attribute` to get attribute value with a given name.
The conversion function should return a vector of node outputs that are mapped to corresponding outputs of the original framework operation in the same order.
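For orientation, a hedged C++ sketch of the ThresholdedRelu decomposition described above, assuming the callback form shown here (a `NodeContext` in, an `ov::OutputVector` out) and 2022.1-style header locations; the snippets referenced via `@snippet` remain authoritative:
```cpp
#include <openvino/frontend/extension.hpp>
#include <openvino/opsets/opset8.hpp>
#include <openvino/runtime/core.hpp>

void register_thresholded_relu(ov::Core& core) {
    core.add_extension(ov::frontend::ConversionExtension(
        "ThresholdedRelu",
        [](const ov::frontend::NodeContext& node) {
            auto x = node.get_input(0);
            auto alpha = ov::opset8::Constant::create(
                ov::element::f32, {}, {node.get_attribute<float>("alpha")});
            // Multiply(x, Convert(Greater(x, alpha), type=float))
            auto mask = std::make_shared<ov::opset8::Convert>(
                std::make_shared<ov::opset8::Greater>(x, alpha), ov::element::f32);
            return ov::OutputVector{std::make_shared<ov::opset8::Multiply>(x, mask)};
        }));
}
```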

View File

@@ -12,6 +12,7 @@
@endsphinxdirective
OpenVINO Transformation mechanism allows to develop transformation passes to modify `ov::Model`. You can use this mechanism to apply additional optimizations to the original Model or transform unsupported subgraphs and operations to new operations which are supported by the plugin.
This guide contains all necessary information that you need to start implementing OpenVINO™ transformations.
## Working with Model

View File

@@ -37,8 +37,8 @@ The implementation `CompileNetwork` is fully device-specific.
The function accepts a const shared pointer to `ngraph::Function` object and performs the following steps:
1. Applies ngraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_IE_DG_lpt) guide.
2. Maps the transformed graph to a backend specific graph representation (for example, to MKLDNN graph for Intel CPU).
1. Applies nGraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend specific graph representation (for example, to CPU plugin internal graph representation).
3. Allocates and fills memory for graph weights, backend specific memory handles and so on.
@snippet src/template_executable_network.cpp executable_network:map_graph

View File

@@ -9,11 +9,12 @@
Implement Plugin Functionality <openvino_docs_ie_plugin_dg_plugin>
Implement Executable Network Functionality <openvino_docs_ie_plugin_dg_executable_network>
openvino_docs_ie_plugin_dg_quantized_networks
Implement Synchronous Inference Request <openvino_docs_ie_plugin_dg_infer_request>
Implement Asynchronous Inference Request <openvino_docs_ie_plugin_dg_async_infer_request>
openvino_docs_ie_plugin_dg_plugin_build
openvino_docs_ie_plugin_dg_plugin_testing
openvino_docs_ie_plugin_detailed_guides
openvino_docs_ie_plugin_api_references
@endsphinxdirective
@@ -55,11 +56,11 @@ Detailed guides
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake\*
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_IE_DG_lpt) guide
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
API References
-----------------------
* [Inference Engine Plugin API](groupie_dev_api.html)
* [Inference Engine Transformation API](groupie_transformation_api.html)
* [Inference Engine Plugin API](@ref ie_dev_api)
* [Inference Engine Transformation API](@ref ie_transformation_api)

View File

@@ -2,7 +2,7 @@
Inference Engine Plugin usually represents a wrapper around a backend. Backends can be:
- OpenCL-like backend (e.g. clDNN library) for GPU devices.
- MKLDNN backend for Intel CPU devices.
- oneDNN backend for Intel CPU devices.
- NVIDIA cuDNN for NVIDIA GPUs.
The responsibility of Inference Engine Plugin:

View File

@@ -9,7 +9,7 @@ For more details about low-precision model representation please refer to this [
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations.
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.

View File

@@ -0,0 +1,18 @@
# Advanced Topics {#openvino_docs_ie_plugin_detailed_guides}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_ie_plugin_dg_quantized_networks
openvino_docs_OV_UG_lpt
@endsphinxdirective
The guides below provide extra information about specific OpenVINO features that you need to understand during OpenVINO plugin development:
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide

View File

@@ -0,0 +1,17 @@
# Plugin API Reference {#openvino_docs_ie_plugin_api_references}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
../groupie_dev_api
../groupie_transformation_api
@endsphinxdirective
The guides below provide extra API references needed for OpenVINO plugin development:
* [OpenVINO Plugin API](@ref ie_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)

View File

@@ -5,74 +5,74 @@
<tab type="usergroup" url="index.html" title="Developer Guide for Inference Engine Plugin Library">
<tab type="user" url="@ref plugin" visibile="yes" title="Implement Plugin Functionality"/>
<tab type="user" url="@ref executable_network" visibile="yes" title="Implement Executable Network Functionality">
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_IE_DG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_IE_DG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_IE_DG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_IE_DG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_IE_DG_lpt_QuantizationAlignment"/>
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_OV_UG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_OV_UG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_OV_UG_lpt_IntervalsAlignment"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_OV_UG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_OV_UG_lpt_QuantizationAlignment"/>
<tab type="user" title="QuantizationGranularity" url="@ref openvino_docs_OV_UG_lpt_QuantizationGranularity"/>
</tab>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_IE_DG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization"/>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_OV_UG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization"/>
</tab>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_IE_DG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_IE_DG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_IE_DG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_IE_DG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_IE_DG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved"/>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_OV_UG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_OV_UG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_OV_UG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_OV_UG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_OV_UG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved"/>
</tab>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_IE_DG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_IE_DG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_IE_DG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_IE_DG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_IE_DG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_IE_DG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_IE_DG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_IE_DG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_IE_DG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_IE_DG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_IE_DG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_IE_DG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_IE_DG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation"/>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_OV_UG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_OV_UG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_OV_UG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_OV_UG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_OV_UG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_OV_UG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_OV_UG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_OV_UG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_OV_UG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_OV_UG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_OV_UG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_OV_UG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_OV_UG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation"/>
</tab>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_IE_DG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation"/>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_OV_UG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation"/>
</tab>
</tab>
</tab>

View File

@@ -1,17 +0,0 @@
# Plugin Transformation Pipeline {#openvino_docs_IE_DG_plugin_transformation_pipeline}
@sphinxdirective
.. toctree::
:maxdepth: 1
:caption: Executable Network
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
@endsphinxdirective
Typical plugin transformation pipeline includes steps:
1. Common transformations
2. [Low precision transformations](@ref openvino_docs_IE_DG_lpt)
3. Plugin specific transformations

View File

@@ -1,4 +1,4 @@
# AvgPoolPrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved}
# AvgPoolPrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
ngraph::AvgPoolPrecisionPreservedAttribute class represents the `AvgPoolPrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# IntervalsAlignment attribute {#openvino_docs_IE_DG_lpt_IntervalsAlignment}
# IntervalsAlignment attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
ngraph::IntervalsAlignmentAttribute class represents the `IntervalsAlignment` attribute.

View File

@@ -1,11 +0,0 @@
# PerTensorQuantization attribute {#openvino_docs_IE_DG_lpt_PerTensorQuantization}
ngraph::PerTensorQuantizationAttribute class represents the `PerTensorQuantization` attribute.
The attribute defines if the operation input port requires per-tensor quantization.
| Property name | Values |
|---------------|----------------------------------------------|
| Required | Yes |
| Defined | Operation, input ports |
| Properties | |

View File

@@ -1,4 +1,4 @@
# PrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_PrecisionPreserved}
# PrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
ngraph::PrecisionPreservedAttribute class represents the `PrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# Precisions attribute {#openvino_docs_IE_DG_lpt_Precisions}
# Precisions attribute {#openvino_docs_OV_UG_lpt_Precisions}
ngraph::PrecisionsAttribute class represents the `Precisions` attribute.

View File

@@ -1,4 +1,4 @@
# QuantizationAlignment attribute {#openvino_docs_IE_DG_lpt_QuantizationAlignment}
# QuantizationAlignment attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
ngraph::QuantizationAlignmentAttribute class represents the `QuantizationAlignment` attribute.

View File

@@ -0,0 +1,11 @@
# QuantizationGranularity attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
ngraph::QuantizationAttribute class represents the `QuantizationGranularity` attribute.
The attribute defines quantization granularity of operation inputs.
| Property name | Values |
|---------------|----------------------------------------------|
| Required | No |
| Defined | Input ports |
| Properties | Quantization granularity |

View File

@@ -1,4 +1,4 @@
# OpenVINO™ Low Precision Transformations {#openvino_docs_IE_DG_lpt}
# OpenVINO™ Low Precision Transformations {#openvino_docs_OV_UG_lpt}
@sphinxdirective
@@ -7,13 +7,13 @@
:caption: Low Precision Transformations
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
Low Precision Transformations <openvino_docs_OV_UG_lpt>
Attributes <openvino_docs_IE_DG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_IE_DG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_IE_DG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_IE_DG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_IE_DG_lpt_step4_cleanup>
Attributes <openvino_docs_OV_UG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_OV_UG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_OV_UG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_OV_UG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>
@endsphinxdirective
@@ -72,11 +72,7 @@ For example, if you would like to infer a model with `Convolution` operation in
> There are several supported quantization approaches on activations and on weights. All supported approaches are described in [Quantization approaches](#quantization-approaches) section below. In demonstrated model [FakeQuantize operation quantization](#fakequantize-operation) approach is used.
### Low precision tools
There are two tools to quantize a model:
1. [Post-Training Optimization Toolkit](@ref pot_docs_LowPrecisionOptimizationGuide) (POT)
2. [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf) (NNCF)
Additionally, low precision transformations can handle ONNX quantized models.
For more details on how to get a quantized model, refer to [Model Optimization](@ref openvino_docs_model_optimization_guide) document.
## Quantization approaches
LPT transformations support two quantization approaches:
@@ -115,63 +111,63 @@ Inside each step LPT transformations handle input model operation by operation,
As result, usually all operations are inferred by plugin in low precision. If plugin doesn't support an operation inference in low precision, then corresponding LPT transformation can be disabled, and input tensor precisions for the operation will not be changed. In this case the operation is inferred in the original precision.
Low precision transformations pipeline includes four steps:
* [Step #1: Prerequisites](@ref openvino_docs_IE_DG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup)
* [Step #1: Prerequisites](@ref openvino_docs_OV_UG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup)
### Step 1. Prerequisites
This step fuses and propagates some operations in the model to prepare for the next step. It is required for OpenVINO plugins. Transformations:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)
The model on this step is changed. There are more details in developer guide [Prerequisites transformations](@ref openvino_docs_IE_DG_lpt_step1_prerequisites).
The model on this step is changed. There are more details in developer guide [Prerequisites transformations](@ref openvino_docs_OV_UG_lpt_step1_prerequisites).
### Step 2. Markup
This step creates runtime attributes for operations. These attributes will be used in next step. Transformations:
* [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
* [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
The model on this step is changed: only new attributes are added to some operations. There are more details in developer guide [Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup).
The model on this step is changed: only new attributes are added to some operations. There are more details in developer guide [Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup).
### Step 3. Main transformations, FakeQuantize decomposition and dequantization operations handling
This step has the most transformations. These transformations can be separated in two groups: decomposition transformation and dequantization operations handling. There are more details in developer guide [Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main). Transformations:
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
This step has the most transformations. These transformations can be separated in two groups: decomposition transformation and dequantization operations handling. There are more details in developer guide [Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main). Transformations:
* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
#### Decomposition transformations
Decomposition transformations decompose the `FakeQuantize` operation into two parts: a quantize operation (`FakeQuantize` with a low precision output) and dequantization operations (the opposite: a low precision input and the original precision output). For the dequantization part, LPT uses three operations: `Convert`, `Subtract`, and `Multiply`. The element-wise `Subtract` and `Multiply` operations take constants on their second inputs. If the dequantization operations are not handled by the end of the LPT pipeline, they are fused back into the `FakeQuantize` operation.
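To make the arithmetic concrete, here is a minimal numeric sketch of the decomposition in plain NumPy (not the LPT API); the `FakeQuantize` interval, the u8 `levels` value, and the sample tensor are illustrative assumptions:
```python
import numpy as np

# Illustrative FakeQuantize parameters (assumed values, not taken from a real model).
output_low, output_high = 0.0, 2.55
levels = 256  # u8 quantization

x = np.array([0.0, 0.5, 1.27, 2.55], dtype=np.float32)

# Quantize part: FakeQuantize with a low precision (u8) output.
scale = (output_high - output_low) / (levels - 1)
zero_point = -output_low / scale
q = np.clip(np.round(x / scale + zero_point), 0, levels - 1).astype(np.uint8)

# Dequantization part: Convert -> Subtract -> Multiply,
# where Subtract and Multiply take constants on their second inputs.
dequantized = (q.astype(np.float32) - zero_point) * scale

print(q)            # low precision values: [  0  50 127 255]
print(dequantized)  # approximately equal to x
```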
@@ -197,14 +193,14 @@ Original `Convolution` operation in FP32 with dequantization operations before:
### Step 4: Cleanup of the result model
Cleanup transformations are the final stage of the LPT pipeline. In this step, LPT transformations clean up the resulting model so that no unhandled dequantization operations remain: dequantization operations are fused into other model operations where possible (at least the `Convert` operations are fused if nothing else can be); a sketch of the fused interval arithmetic follows the links below. Transformations:
* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)
For more details, see the developer guide [Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup).
For more details, see the developer guide [Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup).
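As an illustration of what `FuseSubtractToFakeQuantizeTransformation` and `FuseMultiplyToFakeQuantizeTransformation` achieve, the following plain-Python sketch (not the actual transformation code) folds hypothetical scalar dequantization constants into the `FakeQuantize` output interval:
```python
# Hypothetical FakeQuantize output interval and scalar dequantization constants.
output_low, output_high = 0.0, 255.0   # FakeQuantize output interval
subtract_const = 128.0                 # constant of the dequantization Subtract
multiply_const = 0.01                  # constant of the dequantization Multiply

# FuseSubtractToFakeQuantizeTransformation: shift the output interval
# so the standalone Subtract operation is no longer needed.
output_low, output_high = output_low - subtract_const, output_high - subtract_const

# FuseMultiplyToFakeQuantizeTransformation: scale the output interval
# so the standalone Multiply operation is no longer needed.
output_low, output_high = output_low * multiply_const, output_high * multiply_const

print(output_low, output_high)  # -1.28 1.27: the dequantization is folded into FakeQuantize
```
After fusion, the `FakeQuantize` output interval already expresses the dequantized values, so no standalone `Subtract` or `Multiply` remains in the model.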
`FakeQuantize` operation with unhandled dequantization operations:
![TODO: FakeQuantize operation with dequantization operations before LPT](quantization/img/fq.transformed.png)
@@ -236,11 +232,11 @@ This step is optional. It modifies the nGraph function to a device-specific oper
Let's explore the quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model. Use the [Model Downloader](@ref omz_tools_downloader) tool to download the `fp16` model from the [OpenVINO™ Toolkit - Open Model Zoo repository](https://github.com/openvinotoolkit/open_model_zoo):
```sh
./downloader.py --name resnet-50-tf --precisions FP16-INT8
omz_downloader --name resnet-50-tf --precisions FP16-INT8
```
After that, quantize the model with the [Model Quantizer](@ref omz_tools_downloader) tool.
```sh
./quantizer.py --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
```
### Inference
@@ -259,7 +255,7 @@ Result model depends on different factors:
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |
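The counters can also be read programmatically. The sketch below is a hedged example that assumes the `openvino.runtime` Python API with `get_profiling_info()`; the model path, config key, and input shape are placeholders, not values from this document:
```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("public/resnet-50-tf/FP16-INT8/resnet-50-tf.xml")  # placeholder path
compiled = core.compile_model(model, "CPU", {"PERF_COUNT": "YES"})         # enable performance counters (assumed config key)

request = compiled.create_infer_request()
request.infer({0: np.zeros((1, 3, 224, 224), dtype=np.float32)})           # dummy input; shape/layout is an assumption

for info in request.get_profiling_info():
    # The precision a layer actually ran in is encoded as the execType suffix, e.g. ..._I8 or ..._FP32.
    print(info.node_name, info.node_type, info.exec_type, info.real_time)
```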

View File

@@ -1,4 +1,4 @@
# Attributes {#openvino_docs_IE_DG_lpt_attributes}
# Attributes {#openvino_docs_OV_UG_lpt_attributes}
@sphinxdirective
@@ -7,30 +7,31 @@
:caption: Attributes
:hidden:
AvgPoolPrecisionPreserved <openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_IE_DG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_IE_DG_lpt_PerTensorQuantization>
PrecisionPreserved <openvino_docs_IE_DG_lpt_PrecisionPreserved>
Precisions <openvino_docs_IE_DG_lpt_Precisions>
QuantizationAlignment <openvino_docs_IE_DG_lpt_QuantizationAlignment>
AvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>
PrecisionPreserved <openvino_docs_OV_UG_lpt_PrecisionPreserved>
Precisions <openvino_docs_OV_UG_lpt_Precisions>
QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>
QuantizationGranularity <openvino_docs_OV_UG_lpt_QuantizationGranularity>
@endsphinxdirective
## Introduction
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_IE_DG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_IE_DG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_IE_DG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_IE_DG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|--------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_OV_UG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_OV_UG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_OV_UG_lpt_QuantizationAlignment) | Quantization granularity | Yes | Yes |
| [QuantizationGranularity](@ref openvino_docs_OV_UG_lpt_QuantizationGranularity) | Quantization granularity | Yes | No |
> The `Target` attribute group defines how an attribute is used during model transformation for the best performance (a toy sketch of a shared, mutable attribute value follows this note):
> - `Precision` - the attribute defines the most optimal output port precision.
> - `Quantization interval` - the attribute defines quantization interval.
> - `Quantization alignment` - the attribute defines quantization alignment: per-channel or per-tensor quantization.
> - `Quantization alignment` - the attribute defines quantization granularity in runtime: per-channel or per-tensor quantization.
> - `Quantization granularity` - the attribute is set by the plugin to define quantization granularity: per-channel or per-tensor quantization.
>
> The `Required` attribute group defines whether using the attribute is required to get an optimal model during transformation:
> - `Yes` - the attribute is used by all OpenVINO plugins for low-precision optimization.
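The `Mutable` column matters because a single attribute instance can be shared between several ports and operations. The toy Python sketch below (not the actual LPT C++ attribute classes) only illustrates the idea of a shared, mutable value; class and field names are assumptions:
```python
from dataclasses import dataclass

@dataclass
class SharedValue:
    # One mutable flag that several attribute instances can point to.
    value: bool = True

@dataclass
class PrecisionPreservedAttribute:
    shared: SharedValue

# Two ports of a precision-preserved chain reference the same SharedValue.
shared = SharedValue(value=True)
port_a = PrecisionPreservedAttribute(shared)
port_b = PrecisionPreservedAttribute(shared)

# A later markup transformation decides the precision is not preserved here;
# a single update is visible through every port that shares the value.
port_a.shared.value = False
print(port_b.shared.value)  # False
```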

View File

@@ -1,6 +1,6 @@
# Step 1. Prerequisites Transformations {#openvino_docs_IE_DG_lpt_step1_prerequisites}
# Step 1. Prerequisites Transformations {#openvino_docs_OV_UG_lpt_step1_prerequisites}
Prerequisites transformations are optional. They prepare a model before running other low precision transformations and do not operate with dequantization operations or update precisions. Prerequisites transformations include:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)

View File

@@ -1,14 +1,14 @@
# Step 2. Markup Transformations {#openvino_docs_IE_DG_lpt_step2_markup}
# Step 2. Markup Transformations {#openvino_docs_OV_UG_lpt_step2_markup}
This step defines the optimal `FakeQuantize` decomposition precisions for the best inference performance by marking operations up with runtime attribute instances. Attributes are created for operations and for their input and output ports. Markup transformations do not change the operation output port precisions. The model markup logic is decomposed into the following common markup transformations, whose order is important (a toy propagation sketch follows the list below):
1. [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
1. [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
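The toy sketch below (plain Python, not the LPT implementation) gives a rough idea of what the propagation stage does: a precision restriction from a consumer is carried backwards through precision-preserved operations to the `FakeQuantize` that will be decomposed; node names and precision sets are illustrative assumptions:
```python
# Toy graph: FakeQuantize -> MaxPool (precision preserved) -> Convolution.
graph = [
    {"name": "FakeQuantize", "precision_preserved": False, "precisions": None},
    {"name": "MaxPool",      "precision_preserved": True,  "precisions": None},
    {"name": "Convolution",  "precision_preserved": False, "precisions": {"u8", "i8"}},
]

# Walk from the consumer that restricts precisions (Convolution) back towards
# the FakeQuantize, carrying the restriction through precision-preserved operations.
restriction = None
for node in reversed(graph):
    if node["precisions"] is not None:
        restriction = node["precisions"]                 # the consumer defines the restriction
    elif node["precision_preserved"] and restriction is not None:
        node["precisions"] = restriction                 # propagate through MaxPool
    elif restriction is not None:
        node["precisions"] = restriction                 # FakeQuantize gets its decomposition precisions
        restriction = None

for node in graph:
    print(node["name"], node["precisions"])
```
In the real pipeline, this is roughly the role of `MarkupPrecisions` and `PropagatePrecisions`; the alignment transformations then refine quantization intervals and parameters.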
The table of transformations and used attributes:
@@ -25,11 +25,11 @@ The table of transformations and used attributes:
> **Note:** the same type of attribute instances can be created in different transformations. This approach follows from the single-responsibility principle applied to transformations. For example, `Precision` attribute instances are created in both the `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but for different reasons.
Common markup transformations can be decomposed into simpler utility markup transformations. The order of markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_IE_DG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_IE_DG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved)
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_OV_UG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved)
Let's explore all transformations and their relations in detail, using the same model:

View File

@@ -1,36 +1,36 @@
# Step 3. Main Transformations {#openvino_docs_IE_DG_lpt_step3_main}
# Step 3. Main Transformations {#openvino_docs_OV_UG_lpt_step3_main}
Main transformations make up the majority of low precision transformations and operate with dequantization operations. Main transformations include:
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
Let's explore some main transformations on the example model. Original model:

View File

@@ -1,8 +1,8 @@
# Step 4. Cleanup Transformations {#openvino_docs_IE_DG_lpt_step4_cleanup}
# Step 4. Cleanup Transformations {#openvino_docs_OV_UG_lpt_step4_cleanup}
* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)

View File

@@ -1,3 +1,3 @@
# ConvertSubtractConstant transformation {#openvino_docs_IE_DG_lpt_ConvertSubtractConstant}
# ConvertSubtractConstant transformation {#openvino_docs_OV_UG_lpt_ConvertSubtractConstant}
ngraph::pass::low_precision::ConvertSubtractConstant class represents the `ConvertSubtractConstant` transformation.

View File

@@ -1,4 +1,4 @@
# LinOpSequenceFusion transformation {#openvino_docs_IE_DG_lpt_LinOpSequenceFusion}
# LinOpSequenceFusion transformation {#openvino_docs_OV_UG_lpt_LinOpSequenceFusion}
ngraph::pass::LinOpSequenceFusion class represents the `LinOpSequenceFusion` transformation.

View File

@@ -1,3 +1,3 @@
# PullReshapeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization}
# PullReshapeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization}
ngraph::pass::low_precision::PullReshapeThroughDequantization class represents the `PullReshapeThroughDequantization` transformation.

View File

@@ -1,3 +1,3 @@
# PullTransposeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization}
# PullTransposeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization}
ngraph::pass::low_precision::PullTransposeThroughDequantization class represents the `PullTransposeThroughDequantization` transformation.

View File

@@ -1,3 +1,3 @@
# AlignQuantizationIntervals transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationIntervals}
# AlignQuantizationIntervals transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationIntervals}
ngraph::pass::low_precision::AlignQuantizationIntervals class represents the `AlignQuantizationIntervals` transformation.

View File

@@ -1,3 +1,3 @@
# AlignQuantizationParameters transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationParameters}
# AlignQuantizationParameters transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationParameters}
ngraph::pass::low_precision::AlignQuantizationParameters class represents the `AlignQuantizationParameters` transformation.

View File

@@ -1,3 +1,3 @@
# CreateAttribute transformation {#openvino_docs_IE_DG_lpt_CreateAttribute}
# CreateAttribute transformation {#openvino_docs_OV_UG_lpt_CreateAttribute}
ngraph::pass::low_precision::CreateAttribute class represents the `CreateAttribute` transformation.

View File

@@ -1,3 +1,3 @@
# CreatePrecisionsDependentAttribute transformation {#openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute}
# CreatePrecisionsDependentAttribute transformation {#openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute}
ngraph::pass::low_precision::CreatePrecisionsDependentAttribute class represents the `CreatePrecisionsDependentAttribute` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved}
# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved}
ngraph::pass::low_precision::MarkupAvgPoolPrecisionPreserved class represents the `MarkupAvgPoolPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupCanBeQuantized transformation {#openvino_docs_IE_DG_lpt_MarkupCanBeQuantized}
# MarkupCanBeQuantized transformation {#openvino_docs_OV_UG_lpt_MarkupCanBeQuantized}
ngraph::pass::low_precision::MarkupCanBeQuantized class represents the `MarkupCanBeQuantized` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupPerTensorQuantization transformation {#openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization}
# MarkupPerTensorQuantization transformation {#openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization}
ngraph::pass::low_precision::MarkupPerTensorQuantization class represents the `MarkupPerTensorQuantization` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupPrecisions transformation {#openvino_docs_IE_DG_lpt_MarkupPrecisions}
# MarkupPrecisions transformation {#openvino_docs_OV_UG_lpt_MarkupPrecisions}
ngraph::pass::low_precision::MarkupPrecisions class represents the `MarkupPrecisions` transformation.

View File

@@ -1,3 +1,3 @@
# PropagatePrecisions transformation {#openvino_docs_IE_DG_lpt_PropagatePrecisions}
# PropagatePrecisions transformation {#openvino_docs_OV_UG_lpt_PropagatePrecisions}
ngraph::pass::low_precision::PropagatePrecisions class represents the `PropagatePrecisions` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateSharedValue transformation {#openvino_docs_IE_DG_lpt_PropagateSharedValue}
# PropagateSharedValue transformation {#openvino_docs_OV_UG_lpt_PropagateSharedValue}
ngraph::pass::low_precision::PropagateSharedValue class represents the `PropagateSharedValue` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateThroughPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved}
# PropagateThroughPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved}
ngraph::pass::low_precision::PropagateThroughPrecisionPreserved class represents the `PropagateThroughPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateToInput transformation {#openvino_docs_IE_DG_lpt_PropagateToInput}
# PropagateToInput transformation {#openvino_docs_OV_UG_lpt_PropagateToInput}
ngraph::pass::low_precision::PropagateToInput class represents the `PropagateToInput` transformation.

View File

@@ -1,3 +1,3 @@
# UpdateSharedPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved}
# UpdateSharedPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved}
ngraph::pass::low_precision::UpdateSharedPrecisionPreserved class represents the `UpdateSharedPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# ClampTransformation transformation {#openvino_docs_IE_DG_lpt_ClampTransformation}
# ClampTransformation transformation {#openvino_docs_OV_UG_lpt_ClampTransformation}
ngraph::pass::low_precision::ClampTransformation class represents the `Clamp` operation transformation.

View File

@@ -1,3 +1,3 @@
# PReluTransformation transformation {#openvino_docs_IE_DG_lpt_PReluTransformation}
# PReluTransformation transformation {#openvino_docs_OV_UG_lpt_PReluTransformation}
ngraph::pass::low_precision::PReluTransformation class represents the `PRelu` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReluTransformation transformation {#openvino_docs_IE_DG_lpt_ReluTransformation}
# ReluTransformation transformation {#openvino_docs_OV_UG_lpt_ReluTransformation}
ngraph::pass::low_precision::ReluTransformation class represents the `Relu` operation transformation.

View File

@@ -1,4 +1,4 @@
# AddTransformation transformation {#openvino_docs_IE_DG_lpt_AddTransformation}
# AddTransformation transformation {#openvino_docs_OV_UG_lpt_AddTransformation}
ngraph::pass::low_precision::AddTransformation class represents the `Add` operation transformation.

View File

@@ -1,3 +1,3 @@
# MultiplyTransformation transformation {#openvino_docs_IE_DG_lpt_MultiplyTransformation}
# MultiplyTransformation transformation {#openvino_docs_OV_UG_lpt_MultiplyTransformation}
ngraph::pass::low_precision::MultiplyTransformation class represents the `Multiply` operation transformation.

View File

@@ -1,3 +1,3 @@
# SubtractTransformation transformation {#openvino_docs_IE_DG_lpt_SubtractTransformation}
# SubtractTransformation transformation {#openvino_docs_OV_UG_lpt_SubtractTransformation}
ngraph::pass::low_precision::SubtractTransformation class represents the `Subtract` operation transformation.

View File

@@ -1,4 +1,4 @@
# ConvolutionTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionTransformation}
# ConvolutionTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionTransformation}
ngraph::pass::low_precision::ConvolutionTransformation class represents the `Convolution` operation transformation.

View File

@@ -1,3 +1,3 @@
# ConvolutionBackpropDataTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation}
# ConvolutionBackpropDataTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation}
ngraph::pass::low_precision::ConvolutionBackpropDataTransformation class represents the `ConvolutionBackpropData` operation transformation.

Some files were not shown because too many files have changed in this diff.