Compare commits


1329 Commits

Author SHA1 Message Date
Karol Blaszczak
98a33ba770 [DOCS] 23.0 selector tool remove (#21315) 2023-11-27 15:17:44 +01:00
Sebastian Golebiewski
d0647322de Direct Github link to a specific notebook (#20358) 2023-10-10 15:45:18 +02:00
Alina Kladieva
9f2f5fc59f Bump product version to 2023.0.3 (#20147) 2023-09-29 12:43:45 +02:00
Yuan Hu
f80f99f5c5 Revert [Core] fix Memory Leak caused by create/inference request con… (#20050)
* Revert "[Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)"

This reverts commit b0394cc3e4.

* Install local wheel packages instead of PYPI ones (#19031)

* Try to use --no-index when installing python packages

* Apply suggestions from code review

* Update .ci/azure/linux.yml

* Try to use conan.lock file (#19709)

* Fixed NCC style check (#20121)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
2023-09-28 19:39:23 +02:00
Sebastian Golebiewski
5127753183 Fix Issue 20097 - providing an easy-to-read CMake command (#20130)
Porting: https://github.com/openvinotoolkit/openvino/pull/20126
2023-09-28 18:15:41 +04:00
bstankix
f4a3f0a223 [DOCS] Bugfix coveo sa-search url (#20056) 2023-09-26 15:15:29 +02:00
Ilya Lavrenov
319711fca5 Fixed issue 19784 (#19788)
* Fixed issue 19784

* Update .ci/azure/linux_debian.yml

replaced 'focal' with 'ubuntu20'
2023-09-13 14:59:55 +04:00
bstankix
8c31771e6c [DOCS] Port coveo search engine (#19751) 2023-09-11 12:40:01 +00:00
Maciej Smyk
29d01a5cbf img-fix (#19701) 2023-09-08 15:46:56 +02:00
Maciej Smyk
002537729b fix (#19697) 2023-09-08 13:33:44 +02:00
Ilya Lavrenov
088fe50dd9 Update versions in apt, yum installation docs (#19688) 2023-09-08 11:06:15 +02:00
Maciej Smyk
be021afc0b Update supported_model_formats.md (#19643) 2023-09-07 11:02:43 +02:00
Maciej Smyk
74ee34e925 Extend sphinx_sitemap to add custom metadata (#19641) 2023-09-07 09:17:07 +02:00
Maciej Smyk
c516a7279e update (#19621) 2023-09-07 08:38:50 +02:00
Maciej Smyk
8252d74662 [DOCS] contributing guidelines (#19623) 2023-09-06 16:18:46 +02:00
Alexander Suvorov
a3e746401e [DOCS] Update Selector Tool for 2023.0.2 2023-09-05 21:26:49 +02:00
Maciej Smyk
68db3844d3 [DOCS] Fixing Optimize Preprocessing in notebooks 120 and 230 for 23.0 2023-09-05 18:11:21 +02:00
Maciej Smyk
4eb71aa5e0 [DOCS] Link fix for 23.0 (#19592)
* 2023.0 link fix

* Update README.md
2023-09-05 10:06:13 +02:00
Karol Blaszczak
256a6d2572 [DOCS] 23.0.2 adjustment (#19604) 2023-09-05 09:47:01 +02:00
Ilya Lavrenov
6fbcb94e20 Fixed build with static protobuf for brew publishing (#19590) 2023-09-04 19:35:31 +04:00
Przemyslaw Wysocki
2ac63aea24 Backport Robust detection of Cython version (#19537) (#19547)
* Robust detection of Cython version (#19537)

* Aligned protobuf version in conanfile.txt with onnx recipe (#19525)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-04 17:03:14 +04:00
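The "Robust detection of Cython version" backport above addresses a classic pitfall: comparing version strings as text instead of as numbers. A minimal Python sketch of the pitfall and the fix (the `parse_version` helper is illustrative, not OpenVINO's actual code):

```python
import re

def parse_version(version: str) -> tuple:
    # Compare numeric components as integers, not as text.
    return tuple(int(part) for part in re.findall(r"\d+", version))

# Lexicographic string comparison orders multi-digit components wrongly:
assert "0.10.2" < "0.9.1"          # True as strings, backwards numerically
assert parse_version("0.10.2") > parse_version("0.9.1")
assert parse_version("3.0.0") > parse_version("0.29.36")
```

Real build tooling would typically use `packaging.version.Version` for this; the tuple comparison above only shows why the naive string check had to be replaced.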
Sebastian Golebiewski
e8d44d4502 Adding Quantizing with Accuracy Control using NNCF notebook (#19588) 2023-09-04 14:56:47 +02:00
Maciej Smyk
816c2a24de [DOCS] Fix for Install from Docker Image for 23.0 (#19581)
* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-09-04 11:16:22 +02:00
Maciej Smyk
177aa10040 [DOCS] Torch.compile() documentation for 23.0 (#19542)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-04 08:38:52 +02:00
Sebastian Golebiewski
68dfd60057 add-253 (#19502) 2023-08-30 13:46:40 +02:00
Sebastian Golebiewski
fd519c711a improve-snippets (#19498)
Porting: https://github.com/openvinotoolkit/openvino/pull/19479
2023-08-30 13:11:03 +02:00
Przemyslaw Wysocki
3d17c656d1 Comment cmake check (#19491) 2023-08-30 12:56:18 +02:00
Maciej Smyk
baab44c4f4 [DOCS] Docker Guide Update for 23.0 (#19448)
* docker-update

* id fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-08-29 08:45:20 +02:00
Sebastian Golebiewski
d972830c9a update-notebooks (#19454)
Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
2023-08-28 15:03:53 +02:00
Karol Blaszczak
f322762818 [DOCS] speech sample deprecation port 23.0 2023-08-25 12:41:30 +02:00
Karol Blaszczak
c2da07a8e7 [DOCS] adjustment to supported devices port 23.0 2023-08-25 12:10:44 +02:00
Sebastian Golebiewski
c08f68f1e5 [DOCS] Updating MO documentation for 23.0 (#19373)
* restructure-mo-docs

* apply-commits-18214

Applying commits from:

https://github.com/openvinotoolkit/openvino/pull/18214

* update

* Apply suggestions from code review

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Apply suggestions from code review

* Update model_introduction.md

* Update docs/resources/tensorflow_frontend.md

* Create MO_Python_API.md

* Update Deep_Learning_Model_Optimizer_DevGuide.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
2023-08-23 19:24:15 +02:00
Sebastian Golebiewski
433d2f2750 CVS-113150 (#19370)
Porting:

https://github.com/openvinotoolkit/openvino/pull/18495
2023-08-23 18:28:04 +02:00
Sebastian Golebiewski
6b3863632d update-notebooks (#19339) 2023-08-22 15:37:51 +02:00
Sebastian Golebiewski
5509db87af link-to-frontend (#19333) 2023-08-22 12:54:27 +02:00
Artyom Anokhov
e662b1a330 Bump OV version to 2023.0.2 (#19329) 2023-08-22 11:02:36 +02:00
Sebastian Golebiewski
0aa5a8f704 port-19307 (#19310)
Porting: https://github.com/openvinotoolkit/openvino/pull/19307
Updating tutorials: adding table of contents and new notebooks.
2023-08-21 16:47:28 +02:00
Marcin Kusmierski
54f6f11186 [GNA] Switch GNA library to version 03.05.00.2116 (#19296)
Co-authored-by: Szymon Irzabek <szymon.jakub.irzabek@intel.com>
2023-08-21 14:50:15 +02:00
hyunback kim
ea482d8391 [GPU] Do not select onednn format for asymmetric weight (#19140) (#19265) 2023-08-21 11:30:33 +04:00
Karol Blaszczak
a93f320a48 Update prerelease_information.md (#19283) 2023-08-18 20:00:49 +02:00
Karol Blaszczak
26e9c69440 [DOCS] pre-releasenotes 23.1 Aug (#19271) 2023-08-18 17:43:11 +02:00
Marcin Kusmierski
4727efdb3c [GNA] Fix memory leak in GNA plugin. (#19257)
* Disabled transformation introducing memory leak.
2023-08-18 13:39:22 +02:00
Sergey Shlyapnikov
b7415f5c3b [GPU] Prevent Conv's input data type changing at reorder_inputs pass (#19042) (#19245) 2023-08-17 17:57:14 +04:00
Sebastian Golebiewski
0262662050 add-slash (#19243) 2023-08-17 11:23:37 +04:00
Maciej Smyk
576b99fee9 [DOCS] Removal of redundant files for 23.0 2023-08-16 13:22:43 +02:00
bstankix
4e790d7b46 [DOCS] Fix parameter name in design-tabs (#19212) 2023-08-16 07:40:39 +00:00
Yuan Hu
b0394cc3e4 [Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)
Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>
2023-08-15 10:55:36 +04:00
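The memory-leak commit above (later reverted and re-applied higher in this list) concerns creating inference requests from short-lived threads. A hypothetical model of that leak pattern, with no OpenVINO dependency: a runtime that keys state by thread id grows without bound unless entries for finished threads are purged. (`FakeRuntime` is purely illustrative; it is not the OpenVINO implementation.)

```python
import threading

class FakeRuntime:
    def __init__(self):
        self.per_thread_state = {}

    def create_request(self):
        # State keyed by thread id; leaks if the thread exits and the
        # entry is never removed.
        self.per_thread_state[threading.get_ident()] = object()

    def purge_finished_threads(self):
        alive = {t.ident for t in threading.enumerate()}
        self.per_thread_state = {
            tid: s for tid, s in self.per_thread_state.items() if tid in alive
        }

runtime = FakeRuntime()
for _ in range(10):
    worker = threading.Thread(target=runtime.create_request)
    worker.start()
    worker.join()

# Without cleanup the map can hold an entry per dead worker thread;
# purging against the set of live threads empties it.
runtime.purge_finished_threads()
assert len(runtime.per_thread_state) == 0
```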
bstankix
18cb7c94c1 [DOCS] Add state retention to design-tabs (#19180) 2023-08-14 13:46:50 +00:00
Wanglei Shen
064364eb5e Support Win7 in cpu information parser (#19110) 2023-08-11 09:54:51 +04:00
Shuangji Yang
5ded6fb699 fix bug on conversion of gather to squeeze (#19094) 2023-08-10 22:25:59 +04:00
Maciej Smyk
eabf199c3a Adds Python wheel requirements info to docs (#19125) 2023-08-10 21:50:10 +04:00
Sebastian Golebiewski
0e0d166746 add-numpy (#19128) 2023-08-10 21:39:44 +04:00
Stefania Hergane
a6351294e7 [EISW-89820] [releases/2023/0] Rename VPUX to NPU (#19002)
* Change `VPUX` occurrences to `NPU`

* Change library for `NPU` device in `api_conformance_helpers.hpp`

* Rename `MYRIAD plugin`

* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU

* Rename DEVICE_KEEMBAY to DEVICE_NPU

* Rename VPUX_DEVICE_NAME to NPU_DEVICE_NAME

* Rename vpu_patterns to npu_patterns

* Change VPUX occurrences to NPU after review

* Remove VPUX device comment

* Change VPUX/vpu to NPU in tests/time_tests

* Rename VPU to NPU in docs after review

* Rename VPU to NPU in tools/pot after review

* Renamed vpu.json to npu.json in tools/pot after review

* Restore CommonTestUtils::DEVICE_KEEMBAY

---------

Co-authored-by: MirceaDan99 <mircea-aurelian.dan@intel.com>
2023-08-09 00:19:25 +04:00
Maciej Smyk
cac7e2e1c4 [DOCS] Change sample structure for 23.0 (#19058) 2023-08-08 14:18:48 +00:00
Karol Blaszczak
13e674b1f8 Docs installation guide restructuring port (#19054) 2023-08-08 16:11:51 +02:00
Maciej Smyk
a55d1c21ee [DOCS] Basic quantization flow additions for 23.0 (#19059) 2023-08-08 15:59:47 +02:00
Marcin Kacprzak
91a4f73971 * [GNA] Fix for GeminiLake detection (#18653) (#18994)
* [GNA] Added HWGeneration::GNA_1_0_E enumerator
* [GNA] Extended a few tests with GNA1.0

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
2023-08-08 00:01:13 +04:00
Przemyslaw Wysocki
84a3aab115 [PyOV] Backport of wheel building fix (#19013)
* Add upper bound

* backport flake fix

* Support of protobuf >= 21 (#18351)

* Corrected typo

* Ability to compile with newer protobuf versions

* Limit numpy (#18406)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)" (#18681)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)"

This reverts commit 787796d88f.

* Suppressed clang warning

* Restrict scipy module version for POT (#18237)

* Restrict scipy module version for POT

Latest release https://pypi.org/project/scipy/1.11.0 causes dependency conflicts

* Bump OMZ to include scipy restriction

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-07 18:23:05 +04:00
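The pinning commits in the entry above ("Add upper bound", "Limit numpy", "Restrict scipy module version for POT") all apply the same guard: keep an installed version inside a half-open range. A rough sketch of that check, assuming simple dotted versions (real tooling would use pip's specifier handling; `satisfies` is a made-up helper):

```python
def as_tuple(version: str) -> tuple:
    return tuple(int(x) for x in version.split("."))

def satisfies(installed: str, lower: str, upper: str) -> bool:
    # Half-open range, matching a specifier like "scipy>=1.8,<1.11".
    return as_tuple(lower) <= as_tuple(installed) < as_tuple(upper)

assert satisfies("1.10.1", "1.8.0", "1.11.0")
assert not satisfies("1.11.0", "1.8.0", "1.11.0")   # upper bound excluded
```

The exclusive upper bound is what keeps a release branch from silently picking up a new major/minor release (here, the scipy 1.11.0 release that caused the dependency conflict).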
Tatiana Savina
4ddeecc031 delete gna content (#19000) 2023-08-07 10:03:32 +02:00
Maciej Smyk
9c10e33fc7 Update model_optimization_guide.md (#18954) 2023-08-04 14:49:52 +02:00
Karol Blaszczak
c32b9a0cd5 [DOCS] fix ext directive 23.0 (#18989) 2023-08-04 12:23:34 +02:00
Karol Blaszczak
c32eef361b [DOCS] update prereleasenotes (#18958) 2023-08-03 13:13:01 +02:00
Sebastian Golebiewski
8d54bdd4d5 [DOCS] Compile CPU plugin for ARM platforms - for 23.0 (#18765)
* Update build_raspbian.md

* update-instructions

* remove-cross-compilation

* Update build_raspbian.md
2023-08-01 11:19:30 +02:00
Karol Blaszczak
64395f0d5e [DOCS] new benchmark data port 23.0 (#18873)
[DOCS] new benchmark data (#18532)
2023-08-01 08:36:26 +02:00
bstankix
9562161f76 Bugfix newsletter and footer scripts (#18854) 2023-07-28 16:12:37 +02:00
Maciej Smyk
cb59f057a0 [DOCS] Link fix for Get Started for 23.0 (#18799)
* Update get_started.md

* Update get_started.md
2023-07-26 12:25:55 +02:00
bstankix
28948502a9 [DOCS] Port newsletter and carousel changes from nightly (#18780) 2023-07-25 12:24:40 +00:00
Maciej Smyk
34748ae3b5 background fix for images (#18758) 2023-07-25 07:59:57 +02:00
bstankix
06eb4afd41 Port changes from nightly (#18743) 2023-07-24 12:00:22 +00:00
Karol Blaszczak
967d74ade6 [DOCS] conformance update port (#18735)
port: https://github.com/openvinotoolkit/openvino/pull/18732

conformance table
fix Add in ONNX layers
2023-07-24 11:44:23 +02:00
Sebastian Golebiewski
5ae4e2bb2d update (#18623) 2023-07-20 14:31:42 +02:00
Karol Blaszczak
22f6a3bcc0 [DOCS] minor MO fixes (#18606) 2023-07-18 16:22:04 +02:00
Sebastian Golebiewski
e842453865 realignment (#18621) 2023-07-18 16:03:42 +02:00
Maciej Smyk
2abbec386f Update configurations-for-intel-gpu.md (#18610) 2023-07-18 12:35:43 +02:00
Sebastian Golebiewski
afb2ebcdd4 [DOCS] Updating Interactive Tutorials for 23.0 (#18556)
* update-notebooks

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

---------

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>
2023-07-14 14:07:40 +02:00
Karol Blaszczak
83e45c5ff3 [DOCS] GNA disclaimer port (#18507)
port: https://github.com/openvinotoolkit/openvino/pull/18431
2023-07-12 12:40:28 +02:00
Maciej Smyk
bdb6a44942 [DOCS] Code block update for 23.0 (#18451)
* code-block-1

* Update Convert_Model_From_Paddle.md

* code-block force

* fix

* fix-2

* Update troubleshooting-steps.md

* code-block-2

* Update README.md
2023-07-11 10:50:03 +02:00
Maciej Smyk
17cd26077a Update installing-openvino-docker-linux.md (#18459) 2023-07-11 08:34:12 +02:00
Maciej Smyk
247eb8a9b9 [DOCS] Tab reorder for 23.0 (#18389)
* tabs-1

* Update configure_devices.md

* tab-2

* tab-order

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-linux.md

* win-linux-fix

* Update GPU_Extensibility.md
2023-07-07 14:31:05 +02:00
bstankix
68b8748c9f [DOCS] Add global footer
port: https://github.com/openvinotoolkit/openvino/pull/18374
2023-07-06 08:25:46 +02:00
Sebastian Golebiewski
852efa2269 [DOCS] Fix references in installation guide for 23.0 (#18384) 2023-07-06 08:04:42 +02:00
Karol Blaszczak
303fb7a121 [DOCS] menu bug fix 23.0 (#18353) 2023-07-04 07:59:17 +00:00
Tatiana Savina
7f1c6c8ce1 update links to rn (#18338) 2023-07-03 19:03:51 +02:00
Sebastian Golebiewski
55530b47c0 [DOCS] Adding metadata to articles for 2023.0 (#18332)
* adding-metadata

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-07-03 15:00:28 +02:00
Karol Blaszczak
69a6097a30 [DOCS] update for install 23.0.1 (#18335) 2023-07-03 14:57:57 +02:00
Karol Blaszczak
1f759456d6 [DOCS] Update Selector Tool 2023.0.1 (#18336)
authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-07-03 14:56:59 +02:00
Karol Blaszczak
b05a7f2ed6 [DOCS] adjustments for ST and cookie policy (#18316) 2023-07-03 08:47:00 +02:00
Tatiana Savina
f4709ffe8b [DOCS] Port docs to release branch (#18317)
* [DOCS] Local distribution page improvements (#18049)

* add slider with os specific libs

* doc review

* local distrib doc changes

* [DOCS] Added local distribution libraries path (#18191)

* add relative path to the table

* add another column

* new table format

* fix build issue

* fix tab name

* remove old table

* format fixes

* change font

* change path windows

* change tabset name

* add arm and 86_64 tables

* remove list dots

* [DOCS] Add FrontEnd API note (#18154)

* add note

* fix typo

* add advance cases note

* tf doc note

* wording change
2023-06-30 15:34:22 +02:00
Karol Blaszczak
bb1e353e58 [DOCS] supported models page update (#18298) 2023-06-29 15:23:18 +02:00
Karol Blaszczak
99c7bbc25e [DOCS] port selector tool to 23.0 (#18295)
port:
https://github.com/openvinotoolkit/openvino/pull/17799
https://github.com/openvinotoolkit/openvino/pull/18286

authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-06-29 14:36:59 +02:00
Maciej Smyk
33cfcb26fb [DOCS] WSL2 Docker update for 23.0 (#18293)
* windows-fix

* Update installing-openvino-docker-linux.md

* docker fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-06-29 13:26:25 +02:00
Artyom Anokhov
39c84e03f7 Updated 3lvl domain for RPMs from fedoraproject.org (#18289) 2023-06-29 11:47:46 +02:00
Karol Blaszczak
f59126dde0 [DOCS] reset the pre-release notes pages port 23 (#18276)
port: https://github.com/openvinotoolkit/openvino/pull/18177
2023-06-29 08:17:02 +02:00
Karol Blaszczak
209d506341 [DOCS] top bar fixes port 23.0
port: #18261

FAQ for pot gets drop-downs
Homepage css improvement
2023-06-28 14:13:38 +02:00
bstankix
a710adf81a Add sitemap configuration (#18271) 2023-06-28 13:50:42 +02:00
Artyom Anokhov
fa1c41994f Bump version to 2023.0.1. Updated conflicted version for APT/YUM (#18268) 2023-06-28 13:36:31 +02:00
bstankix
caae459f54 Change html_baseurl to canonical (#18253) 2023-06-27 10:25:37 +02:00
Karol Blaszczak
7ef5cbff30 [DOCS] benchmark update for OVMS 23.0 (#18010) (#18250) 2023-06-27 09:16:16 +02:00
Maciej Smyk
85956dfa4d [DOCS] Debugging Auto-Device Plugin rst shift + Notebooks installation id align for 23.0 (#18241)
* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* notebooks id fix

* fixes

* Update AutoPlugin_Debugging.md
2023-06-27 08:15:51 +02:00
Maciej Smyk
2d98cbed74 [DOCS] Table directive update + Get Started fix for 23.0 (#18217)
* Update notebooks-installation.md

* Update notebooks-installation.md

* Update performance_benchmarks.md

* Update openvino_ecosystem.md

* Update get_started_demos.md

* Update installing-model-dev-tools.md

* Update installing-model-dev-tools.md

* Update installing-openvino-brew.md

* Update installing-openvino-conda.md

* fix

* Update installing-openvino-apt.md

* Update installing-openvino-apt.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-windows.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-windows.md

* tabs

* fixes

* fixes2

* Update GPU_RemoteTensor_API.md

* fixes

* fixes

* Get started fix
2023-06-26 10:27:12 +02:00
Sebastian Golebiewski
5d47cedcc9 updating-tutorials (#18213) 2023-06-26 09:42:21 +02:00
Ilya Lavrenov
9ab5a8f5d9 Added cmake_policy call to allow IN_LIST in if() (#18226) 2023-06-24 22:51:54 +04:00
Maciej Smyk
ad84dc6205 [DOCS] Docker and GPU update for 23.0 (#17851)
* docker gpu update

* ref fix

* Update installing-openvino-docker-linux.md

* fixes

* Update DeviceDriverVersion.svg

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* fixes from review

* Update configurations-for-intel-gpu.md

* Update configurations-for-intel-gpu.md

* Update deployment_migration.md

---------

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>
2023-06-22 15:47:59 +02:00
bstankix
bd3e4347dd [DOCS] gsearch 2023-06-22 13:08:00 +02:00
Tatiana Savina
0adf0e27ee [DOCS] Port docs fixes (#18155)
* change classification notebook (#18037)

* add python block (#18085)
2023-06-21 11:47:37 +02:00
Tatiana Savina
cb7cab1886 [DOCS] shift to rst - opsets F,G (#17253) (#18152)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-06-20 14:22:13 +02:00
Georgy Krivoruchko
fd48b0bbdc [TF FE] Workaround for Broadcast/Concat issue with empty tensors (#18140)
* Added transformation for Concat
* Added test
* CI fix
* Fixed behavior of the "empty tensor list" test
2023-06-20 13:13:55 +04:00
Mateusz Bencer
691630b68c [PORT TO 23.0][ONNX FE] Allow to mix new and legacy extensions (#18116)
* [ONNX FE] Allow to mix new and legacy extensions

* added unit test

* Update op_extension.cpp

Fixed compilation with Conan

Ported https://github.com/openvinotoolkit/openvino/pull/18126

* Update op_extension.cpp

Fixed code style

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-17 13:17:37 +00:00
Nikita Malinin
205feb9421 ENABLE_MMAP property pos (#17896) (#18106)
(cherry picked from commit 29f06692d6)
2023-06-17 12:48:47 +04:00
Georgy Krivoruchko
5ef750d5b3 Fixed Windows behavior if folder path on input (#18113) 2023-06-16 16:39:33 +00:00
Zlobin Vladimir
80fddfe1c2 Update open_model_zoo submodule (#18110)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3790
2023-06-16 18:19:07 +04:00
Maciej Smyk
7eb59527a0 [DOCS] CMake options description in build guide for 23.0 2023-06-16 10:37:53 +02:00
Sebastian Golebiewski
21fdda5609 [DOCS] Restyling tabs for 23.0
Porting: #18054

Introducing changes in css style for tabs from sphinx-design extension.
2023-06-14 11:43:10 +00:00
Tatiana Savina
9983f74dc7 fix link (#18050) 2023-06-14 08:54:11 +00:00
Maciej Smyk
ef0b8161c9 Update build_linux.md (#18046) 2023-06-14 10:45:16 +02:00
Sebastian Golebiewski
9e2dacbc53 [DOCS] Restyling elements on home page - for 23.0 2023-06-13 08:50:20 +02:00
Sebastian Golebiewski
d299be4202 [DOCS] Fixing formatting issues in articles - for 23.0 (#18004)
* fixing-formatting
2023-06-13 07:59:16 +02:00
Tatiana Savina
99fe2e9bdc add tabs (#18007) 2023-06-12 16:45:34 +02:00
Karol Blaszczak
6668ec39d7 [DOCS] Adding Datumaro document into OV Ecosystems (#17944) (#17968)
* add Datumaro document
* add datumaro into toctree

authored-by: Wonju Lee <wonju.lee@intel.com>
2023-06-09 13:22:43 +02:00
Maciej Smyk
1e5dced9d4 Update build_linux.md (#17967) 2023-06-09 15:03:45 +04:00
Zlobin Vladimir
7d73bae243 Update open_model_zoo submodule (#17902)
* Update open_model_zoo submodule

Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3779

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-09 13:52:38 +04:00
Sebastian Golebiewski
d8d4fb9c94 [DOCS] Fixing broken code blocks for 23.0 (#17960)
* code-blocks-fixes
2023-06-09 09:08:27 +02:00
Ilya Lavrenov
11cde296b7 Updated refs to dependency repositories (#17953) 2023-06-08 20:14:48 +04:00
Ilya Lavrenov
44f8dac403 Align tabs in install archives Linux (#17947) (#17950) 2023-06-08 14:49:28 +02:00
Tatiana Savina
41b4fd1057 add enter dir (#17897) 2023-06-08 13:08:41 +02:00
Sebastian Golebiewski
0f89782489 update-deployment-manager (#17904) 2023-06-08 10:31:50 +04:00
Tatiana Savina
d894716fad [DOCS] Add sudo to uninstall (#17929)
* add sudo to uninstall

* Update uninstalling-openvino.md
2023-06-07 18:18:12 +02:00
Tatiana Savina
f6fd84d2e1 fix archive link (#17918) 2023-06-07 09:19:05 +00:00
Tatiana Savina
648b2ad308 [DOCS] Model optimization paragraph fix (#17907)
* fix mo guide paragraph

* fix format

* fix paragraph

* remove extra line
2023-06-07 10:45:01 +02:00
Tatiana Savina
ea5c1b04e5 [DOCS] Fix list and links to POT (#17887)
* change link to POT

* change header label

* fix typo
2023-06-06 10:59:05 +02:00
Karol Blaszczak
f3d88cbf99 DOCS post-release adjustments (#17876) 2023-06-05 15:43:45 +02:00
Tatiana Savina
e824e482b1 fix apt and yum links (#17877) 2023-06-05 13:11:21 +03:00
Sebastian Golebiewski
e4d0021e2c update-diagram (#17872) 2023-06-05 08:17:26 +02:00
Artyom Anokhov
e74cb4084d [docs] Conda update (#17861)
* Adding installing OV via Conda-Forge for MacOS

* Adding section Compiling with OpenVINO™ Runtime from Conda-Forge

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* installing-openvino-conda: Fixed title

* installing-openvino-macos-header: Fixed order for links

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-06-02 21:54:11 +04:00
Tatiana Savina
e843e357cd change notebooks links (#17857) 2023-06-02 13:14:26 +02:00
Tatiana Savina
ecc502733d [DOCS] Change downloads directory link (#17846)
* installation link

* fix path
2023-06-01 19:04:02 +04:00
bstankix
d1de793552 Port default platform type selection from nightly (#17845) 2023-06-01 15:46:22 +02:00
Karol Blaszczak
ebaf6a2fcb DOCS homepage update (#17843) 2023-06-01 15:16:20 +02:00
Anton Voronov
88b006bce9 [DOC] cpu documentation fixes (#17816)
* [DOC] cpu documentation fixes

* fixed typos
2023-06-01 10:17:48 +02:00
Ilya Lavrenov
4aae068125 Update archive names for 2023.0 release (#17831) 2023-06-01 12:07:06 +04:00
Ilya Lavrenov
41c37c8af9 Updated badges for 2023.0 (#17832) 2023-06-01 12:03:20 +04:00
Sebastian Golebiewski
f40f0fa58b [DOCS] convert_model() as a default conversion path - for 23.0 (#17751)
Porting: https://github.com/openvinotoolkit/openvino/pull/17454

Updating MO documentation to make convert_model() a default conversion path.
2023-05-31 19:22:54 +02:00
Tatiana Savina
20dc436b6f DOCS Fix build links (#17821)
* change doc vers

* fix links
2023-05-31 17:45:57 +02:00
Sebastian Golebiewski
b2b7a57a4c update-tutorials (#17812) 2023-05-31 15:48:08 +02:00
Tatiana Savina
4481bfa17e [DOCS] Review release docs (#17793)
* review docs

* fix link to notebook

* fix build

* fix links

* remove bracket
2023-05-31 15:46:53 +02:00
Sebastian Golebiewski
366a5467d1 [DOCS] benchmark 23.0 update - port from master (#17806)
Porting: #17789

new benchmarking data
2023-05-31 12:58:49 +02:00
Karol Blaszczak
4be1dddb21 DOCS operation support articles update (#17449) (#17809)
port: #17449

conformance table added
ARM merged with CPU
precision support and layout tables removed from the overview device article (info available in device articles)
2023-05-31 10:56:25 +00:00
Ilya Lavrenov
3fd9b8c3b7 Updated install docs for 2023.0 (#17764) 2023-05-31 13:37:30 +04:00
Maciej Smyk
66528622a8 [DOCS] Link adjustment for dev docs + fix to build.md CPU link for 23.0 (#17747)
Port from #17744

JIRA Ticket: 110042

Update of hardcoded links to switch references from latest, nightly and 2022.3 (and earlier) to 2023.0.

JIRA Ticket: 111393

Fix for the Mac (Intel CPU) link name (it should be Intel CPU instead of Intel GPU).
2023-05-31 11:34:22 +02:00
Tatiana Savina
4fb2cebf28 [DOCS] Compile tool docs port (#17753)
* [DOCS] Compile tool docs change (#17460)

* add compile tool description

* change refs

* remove page to build docs

* doc reference fix

* review comments

* fix comment

* snippet comment

* Update docs/snippets/compile_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* change snippet name

* create ov object

* code block fix

* cpp code block

* include change

* code test

* change snippet

* Update docs/snippets/export_compiled_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fixed compile_tool install (#17666)

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-31 13:27:01 +04:00
Wanglei Shen
c18a24c05b [DOC] Add multi threading for 2023.0 release in CPU plugin document (#17788) 2023-05-31 12:54:42 +04:00
Anton Voronov
95f0005793 [DOC][CPU] Documentation update (#17786) 2023-05-31 10:37:14 +04:00
bstankix
9ac239de75 Port ability to build notebooks from local files from nightly (#17798) 2023-05-30 16:27:11 +02:00
Tatiana Savina
ad5c0808a6 add pad1 (#17760) 2023-05-30 10:39:05 +02:00
Tatiana Savina
66c6e125cf [DOCS] Port workflow docs (#17761)
* [DOCS]  Deploy and run documentation sections (#17708)

* first draft

* change name

* restructure

* workflow headers change

* change note

* remove deployment guide

* change deployment description

* fix conflicts

* clean up conflict fixes
2023-05-30 10:38:47 +02:00
Maciej Smyk
53bfc41a74 [DOCS] Configuring devices article update for 2023.0 (#17757)
* Update configure_devices.md
2023-05-29 09:02:25 +02:00
Karol Blaszczak
9b72c33039 [DOCS] install-guide fix port to 23.0 (#17672) 2023-05-25 18:39:03 +02:00
Zlobin Vladimir
c0e9e1b1a1 Update open_model_zoo submodule (#17733)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3770

Ticket: 110042
2023-05-25 15:56:45 +00:00
Daria Mityagina
720e283ff1 Update comments and help text (#17710) 2023-05-24 22:12:27 +02:00
Tatyana Raguzova
0e87a28791 [build_samples] Using make instead of cmake (#17560) 2023-05-24 22:43:42 +04:00
Ilya Lavrenov
6d17bbb7e9 Conan port (#17625) 2023-05-24 22:07:50 +04:00
Maxim Vafin
cebbfe65ac [DOCS] Add examples of using named outputs in extensions (#17622)
* [DOCS]Add examples of using named outputs in extensions

* Fix opset

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Extensibility_UG/frontend_extensions.md

* Add reference to external docs

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-05-24 14:15:01 +02:00
Aleksandr Voron
c4c6567182 [DOCS][CPU] Update ARM CPU plugin documentation (#17700) 2023-05-24 15:42:32 +04:00
Karol Blaszczak
1a9ce16dd6 [DOCS] framework deprecation notice (#17484) (#17537)
Port: #17484
A new PR will be created with more changes, as suggested by jane-intel and slyalin. The "deprecated" label for articles and additional content on converting models to ONNX will be covered then.
2023-05-24 11:53:56 +02:00
Maciej Smyk
4e8d5f3798 [DOCS] link fix (#17658) 2023-05-23 07:31:19 +02:00
Przemyslaw Wysocki
7351859ec2 limit linter (#17624) 2023-05-22 23:58:06 +04:00
Ilya Lavrenov
405c5ea03a Install libtbb12 on U22 (#17653) 2023-05-22 17:52:59 +04:00
Sebastian Golebiewski
183253e834 [DOCS] Update Interactive Tutorials - for 23.0 (#17600)
port: https://github.com/openvinotoolkit/openvino/pull/17598/
2023-05-22 14:46:14 +02:00
Maciej Smyk
cfea37b139 [DOCS] RST fixes for 23.0 (#17606)
* fixes
2023-05-22 10:33:32 +02:00
Tatiana Savina
34f00bd173 DOCS Update optimization docs with NNCF PTQ changes and deprecation of POT (#17398) (#17633)
* Update model_optimization_guide.md, ptq_introduction.md, Introduction.md, basic_quantization_flow.md, quantization_w_accuracy_control.md, FrequentlyAskedQuestions.md, home.rst (multiple editing passes)

* added code snippet (#1)

* Update introduction.md

* Update basic_quantization_flow.md code blocks

* Update quantization_w_accuracy_control.md code snippets

* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py, ptq_tensorflow.py, ptq_onnx.py, ptq_aa_openvino.py, ptq_openvino.py

* Delete ptq_introduction.md

* Optimization docs proofreading (#2): images updated, delete reminder, review, text review, change images to original ones, Update filter_pruning.md code blocks

* Update images (#3): update images, resolve conflicts, change images to original ones

* table format fix

* Update headers

* Update qat.md code blocks

---------

Co-authored-by: Maksim Proshin <maksim.proshin@intel.com>
Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>
2023-05-19 15:37:41 +00:00
Tatiana Savina
17326abb72 [MO][TF FE] Document freezing as essential step for pruning SM format (#17595) (#17632)
* [MO][TF FE] Document freezing as essential step for pruning SM format



* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md



---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-19 15:32:57 +00:00
Ilya Lavrenov
8601042bea Added python 3.11 for deployment tool (#17627) 2023-05-19 18:08:49 +04:00
Artyom Anokhov
39958e0dc1 Updated APT/YUM instructions with actual version. Added instructions for Ubuntu22. Updated subfolders naming for APT. (#17561) 2023-05-19 12:46:40 +02:00
Maciej Smyk
6fc9840e32 [DOCS] Link adjustment for 23.0 (#17604) 2023-05-18 15:10:13 +02:00
Ekaterina Aidova
b4452d5630 update OMZ submodule to fix bug (#17570) 2023-05-17 05:51:59 -07:00
Evgenya Stepyreva
4c69552656 Normalize_L2 relax constant input restriction (#17567)
* Normalize_L2 relax constant input restriction

* Fix warning treated as error during windows build
2023-05-17 12:37:02 +00:00
Maciej Smyk
6d8b3405ca [DOCS] Precision Control article for 23.0 (#17573)
Port from: https://github.com/openvinotoolkit/openvino/pull/17413

Added separate article on Precision Control (ov::hint::execution_mode and ov::inference_precision properties)
2023-05-17 11:21:05 +00:00
Evgenya Stepyreva
4c2096ad9c Strided Slice fix constant creation (#17557)
* Strided Slice fix constant creation

* Apply suggestions from code review

* Final touches
2023-05-16 13:53:57 +00:00
Aleksandr Voron
0c67b90f47 [CPU][ARM] Dynamic shapes support in ARM transformations (#17517) 2023-05-16 13:10:34 +04:00
Jan Iwaszkiewicz
83f51e0d00 [PyOV][Backport] Remove numpy strides from Tensor creation (#17535)
* [PyOV] Remove numpy strides from Tensor creation

* [PyOV] Add test for stride calculation

* [PyOV] Fix flake issue
2023-05-16 09:04:56 +04:00
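As context for the stride-handling change above, a minimal sketch of how C-order (row-major) strides follow from a shape and item size — `c_contiguous_strides` is a hypothetical helper for illustration, not part of the PyOV API:

```python
import numpy as np

def c_contiguous_strides(shape, itemsize):
    # Row-major strides in bytes: the last axis steps by one element,
    # each earlier axis steps by the size of everything after it.
    strides = []
    step = itemsize
    for dim in reversed(shape):
        strides.append(step)
        step *= dim
    return tuple(reversed(strides))

a = np.zeros((2, 3, 4), dtype=np.float32)
assert a.strides == c_contiguous_strides(a.shape, a.dtype.itemsize)
```

For a freshly allocated (contiguous) array these computed strides always match what NumPy reports, which is why explicit strides can be dropped for that case.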
Dmitry Kurtaev
8bb2a2a789 [CMake] Add CMAKE_MAKE_PROGRAM arg (#17340)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-15 17:22:27 +04:00
Pawel Raasz
c9cfd6755c [Core] StridedSlice improvements of bound evaluation and constant folding (#17536)
* StridedSlice improvements:
-Bound evaluation for begin, end partial values when ignore mask set.
- Custom constant fold implementation.

* Improve const folding when all begin or end values
are ignored
2023-05-15 12:24:36 +00:00
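The begin/end mask semantics that make the above bound evaluation possible can be sketched in 1-D — `strided_slice_1d` is a hypothetical illustration of the mask convention (a set mask bit means the bound is ignored), not the actual Core implementation:

```python
def strided_slice_1d(data, begin, end, begin_mask, end_mask):
    # When a mask bit is set, the corresponding bound is ignored and the
    # full extent is taken, so partial values of begin/end do not matter.
    b = None if begin_mask else begin
    e = None if end_mask else end
    return data[b:e]
```

With both masks set, the slice covers the whole axis regardless of begin/end, which is exactly the case where constant folding stays possible.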
Karol Blaszczak
c0060aefa7 Prepare "memory_optimization_guide.md" (#17022) (#17498)
---------

Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
2023-05-15 10:48:32 +02:00
Gorokhov Dmitriy
8a97d3c0e1 [CPU] Restore OneDNN InnerProduct primitive os_block computation behavior (#17462) 2023-05-12 15:50:10 +04:00
Wanglei Shen
c5fd3300a2 HOT FIX: disable set_cpu_used in 2023.0 release (#17456)
* disable set_cpu_used in 2023.0 release

* fix code style issue
2023-05-12 14:16:42 +08:00
Mateusz Mikolajczyk
a7f6f5292e Add missing check for special zero (#17479) 2023-05-12 09:30:55 +04:00
Maxim Vafin
804df84f7d Add transformation to convert adaptive pool to reduce (#17478)
* Add transformation to convert adaptive pool to reduce

* Update src/common/transformations/src/transformations/common_optimizations/moc_transformations.cpp

* Add tests and apply feedback

* Simplify if branches
2023-05-11 15:51:26 +00:00
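The core equivalence behind this transformation, for the simple 1x1-output case, is that adaptive average pooling over NCHW input reduces to a mean over the spatial axes — a sketch in plain NumPy:

```python
import numpy as np

# AdaptiveAvgPool with a 1x1 target output is a ReduceMean over the
# spatial axes (H, W) with keep_dims=True.
x = np.arange(24, dtype=np.float32).reshape(1, 2, 3, 4)
reduced = x.mean(axis=(2, 3), keepdims=True)
assert reduced.shape == (1, 2, 1, 1)
```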
Evgenya Stepyreva
1e49a594f7 [Shape inference] Pooling: Dimension div fix (#17197) (#17471)
* Dimension div fix

* codestyle fixes

* Convolution labels propagation test instances corrected

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-05-11 14:36:17 +04:00
Tatiana Savina
d5ac1c2e5c [DOCS] Port update docs for TF FE (#17464)
* [TF FE] Update docs for TF FE (#17453)

* Update tensorflow_frontend.md

* Update docs/resources/tensorflow_frontend.md

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-11 01:03:47 +04:00
Karol Blaszczak
afb2ae6b7a [DOCS] Update GPU.md (#17400) 2023-05-10 17:30:22 +02:00
Maxim Vafin
c5623b71cf Remove possibility to export to ONNX (#17423)
* Remove possibility to export to ONNX

* Apply suggestions from code review

* Fix tests and docs

* Workaround function inputs

* Fix code style
2023-05-10 16:35:54 +04:00
Maxim Vafin
152b11e77f Remove section about --use_legacy_frontend for PyTorch models (#17441) 2023-05-09 20:30:27 +04:00
Mateusz Tabaka
5adf3b5ca8 [TF frontend] use InterpolateMode::LINEAR_ONNX if input rank is 4 (#17406)
This change mimics the LinearToLinearONNXReplacer transformation in the
legacy frontend, where the linear interpolate mode is replaced with
linear_onnx for performance reasons.

Ticket: CVS-108343
2023-05-09 14:52:35 +02:00
Fang Xu
a2ccbdf86e Update oneTBB2021.2.2 for 2023.0 (#17367)
* update oneTBB2021.2.2 for windows

* update SHA256

* update SHA256

oneTBB https://github.com/oneapi-src/oneTBB/releases/tag/v2021.2.2 (a25ebdf)

* add print for hwloc which is not found

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-05-09 05:49:33 -07:00
Roman Kazantsev
1440b9950f [TF FE] Handle incorrect models (empty, fake) by TF FE (#17408) (#17432)
* [TF FE] Handle incorrect models (empty, fake) by TF FE



* Apply suggestions from code review

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-09 16:30:51 +04:00
Mateusz Tabaka
d88d4d22e8 Update docs for frontend extensions (#17428) 2023-05-09 13:27:40 +02:00
Tatiana Savina
ea79006a0a DOCS Port shift to rst - Model representation doc (#17385)
* DOCS shift to rst - Model representation doc (#17320)

* model representation to rst

* fix indentation

* fix code tabs

* fix indentation

* change doc

* fix snippets

* fix snippet

* port changes

* dev docs port
2023-05-09 10:33:12 +02:00
Pavel Esir
a4ff3318ea renumber FAQ (#17376) 2023-05-09 11:35:55 +04:00
Anastasiia Pnevskaia
44e7a003e7 Removed checks of unsatisfied dependencies in MO (#16991) (#17419)
* Fixed dependencies check, made unsatisfied dependencies show only in case of error.

* Small fix.

* Test correction.

* Small test correction.

* Temporarily added debug print.

* Debug output.

* Debug output.

* Debug output.

* Test fix.

* Removed debug output.

* Small fix.

* Moved tests to check_info_messages_test.py

* Remove dependency checks from MO.

* Small corrections.
2023-05-09 11:32:14 +04:00
Przemyslaw Wysocki
fa4112593d [Backport] OMZ submodule bump for Python 3.11 (#17325)
* backport

* Update sha
2023-05-08 18:49:28 +02:00
Surya Siddharth Pemmaraju
45e378f189 Added Torchscript backend (#17328)
* Added Torchscript backend

* Added some torchscript backend tests to ci

* Removed tests from CI as torch.compile doesn't support 3.11 currently

* Fixed linter issues

* Addressed PR comments and linter issues
2023-05-08 03:44:10 -07:00
Maciej Smyk
9320cbaa8c [DOCS] Recreation of BDTI PRs - 23.0 (#17383)
Porting: https://github.com/openvinotoolkit/openvino/pull/16913

Recreation of BDTI PRs for master.

Recreated PRs:

Docs: Update Dynamic Shapes documentation #15216
Docs: Edits to Performance Hints and Cumulative Throughput documentation #14793
Docs: Update Devices pages to state improved INT8 performance with 11th & 12th gen devices #12067
2023-05-08 10:36:56 +00:00
Maciej Smyk
718b194ad6 [DOCS] Legacy MO Extensibility update for 23.0
porting: https://github.com/openvinotoolkit/openvino/pull/15931

Divided MO Extensibility article into separate smaller articles,
Applied the suggestion from [DOCS] Better statement about MO extensions as internal API [Recreating #14062] #15679
Recreated images in svg format
Fixed directives
2023-05-08 12:25:37 +02:00
Maxim Vafin
8241540609 [PT FE] Improve exception when decoder cannot trace or script the model (#17338) (#17347)
* [PT FE] Improve exception when decoder cannot trace or script the model

* Add exception in convert_model

* Add test
2023-05-08 09:22:47 +04:00
Maxim Vafin
10d87b7332 [PT FE] Support default strides for avg and max pooling (#17337) (#17348)
* Support default strides for avg and max pooling

* Fix code style

* Remove changes from other ticket
2023-05-08 09:21:53 +04:00
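The default-stride rule being handled here follows PyTorch's pooling semantics, where an omitted stride defaults to the kernel size — a small sketch with a hypothetical normalization helper:

```python
def normalize_pool_strides(kernel_size, strides):
    # In PyTorch avg/max pooling, an absent stride defaults to the kernel
    # size, producing non-overlapping pooling windows.
    return kernel_size if not strides else strides

assert normalize_pool_strides((2, 2), None) == (2, 2)
assert normalize_pool_strides((2, 2), (1, 1)) == (1, 1)
```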
Karol Blaszczak
386d773b33 [DOCS] fix typos in install guides (#17388) 2023-05-08 07:12:38 +02:00
Sun Xiaoxia
a5312f70db fix binding wrong core with latency mode in i9-13900 (#17364) 2023-05-06 17:17:11 +08:00
Roman Kazantsev
8f113ef24e [TF FE] Provide single tensor names for inputs and outputs in SavedModel (#17373)
* [TF FE] Provide single tensor names for inputs and outputs in SavedModel

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

* Xfail some cases due to internal problems in TF

* Xfail other layer test

* Extend documentation for function to adjust tensor names

* Use old path of tf2 layer testing for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-05 19:06:59 +02:00
Sebastian Golebiewski
c651bc5f87 [DOCS] Fix links - port (#17356) 2023-05-05 18:14:38 +02:00
Karol Blaszczak
12aab024d1 [GPU] Update dynamic shape document (#17274) (#17384)
porting: https://github.com/openvinotoolkit/openvino/pull/17384

* Update dynamic shape document for GPU
* Applied review comments

authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2023-05-05 17:58:19 +02:00
Ivan Tikhonov
3978511c5c Fix the names copying in TransposeSinking backward transformations (#17283) (#17344)
* Fix tensor names copying in TS transformations

* added a check that sinking is available for all consumers in TS backward transformations

* codestyle

* Apply review comments, add result sorting by tensor names in graph comparator

* delete debug code

* fix RemoveConsumers method implementation

* fix snippet tests

* use reference instead of raw pointer

* add new transformation tests

* fix transformation tests

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-05-05 17:09:44 +02:00
Chen Xu
0de0efd751 [CPU] Fix kernel precision mismatch in Reduce node (#17372)
* [CPU] Fix kernel precision mismatch in Reduce node

* Apply review comments
2023-05-05 14:39:30 +02:00
Sebastian Golebiewski
53e2997909 DOCS shift to rst (#17377) 2023-05-05 10:55:03 +02:00
Maciej Smyk
7779fea76f DOCS shift to rst - Opsets E for 23.0 (#17365) 2023-05-05 10:17:05 +02:00
Sebastian Golebiewski
c785551b57 DOCS shift to rst (#17346) 2023-05-04 13:29:16 +02:00
Roman Kazantsev
8c95c90e45 [TF FE] Use original input types for SavedModel (#17295) (#17335)
Also, refactor TF FE unit-tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 16:26:33 +04:00
Evgenya Stepyreva
bf829eead4 NMS-5 calculate upper-bound (#17332)
* NMS-5 calculate upper-bound

* Test
2023-05-03 15:22:08 +04:00
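The upper bound being computed for NMS-5 outputs can be stated directly: each class in each batch selects at most min(num_boxes, max_output_boxes_per_class) boxes. A sketch (hypothetical helper name, following the NonMaxSuppression spec):

```python
def nms_selected_upper_bound(num_batches, num_classes, num_boxes,
                             max_output_boxes_per_class):
    # Worst case: every class in every batch fills its per-class quota,
    # capped by the number of candidate boxes available.
    return num_batches * num_classes * min(num_boxes, max_output_boxes_per_class)

assert nms_selected_upper_bound(2, 3, 100, 10) == 60
```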
Roman Kazantsev
1141e90435 [MO][TF FE] Handle constant with undefined value (#17311) (#17327)
Since TF 2.10, native model freezing can produce constants with undefined value,
i.e. the tensor shape can be arbitrary and the value is []. In this case the tensor
is filled with the default value (0 for numerics, "" for strings)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 12:09:57 +04:00
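The default-fill behavior described above can be sketched as follows — `materialize_undefined_constant` is a hypothetical helper illustrating the rule, not the MO/TF FE code itself:

```python
import numpy as np

def materialize_undefined_constant(shape, dtype):
    # Undefined constant values are materialized with the type default:
    # 0 for numeric types, "" for strings.
    if dtype is str:
        return np.full(shape, "", dtype=object)
    return np.zeros(shape, dtype=dtype)

assert materialize_undefined_constant((2, 2), np.float32).sum() == 0
assert materialize_undefined_constant((3,), str).tolist() == ["", "", ""]
```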
Roman Kazantsev
15b62d77cc [TF FE] Added additional pruned inputs for MetaGraph support (#17237) (#17326)
* Added handling of additional pruned inputs
Added possible topology of RestoreV2 -> AssignVariableOp
Added additional checks

* Extended tests coverage

Co-authored-by: Georgy Krivoruchko <georgy.krivoruchko@intel.com>
2023-05-03 12:09:20 +04:00
Maxim Vafin
e6347544e2 Fix issue with Pow when both inputs are scalars (#17305) (#17321)
* Fix issue with Pow when both inputs are scalars

* Fix code style
2023-05-03 11:32:13 +04:00
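The scalar-input case for Pow boils down to a rank-propagation rule: with two rank-0 inputs the result must also be rank-0, as a quick NumPy check shows:

```python
import numpy as np

# With two rank-0 (scalar) inputs, broadcasting yields a rank-0 result;
# the fix ensures the frontend preserves this instead of producing a 1-D tensor.
a = np.array(2.0)
b = np.array(3.0)
out = np.power(a, b)
assert out.shape == () and float(out) == 8.0
```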
Anton Voronov
fcf261a048 [DOC] small fix for sparse weights decompression feature documentation (#17316) 2023-05-02 15:50:48 +02:00
Tatiana Savina
bba9f3094b [DOCS] Port docs: opsets, import keyword, deprecated options (#17289)
* Added missing import keyword (#17271)

* [DOCS] shift to rst - opsets N (#17267)

* opset to rst

* change list indentations

* fix formula

* add n operations

* add negative and nonzero

* fix link

* specs to rst

* fix matrixnms path

* change path to if

* fix list

* fix format

* DOCS remove deprecated options (#17167)

* DOCS remove deprecated options

* removed a couple more not actual questions

* remove the whole lines completely

* remove a couple of more deprecations

---------

Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
2023-05-02 14:05:03 +02:00
Sergey Shlyapnikov
aa13ab63f5 [GPU] Use BFS processing order for out_of_order queue (#17304) 2023-05-02 15:25:21 +04:00
Tatiana Savina
8f978d2c60 update OTE and Datumaro links (#17269) (#17310) 2023-05-02 13:14:21 +02:00
Sebastian Golebiewski
a349ba7295 DOCS shift to rst - Opsets H & I - for 23.0 (#17307)
* update

* update

* cpp
2023-05-02 11:16:21 +02:00
Vladimir Paramuzov
73442bbc82 [GPU] Don't throw exception if no devices are found (#17302)
* [GPU] Don't throw exception if no devices are found

* Fix CAPI test
2023-05-01 23:18:51 +04:00
Tatiana Savina
76c237da8b [DOCS] Document Model Optimizer Python API port (#17287)
* [DOCS] Document Model Optimizer Python API (#14380)

* Added MO convert_model() documentation.

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Updated Convert_Model pages for PyTorch and TF with PythonAPI info. United TF2 and TF formats lists.

* Added info on flag params, example_input formats list, small corrections.

* Moved MO python API to separate doc. Small text corrections.

* Added TF types conversion description.

* Removed duplicating info.

* Added description of InputCutInfo types and default onnx opset.

* Small correction.

* Changed type table to bullets, added blank lines.

* Added quote marks.

* Removed redundant bullets.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review.

* Added new line.

* Apply comments from review.d

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added description of lists of parameters.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* Added details about input_shape, example_input.

* Updated PyTorch page.

* Corrected input_signature description.

* Format correction.

* Format correction.

* Format correction.

* Format correction.

* Small correction.

* Small correction.

* Removed input_signature param description.

* Updated text.

* Small correction.

* Small correction.

* Removed not needed examples.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added new line.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Added titles of examples.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix first paragraph

* Update MO_Python_API.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-05-01 15:19:11 +04:00
Roman Lyamin
aebea2337e [GPU] Coverity fixes (#17241) (#17281) 2023-05-01 14:35:50 +04:00
Ilya Lavrenov
29c672d6d8 Fixed Python API build for Ubuntu 22.04 with python3.11 (#17297)
* Fixed Python API build for Ubuntu 22.04 with python3.11

* Update ONNX CI docker to test python 3.11 and system pybind11
2023-04-29 03:38:01 +04:00
Maksim Doronin
1f790df33c Fix enable_plugins_xml (#17293) 2023-04-29 00:02:43 +04:00
Ilya Lavrenov
5625424b91 Fixes for OpenCL via brew package (#17273) 2023-04-28 18:10:30 +04:00
Tatiana Savina
c7d0df39b5 remove pre-release note (#17265) 2023-04-28 13:04:31 +02:00
Alina Kladieva
85b57ea2bf Bump Azure refs to 2023/0 (#17264) 2023-04-27 22:09:27 +04:00
Sun Xiaoxia
893b29eab4 HOT FIX: Xiaoxia/fix read wrong data in multi-threading (#17240)
* fix threading test sporadic failure

* fix read wrong data in multi-threading

* fix read and write sync

* add lock before cpu._cpu_mapping_table[i][CPU_MAP_USED_FLAG], because CPU_MAP_USED_FLAG may be modified by set_cpu_used
2023-04-27 22:44:44 +08:00
Maxim Vafin
1d443c6da6 Fix problems with pytorch models passed to convert_model (#17255)
* Do eval() only for torch Module

* Add test

* Support decoder in convert_model

* Enable tests
2023-04-27 18:33:46 +04:00
Mateusz Bencer
2b8a6ba99a condition to print warnings (#17195) 2023-04-27 13:43:32 +00:00
Jan Iwaszkiewicz
ca02336c1b [PyOV] Bump pybind to 2.10.4 (#17251) 2023-04-27 16:41:45 +04:00
Artyom Anokhov
e06c4cc6fd CMakeLists: Changed FATAL_ERROR to Warning in case of OpenCLHeaders not found (#17260) 2023-04-27 16:22:53 +04:00
Ivan Tikhonov
40bf400b18 Add FakeQuantize op support in TS transformations (#17243)
* Add FQ op support in TS transformations

* codestyle

* Mark FQ as supported op in the TS ops list
2023-04-27 15:09:07 +04:00
Nikolay Shchegolev
22bb3af7df [CPU] Disable test case with sporadic failure. (#17256) 2023-04-27 14:06:33 +04:00
Sebastian Golebiewski
c0767a7e27 [DOCS] TensorFlow Lite FrontEnd updating dev docs (#17225)
* update with tflite

* Update index.md

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-27 13:58:25 +04:00
Anastasiia Pnevskaia
59e28f8d0d Disabled tests. (#17231) 2023-04-27 13:39:58 +04:00
Sebastian Golebiewski
40128cded1 update tuts (#17201) 2023-04-27 11:29:19 +02:00
Ryszard Jezierski
8005a3d0b0 Removed unneeded deprecated test code (#16939) 2023-04-26 23:53:10 +04:00
Ryszard Jezierski
561bf6d478 Removed deprecated parser tests (#17151) 2023-04-26 23:51:53 +04:00
Yuan Hu
cecd0e75a6 coverity Uninitialized scalar variable (#17182)
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-26 23:49:21 +04:00
Nesterov Alexander
dbaa1f0c0d [ARM CPU] Fix interpolate tests (#17171)
* fix interpolate bug

* fix interpolate bug - some tests

* fix interpolate bug - change init

* fix interpolate bug - shape fix

* fix interpolate bug - shape fix 2

* fix interpolate bug - add assert
2023-04-26 23:28:02 +04:00
Wilson Seok
03a428f50c [GPU] Fix remove redundant reorder to skip reorder fusing when sibling node doesn't support fused padding (#17041)
* initial fix

* add corresponding unit test

* skip reorder fusing when sibling node does not support fused padding

* fix data type of axis for win build

* Revert "fix data type of axis for win build"

This reverts commit 719ea75d7826aafc7bb94c1971586c33a9842f10.

* add static casting for win build
2023-04-26 16:53:23 +00:00
Sun Xiaoxia
7fc65ae3c5 fix threading test sporadic failure (#17230)
Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-26 20:27:49 +04:00
Maxim Vafin
10392644e3 [PT FE] Enable stable sort layer tests (#17229)
* [PT FE] Enable stable sort layer tests

* Remove unused code
2023-04-26 18:24:38 +02:00
Ivan Tikhonov
80519162ae Reduce the binary size of transformation lib (#17220)
* Replace opset with op version for TransposeSinking and SmartReshape transformations to reduce binary size

* replace opset with op version in some op_conversions transformations

* codestyle
2023-04-26 19:03:36 +04:00
Ekaterina Aidova
82ff7e17c9 use input parameter for building example_inputs (#17207)
* use input parameter for building example_inputs

* Update tools/mo/openvino/tools/mo/moc_frontend/pytorch_frontend_utils.py
2023-04-26 17:58:06 +04:00
Egor Duplenskii
f1bc402b38 [CPU] Pick fix for oneDNN v3.1 release (#17144) 2023-04-26 17:44:36 +04:00
Wang Wangwang
962df2cdcb [AUTO] Exclude other vendor's GPU device in default candidate list (#17063)
* [AUTO] Plugin takes only Intel dGPU as 1st priority

* Update test case

* Simplify the code

* Support more test cases in GetDeviceList API

* Add notIntelGPU to _deviceBlocklist in AUTO plugin

* Restore some code formats

* Update test cases

* Add some logs to GetValidDevice API

* Simplify the code

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-26 14:42:53 +01:00
Nikolay Shchegolev
c8ac7c9b82 [CPU] Infer_request crashes for SpaceToBatch operation. (#16974)
* [CPU] Infer_request crashes for SpaceToBatch operation.

* Fixes as per comments.

* Fixes as per comments 2.
2023-04-26 17:39:54 +04:00
Vladimir Paramuzov
6ed85178d5 [GPU] Fix layout propagation logic (#17199) 2023-04-26 14:20:48 +01:00
Edward Shogulin
14a14ecd76 [LPT] Precision restriction customization extending: tests (#17196)
* [LPT] Precision restriction customization extending

* comments fix: refactoring

* [LPT] Precision restriction customization extending: tests
2023-04-26 16:53:04 +04:00
Tomasz Adamowicz
546581bcce [Gna][coverity] fixes for issue type AUTO_CAUSES_COPY (#17192)
* [Gna][coverity] fixes for AUTO_CAUSES_COPY CID: 1491505, 1491595, 1502494, 1502500, 1504698, 1504769, 1507058

* update after review

* adding const specifier to auto where needed
2023-04-26 13:32:54 +01:00
Ilya Lavrenov
cfbfa18f34 Fixed WASM build in update docker container / new dependencies (#17224) 2023-04-26 16:32:36 +04:00
Edward Shogulin
e593cf8545 [LPT] Precision restriction customization extending (#17147)
* [LPT] Precision restriction customization extending

* comments fix: refactoring
2023-04-26 13:29:09 +01:00
Alexandra Sidorova
a032d67cc7 [CPU] Fixed enforcebf16 condition for transformation pipeline (#17157)
* [CPU] Fixed enforcebf16 condition for transformation pipeline

* [Snippets][CPU][Tests] Added test with bf16
2023-04-26 16:13:01 +04:00
Irina Efode
ca92eb96ad [CONFORMANCE] Fix Runner on Win (#17221) 2023-04-26 13:03:20 +01:00
Zlobin Vladimir
de30d8523d State single value is used (#15458)
Ticket EISW-60868
2023-04-26 14:50:03 +04:00
Ilya Lavrenov
da91b33763 ARM32 ACL kernels in oneDNN (#17142)
* ARM32 ACL kernels in oneDNN

* Fixed review comments

* Fixed ERF

* Disabled several eltwise tests on arm32
2023-04-26 13:50:10 +04:00
Vitaliy Urusovskij
02bfa7804b Add copyright (#17218) 2023-04-26 13:44:31 +04:00
Luwei Zhou
6cb6c5958a Fix the SDL issues. (#17107)
* Fix the SDL issues.

* Applied review comments.

* Update Slice test case to test non-const axis input.
2023-04-26 13:35:36 +04:00
Chenhu Wang
737864bdc7 [CPU] layout alignment to improve perf for interpolate pillow modes (#17079)
* infer planar layout with [1,2] axis as nhwc layout pass and kernel

* leftover comments apply

* comment apply
2023-04-26 11:33:17 +02:00
Ivan Tikhonov
95ca54d0ab Update ConstantFolding transformation to support Gather with dynamic input (#16973)
* ConstFold Gather op in case of dynamic dims in data input

* Update ConstantFolding transformation to support Gather with dynamic input; add test

* always mark ShapeOf nodes as can_be_folded

* add additional checks for fused_names in the gather test

---------

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-04-26 13:22:47 +04:00
Vladimir Paramuzov
ce5f65af14 [GPU] Use hash of test name for random generator initialization (#17213) 2023-04-26 12:52:38 +04:00
Ekaterina Aidova
6389f423bf [PT FE]: implement scaled dot product attention (#17178)
* [PT FE]: implement scaled dot product attention

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update src/frontends/pytorch/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-04-26 12:51:02 +04:00
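The operation implemented here follows the standard attention formula, softmax(Q Kᵀ / sqrt(d)) V — a reference NumPy sketch of that math (illustrative only, not the frontend's actual lowering):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # softmax(Q @ K^T / sqrt(d)) @ V over the last two dimensions
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With an all-zero query the attention weights are uniform, so each output row is simply the mean of the value rows — a handy sanity check.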
Ekaterina Aidova
5857c4438b [PT FE]: switch on tracing as main path if example inputs provided (#17194) 2023-04-26 12:50:43 +04:00
Eddy Kim
09265083ed [GPU] fixed a missing data type (#17200)
* fixed missing data type

* updated the resolution for better accuracy check
2023-04-26 08:28:18 +00:00
Roman Kazantsev
7cf9d109e8 [TF FE] Implement optimal conversion of body graphs (#17211)
* [TF FE] Implement optimal conversion of body graphs

Preliminary setting input shapes and types for body graph InputModel
provides more optimal conversion of body-graphs.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-26 12:12:54 +04:00
Maciej Smyk
5682e178dd DOCS shift to rst - Opsets D (#17205)
* Update Operations_specifications.md

* Update Divide_1.md

* Update DFT_7.md

* Update DetectionOutput_8.md

* Update DetectionOutput_1.md

* Update DetectionOutput_1.md

* Update DepthToSpace_1.md

* Update DeformablePSROIPooling_1.md

* Update DeformableConvolution_8.md

* Update DeformableConvolution_1.md

* Update DeformableConvolution_8.md

* fix

* fix

* Update DFT_7.md

* Update DFT_7.md

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-04-26 10:11:13 +02:00
Mateusz Tabaka
dfaa4e7bd6 Add ConvertSubtractWithConstant to MOCTransformations (#17058)
* Add ConvertSubtractWithConstant to MOCTransformations

Ticket: CVS-62419

* fix test_mo_import_from_memory tests

* move test file

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-26 11:37:42 +04:00
Mateusz Tabaka
da4316845f ConvMulFusion - handle ConvolutionBackpropData with 3 inputs (#17145)
* ConvMulFusion - handle ConvolutionBackpropData with 3 inputs

Ticket: 98769

* add using

* use compare functions
2023-04-26 11:37:31 +04:00
Sungeun Kim
3c485feea8 removed case to choose onednn impl for deconv (#17108)
- in_dt(f16) wei_dt(f16) out_dt(f32)
2023-04-26 13:20:11 +09:00
Egor Duplenskii
dabd5ee412 [CPU][TESTS] Fix cmake test dependencies (#17202)
Co-authored-by: Maksim Doronin <maksim.doronin@intel.com>
2023-04-26 01:17:12 +04:00
Gorokhov Dmitriy
edec7bb897 [CORE] Disable fp32->fp16 optimized constant conversion impl (#17189) 2023-04-25 15:50:24 +00:00
Maciej Smyk
72533a7da1 DOCS shift to rst - Quantizing Models with Accuracy Control, Documentation, Get Started & Learn OpenVINO (#16997)
* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* rst

* fixes
2023-04-25 16:06:34 +02:00
Maciej Smyk
49b5d039db DOCS shift to rst - Opsets B (#17169)
* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_5.md

* Update BatchToSpace_2.md

* Update BinaryConvolution_1.md

* Update Broadcast_1.md

* Update Broadcast_3.md

* Update Bucketize_3.md

* fix

* fix-2
2023-04-25 16:06:17 +02:00
Anastasiia Pnevskaia
acd424bb5e Show message with suggestion to try legacy FE in case of conversion error (#17088)
* Moved exception checks to _convert(), added suggestion to try legacy TF in case of conversion failure.

* Added test.

* Added send_conversion_result() method.

* Small correction.

* Update tools/mo/openvino/tools/mo/convert_impl.py

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Moved test_suggest_legacy_fe() test to check_info_messages_test.py.

* Removed not needed import.

* Small correction.

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-25 13:57:01 +00:00
Ilya Lavrenov
57d4ca27e6 Revert "Proper ACL version detection (#17152)" (#17206)
This reverts commit 1aec450fc6.
2023-04-25 17:36:18 +04:00
Przemyslaw Wysocki
923b6f297c [PyOV] Move environment markers to requirements.txt files (#17113)
* WIP

* WIP

* Debug

* WIP

* Expand function to other setup.pies

* Revert mxnet

* Update docstring

* restore defusedxml

* Update tools/mo/requirements.txt

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Code review

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-25 13:25:21 +00:00
Vladislav Golubev
a8278ba4a6 [LPT] FQ reference implementation reused in foldFakeQuantize function (#17096)
* [LPT] reused reference FQ implementation in fold_fake_quantize

* [LPT] Removed legacy parameters

* Added plugin tests with per-channel FQ for GrConv wo reshape

* Apply folding only in the case when FQ data input is constant

* EliminateFQ fix
2023-04-25 14:08:01 +01:00
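The reference FakeQuantize semantics being reused in foldFakeQuantize follow the operation's spec: clamp to the input range, quantize to `levels` discrete steps, then map into the output range. A sketch of that reference formula:

```python
import numpy as np

def fake_quantize(x, in_low, in_high, out_low, out_high, levels):
    # Reference FakeQuantize: clamp, quantize to (levels - 1) steps,
    # then rescale into [out_low, out_high].
    x = np.clip(x, in_low, in_high)
    q = np.round((x - in_low) / (in_high - in_low) * (levels - 1))
    return q / (levels - 1) * (out_high - out_low) + out_low
```

Folding applies this formula eagerly when the FQ data input is constant, which matches the condition added in this change.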
Aleksandr Voron
43a42fa9cd fix (#17179) 2023-04-25 16:50:37 +04:00
Evgenya Stepyreva
cd4c012f08 LogicalNot: convert precision (#17061)
* CVS-108362 LogicalNot: convert precision

* Test
2023-04-25 12:43:44 +00:00
Ilya Lavrenov
2e3deb8d8f Windows arm64 support for CPU plugin (#17075)
* ARM32 support

* ARM32 support

* Fixed packaging

* Windows arm64 support

* Updated submodule

* 32 bits support in Intel CPU plugin

* Fixed FindACL.cmake

* Enable proper conditional compilation for Windows ARM64

* Enable proper conditional compilation for Windows ARM64

* Updated submodule

* Updated submodule

* Updated submodule

* Updated submodule

* Updated submodule

* Added template_extension to CPU func tests dependencies

* Updated submodule

* Enabled runtime model tests

* Updated submodule

* Submodule update
2023-04-25 16:41:28 +04:00
Maxim Vafin
d423491bcb Fix Scatter value infer for fully dynamic value (#17165)
* Fix issue with dynamic Scatter in MO IR Reader

* Only normalize for 1D tensors

* Add test
2023-04-25 16:38:49 +04:00
Vitaliy Urusovskij
11a2b75161 Fix TSAN issue No2 in GNA plugin (#17185)
* Fix TSAN issue No2 in GNA plugin

* Misprint
2023-04-25 16:32:06 +04:00
Jan Iwaszkiewicz
512b186231 [PyOV] Enable group_convolution_backprop test (#17186) 2023-04-25 12:19:56 +00:00
Evgenya Stepyreva
ee4ccec190 TensorFlow Lite FrontEnd: documentation changes (#17187)
* First glance doc changes

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-04-25 16:18:24 +04:00
Oleg Pipikin
27210b6505 Fix Coverity issue #1505788 (#17173) 2023-04-25 16:13:42 +04:00
Oleg Pipikin
ab879f143c Add check to avoid out of bounds segfault in scatterNDupdate (#17066)
* Add check to avoid out of bounds segfault in scatterNDupdate

* Fix code style
2023-04-25 16:13:14 +04:00
Aleksandr Voron
6e11645018 [CPU] Add axis check to ACL Reduce isSupported method (#17188)
* fix

* fix2
2023-04-25 16:11:50 +04:00
Sergey Shlyapnikov
0a5975bdfa [GPU] Add real kernels' execution timings collection for DumpProfilingData debug option (#15797) 2023-04-25 14:33:08 +04:00
Ilya Lavrenov
1aec450fc6 Proper ACL version detection (#17152) 2023-04-25 14:05:52 +04:00
Sungeun Kim
8c09a128ac [GPU] update weights_layout for GroupConv 1d spatial (#17109)
* update weights_layout for GroupConv 1d spatial
2023-04-25 18:54:54 +09:00
Georgy Krivoruchko
3f07c8b48b [TF FE] Added MetaGraph file format (#16524)
* Separated SavedModelVariablesIndex class from Saved Model

* Renamed SavedModelVariablesIndex class

* Enabled Tensorflow MetaGraph

* Enabled Tensorflow MetaGraph

* Covered VariableV2 and Assign nodes

* Applied review comments

* Added tests

* Added names to input/output ports too

* Fixed naming for using with MO

* Applied part of review comments

* Renamed meta.cpp and saved_model.cpp

* Applied shared_ptr for memory management of PtrNode

* Fixing CI

* Prevent cycles while passing through the graph

* Released requirement for Checkpointable Object Graph

* Changed naming approach to align port order

* Changed renaming order (before reordering)

* Added a Placeholder translator which checks updated shape

* WA missing Identity name

* Fix CI and restored lost translators after rebase

* WA for output names

* Removing unused params after cutting a model

* Prevents crash in case VariableV2 appears in a frozen model

* Fixed saved model in case no variables.index is found but variables exist

* Changed approach for handling native formats support

* Aligned behavior with freezing .meta files

* Fixed behavior for cutting a model by input tensor

* Applied review comments
2023-04-25 13:46:06 +04:00
Maciej Kwapulinski
9c01de4b6e [GNA] fix: embedded export is available for embedded targets only (#17105)
* fix: embedded export is available for embedded targets only

* [GNA] functional tests fix - embedded export should NOT be possible on non-embedded target

* [GNA] tests added/justified to process both negative and positive path
2023-04-25 10:45:47 +01:00
Andrew Kwangwoong Park
72906ca242 [GPU] Fix i8/u8 representation error for clamp due to overflow (#17183)
* [GPU] Fix i8 representation error for clamp due to overflow

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix to not include in ocl code

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-25 09:41:01 +00:00
Ekaterina Aidova
39ed9a624f [PT FE]: extend batch norm to support training mode (#17040) 2023-04-25 11:27:00 +02:00
Vladimir Paramuzov
f736c71feb [GPU] Fix reshape split for dynamic models + accuracy fix for SAM (#16911) 2023-04-25 09:21:31 +00:00
Alexandra Sidorova
9247906879 [Snippets][CPU] Fixed coverity (#17094) 2023-04-25 09:12:58 +00:00
hyunback kim
19f8f5a3a7 [GPU] Disable oneDNN post-op Prelu in FC,gemm (#17084)
* [GPU] Disable oneDNN post-op Prelu

Only disable Prelu fusion in Fc, gemm
 - check additional data input

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-25 18:06:22 +09:00
Yuan Hu
2255bb25fd fix input issue of ScatterNDUpdate conformance test (#16406)
* fix input issue of ScatterNDUpdate conformance test

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix typo and optimize temporary variable

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-25 13:00:22 +04:00
Vladimir Paramuzov
ca1102b855 [GPU] Support MVN cases with axis=-1 w/o decomposition (#17020) 2023-04-25 12:59:03 +04:00
Katarzyna Mitrus
0617ce9089 Set ONNX opset in Reduce ops layer tests (#17170) 2023-04-25 10:38:56 +02:00
Ilya Lavrenov
22aee08958 Revert "[CPU] Fix data race in concurrent compile_model calls (#17164)" (#17184)
This reverts commit 8879ef53a7.
2023-04-25 12:01:02 +04:00
Nikita Malinin
e37288fbcc [POT] Added inference shape for in-place statistics (#17114)
* Added inference shape for inplace statistics

* Update graph_builder
2023-04-25 11:14:34 +04:00
Vitaliy Urusovskij
5533de5dd8 Fix TSAN issue in GNA plugin (#17163) 2023-04-25 10:33:06 +04:00
Aleksandr Voron
10f53cb40b [CPU] Force NCHW layout for ACL Interpolate executor (#17121)
* fix

* fix 2nd case
2023-04-25 10:05:15 +04:00
Alexandra Sidorova
4750523c81 [Snippets][CPU][Test] Allow tokenize MHA without machine dependency (#17064) 2023-04-25 09:40:11 +04:00
Egor Duplenskii
478725c719 [CPU] Reorganize function tests. Remove legacy bfloat16 tests (#17130) 2023-04-25 09:32:54 +04:00
Yuan Hu
e79db660ce [CPU]GroupConvolutionLayer CPU test for AMX (#13539) 2023-04-25 09:21:17 +04:00
Vladimir Paramuzov
d1f1fa2b39 [GPU] Enable broadcast transition pass (#17172) 2023-04-25 09:04:37 +04:00
Vladimir Paramuzov
3bb0fb61f6 [GPU] Support 8d tensors in activation and quantize primitives (#16947) 2023-04-25 09:02:54 +04:00
Sun Xiaoxia
6663367183 Xiaoxia/fix performance regression (#17036)
* add _streams_info_table in Executor config

* change useHyperThreading init value

* restore cmake

* fix comments

* add calling enableCpuPinning property

* fix judgment about number of sockets in init_stream

* fix test case compile issue

* fix ci test case fail issue

* modify GetPerformanceStreams calling position

* add affinity in get_cpu_pinning

* modify ecore judgement

* add no binding core on ADL

* fix ci issue, add get_num_numa_nodes()

* fix code style

* fix StreamsHasHigherPriority issue

* fix according to comments

* fix performance regression

* fix code style

* code style

* fix warning

* fix ci test failed

* fix ImportNetwork issue

* fix ci test case issue

* fix smoke_CachingSupportCase_CPU issue

* add ExportOptimalNumStreamsTest test

* modify test name

* modify ExportOptimalNumStreams test

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-04-25 04:35:47 +00:00
Chen Peter
28e54e75ea Update MULTI doc per current implementation (#17045)
* Update MULTI doc per current implementation

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Update the description of Multi-Device execution mode

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Remove sample code and video

1. Remove the sample code for removed behaviors
2. Remove the video to avoid confusion

Signed-off-by: Peter Chen <peter.chen@intel.com>

---------

Signed-off-by: Peter Chen <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-25 10:28:48 +08:00
Pawel Raasz
38a5ee719d Remove unused lambda capture (#17160) 2023-04-25 00:39:40 +00:00
Egor Duplenskii
8879ef53a7 [CPU] Fix data race in concurrent compile_model calls (#17164) 2023-04-25 00:01:03 +00:00
Anastasiia Pnevskaia
00847cba7d Fix of tf.GenericFunction conversion in convert_model() (#17125)
* Added GenericFunction support, fixed tf.Function test.

* Added test, added TF version checks.

* Small correction

* Removed Trackable type support.

* Small correction.
2023-04-24 22:57:56 +00:00
Taylor Yeonbok Lee
ce23ce00f1 [GPU] Fixed fused_primitive_desc to have -1 value for dep_start_idx (#17099)
* Fixed fused_primitive_desc to have -1 value for dep_start_idx

* Fixed dgpu i8 errors
2023-04-24 22:21:58 +00:00
Roman Kazantsev
3830125e3b [TF FE] Report the full list of unsupported operations (#17143) 2023-04-24 21:33:07 +00:00
Eddy Kim
d972a71b4c [GPU] Fixed the prepare_quantization pass to support grouped_weights_shape (#17093)
* fixed to support grouped_weights_shape

* added grouped_weights unit tests
2023-04-24 14:21:50 -07:00
Piotr Krzemiński
22a81e0e58 [PT FE] Enable stable tests for sort & argsort (#16415)
* [PT FE] Enable stable tests for sort & argsort

* Update test_argsort.py

* [PT FE] Update to opset11

* [PT FE] Remove redundant argument from argsort test

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-25 01:21:16 +04:00
Maksim Kutakov
9fce01f8cc [CPU] Remove legacy dynamic batch processing from the plugin (#17052)
* Intermediate state

* Remove old dyn batch path in the new api

* Remove legacy dyn batch support

* Remove dyn batch support field from the config

* Revert changes to the common part

* Revert accidental change in the test file

* Minor fixes

* Fix support for dyn batch without setting current

* Typo fix
2023-04-25 01:18:10 +04:00
Evgenya Stepyreva
758ec32001 CVS-108963 Coverity fixes (#17161) 2023-04-25 01:03:56 +04:00
yanlan song
64b5a4595a Bell/use cpu for dynamic models (#17149)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

* still use cpu for dynamic models

Signed-off-by: fishbell <bell.song@intel.com>

* merge master

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-25 01:01:11 +04:00
Jade Cho
5c21dcec4d [GPU] Fix detection output kernel build error on dGPU (#17150)
+ Check local memory size used in the kernel and choose proper kernel.
+ Select DO_STAGE_0_CAFFE instead of DO_STAGE_0_CAFFE_OPT
2023-04-25 01:00:26 +04:00
Vladislav Golubev
a6b1544acf Review comments applied (#17168) 2023-04-25 00:59:03 +04:00
Mateusz Mikolajczyk
8e5b0650a0 [PT FE] Fix for prim::Constant optional or containing list of tensors (#16754)
* Fix Constant list of tensor

* Write TorchScript transformation

* Handle Optional Tensor Constants

* Improve tests

* Add comments

* Try fix flake
2023-04-24 22:56:42 +02:00
Evgenya Stepyreva
b452dab8f0 TypeRelaxed<>::clone_with_new_inputs thread safety fix (#16881)
* TypeRelaxed<>::clone_with_new_inputs thread safety fix

* Style

* Make TypeRelaxed<BaseOp>::clone_with_new_inputs copy node the same way as copy ctor of ov::Node

* Removed mutex field from intel_cpu::GraphContext

* Removed all about has_type_relaxed_ops field from the snippets subgraph

* Clonning test
2023-04-25 00:51:18 +04:00
Ilya Lavrenov
83cc2277b4 Fixed compilation with sanitizer (#17175) 2023-04-25 00:44:16 +04:00
Alina Kladieva
f39ab0dbc9 Upper-bound for patchelf (#17177) 2023-04-24 19:52:55 +02:00
Wanglei Shen
10c56708fd update auto architecture document in GitHub for 2023.0 release (#17141)
* update auto architecture doc

* update auto architecture doc

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* update for comments

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-24 15:44:34 +00:00
Tomasz Adamowicz
86ed1e93b6 [Gna] [coverity]fixes (#17122)
* [Coverity] Fix: CID 1502468 - Not restoring ostream format

* [Coverity] Fix: CID 1502524 - Dereference null return value

* [Coverity] Fix: CID 1509007 - Uncaught exception

* [Coverity] Fix: CID 1505779, 1505781, 1505783 and 1505786 - Dereference null return value

* [Coverity] Fix: CID 1502503 - Using invalid iterator

* Revert "[Coverity] Fix: CID 1502524 - Dereference null return value"

This reverts commit b605a493ae.
2023-04-24 14:04:30 +01:00
Maksim Kutakov
f8522a6ea1 [CPU] Rnn weights repacking (#16992) 2023-04-24 15:48:57 +04:00
Vladislav Golubev
f410658d32 [LPT] AddTransformation fix (#17076)
* [LPT] AddTransformation: constants on 0's input support

* AddTransformation: new test instances

* codestyle
2023-04-24 12:15:01 +01:00
Edward Shogulin
a3f14366d9 [LPT] Extending EliminateFakeQuantize transformation (two interval boundaries) (#17140)
* [LPT] EliminateFakeQuantize extending

* tests

* folding quick fix
2023-04-24 11:58:00 +01:00
Ilya Lavrenov
a34ef680f2 Made plugins.hpp generation to be CONFIG dependent (#17139) 2023-04-24 14:48:45 +04:00
Vladimir Paramuzov
faba5fb71e [Transformations] Add threshold for const comparison in Gelu fusion pass to fuse with fp16 precision (#17042) 2023-04-24 14:37:31 +04:00
Vladimir Paramuzov
e8ae1e41ea [GPU] Skip FC fake alignment for some vector by matrix multiplications (#17051) 2023-04-24 14:34:50 +04:00
dependabot[bot]
eac265722f Update networkx requirement from <=2.8.8 to <=3.1 in /tools/pot (#16745)
Updates the requirements on [networkx](https://github.com/networkx/networkx) to permit the latest version.
- [Release notes](https://github.com/networkx/networkx/releases)
- [Commits](https://github.com/networkx/networkx/compare/networkx-0.23...networkx-3.1)

---
updated-dependencies:
- dependency-name: networkx
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-24 13:37:35 +04:00
hyunback kim
63f5c2f0e7 [GPU] Fix levit-128s accuracy issue (#17136)
* [GPU] Fix levit-128s accuracy issue

Wrong batch dims for fused eltwise of gemm.
-> The issue is an incorrect batch size for the fused eltwise used by gemm.
     Its rank differs from the src tensor; the eltwise tensor rank was reduced by mistake.
     It only reproduces with batch 1 and a full tensor.
     The batch size here means all non-spatial dims, but the previous implementation used the default batch dim role.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-24 18:16:00 +09:00
Pavel Esir
6ff0cad127 Fix mixed precision inference for quantized IRs (#16785)
* disable mixed precision inference for quantized IRs

* typo fix

* improved solution, disable mixed precision in quantized IRs selectively only for float nodes

* minor typos correction

* added unit-tests

* renamed rt_info

* updated list of nodes for which FQ is propagated; updated unit-tests

* fix failing build
2023-04-24 13:13:04 +04:00
Maxim Vafin
01065338ef Fix MO IR Reader extender for StridedSlice to support empty begin and end masks (#17019) 2023-04-24 13:08:28 +04:00
Tatiana Savina
aa5b6ecac2 DOCS shift to rst - Opset S (#17158)
* ops to rst

* fix errors

* formula fix

* change code

* console directive

* vsplit try highlight

* fix code snippets

* comment fixes

* fix list
2023-04-24 11:02:30 +02:00
Tatiana Savina
b3ea6ceefa DOCS shift to rst - Opset R (#17159)
* ops to rst

* sphinx transition

* try html tag

* try comment

* try code directive

* try code directive

* try highlight

* try console directive

* try line directive

* add highlight for code

* another directive

* introduce console directive

* add code format
2023-04-24 11:02:09 +02:00
Fang Xu
656d7fe380 prebuilt oneTBB binaries for ARM64 (#16904)
* use oneTBB for arm64

* force THREADING=TBB

* test: remove TBB_DIR for linux arm64

* update linux and mac arm64 packages

* update SHA256

* add comment

* disable add_rpath for tbb libraries on mac arm64

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-04-24 09:48:47 +04:00
Daniil Lyakhov
7997354359 POT is deprecated (#16758) 2023-04-24 09:37:57 +04:00
Vladimir Paramuzov
219a0eebdc [GPU] Fix 1d onednn convolutions (#17038) 2023-04-24 09:24:56 +04:00
Min, Byungil
bb0be3c177 [GPU] Resolve failed onednn tests (#16990)
* [GPU] Resolve failed unit-tests on dGPU

+ Modified unit-tests of asymmetric conv with per-channel (WA for oneDNN issue)
+ Modified conv unit-tests with padded input or output
+ For testing oneDNN conv, it needs to query oneDNN about format. Applied this to conv tests.
+ Modified accuracy checking logic in unit-tests which have different format on dGPU.
+ reorder from fsv16 to bfyx should not be optimized out if not aligned by 16

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-24 14:11:35 +09:00
Ilya Lavrenov
11c3623ebb Fixed compilation errors on Linux arm64 (#17138) 2023-04-23 21:34:37 +04:00
yanlan song
fed06fcb91 resubmit PR#17006 (#17137)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

* WA for build issue on Ubuntu 20.04

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-23 11:56:07 +00:00
Gorokhov Dmitriy
2c450ced24 [CPU] Fixed JIT Reorder impl on Apple targets (#17134) 2023-04-23 01:09:03 +04:00
Ilya Lavrenov
26029c2d48 Enabled runtime model tests (#17131) 2023-04-22 11:07:36 +04:00
Ilya Lavrenov
462cdb54f8 Enabled convolution_backprop_quantize_type CPU tests on non-x64 (#17123) 2023-04-22 01:45:14 +04:00
Ilya Lavrenov
46f8ebfaec Revert "Fix C API unit test case error (#17012)" (#17128)
This reverts commit 63c0089128.
2023-04-22 01:44:34 +04:00
Ilya Lavrenov
fbc28297ec Enabled C-API tests on ARM platform (#17119)
* Enabled C-API tests on ARM platform

* Fixed ARM CPU plugin test on streams
2023-04-21 22:55:18 +04:00
Ilya Lavrenov
d7b775f583 Updated onednn submodule (#17126) 2023-04-21 20:47:37 +04:00
Aleksandr Voron
e31b00c299 [CPU] Enable Python test test_infer_request.test_infer_mixed_values with bool for ARM (#17111)
* Update test_infer_request.py

* enable all py tests
2023-04-21 19:52:32 +04:00
Anastasiia Pnevskaia
50a6c88ea3 Fix of crashes of convert_model() when executed for different frameworks (#16968)
* Fix of class conflicts in different frameworks.

* Remove commented code.

* Moved FakeQuantWithMinMaxVars to common part.

* Fixed BOM package test.

* Removed not needed code.

* Removed not needed code.
2023-04-21 19:29:38 +04:00
Maksim Kutakov
793bbb6ee2 Remove dyn batch support from onednn i8 ref conv (#17106) 2023-04-21 17:44:00 +04:00
Jan Iwaszkiewicz
88cb428763 [PyOV][DOCS] Added Python advanced inference documentation (#17090)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-21 15:22:33 +02:00
Maciej Smyk
c4b155edc2 DOCS shift to rst - Opsets C (#17112) 2023-04-21 13:30:07 +02:00
yanlan song
304991f88b Revert "Clean up unused code (#17006)" (#17110)
This reverts commit 359b444558.
2023-04-21 15:26:01 +04:00
Tomasz Dołbniak
6ea9cc7149 ONNX FE - model loading fix (#17091)
* Path retrieval fix

* More detailed messages in the failing test

* Exe path with model name

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-21 15:25:26 +04:00
Jade Cho
8fbd78fb07 [GPU] Fix a bug of fusing eltwise sum post-op. (#17078)
+ When the input of eltwise is a full-tensor constant layer, use binary add
instead of sum as the post-op on oneDNN.
2023-04-21 20:17:35 +09:00
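The distinction behind this fix: oneDNN's sum post-op accumulates in place into the destination buffer, while a binary add post-op reads a separate input tensor; for a full-tensor constant input both compute the same elementwise addition. A minimal NumPy sketch of that numerical equivalence (illustrative only, not the oneDNN API):

```python
import numpy as np

conv_out = np.array([[1.0, 2.0], [3.0, 4.0]])  # stand-in for a convolution result
const_in = np.full((2, 2), 0.5)                # full-tensor constant eltwise input

# sum post-op: accumulate into the destination buffer in place
dst = conv_out.copy()
dst += const_in

# binary add post-op: read the addend from its own buffer
via_binary = conv_out + const_in

assert np.allclose(dst, via_binary)
```

Since the results match, the choice between the two post-ops is about buffer semantics (in-place accumulation vs a distinct constant input), which is what matters when the second input is a constant layer rather than a runtime tensor.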
Nesterov Alexander
6ad80576b7 [ARM CPU] Fix smoke_if tests (#17095)
* fix smoke if

* fix smoke if - arm32

* review fix
2023-04-21 14:45:22 +04:00
HARI CHAND BALASUBRAMANIAM
6b44902bf2 Update bug.md (#16880)
Update the OpenVINO GitHub issue submission template to allow the submitter to provide more information when submitting an issue.
2023-04-21 02:27:39 -07:00
Sun Xiaoxia
b22d0641cb fix streams is not correct by latency mode (#17101) 2023-04-21 09:21:14 +01:00
Yury Gaydaychuk
4ae7e1ff61 [CPU] Commit slider: safe file opening (#16755) 2023-04-21 11:42:42 +04:00
Vladislav Golubev
31efdfd00d [Transformations] BroadcastTransition transformation (#16861) 2023-04-21 11:35:04 +04:00
Chen Xu
70d80a750f [CPU] Reduce node asymmetrical precision optimization (#16829) 2023-04-21 11:00:16 +04:00
Mingyu Kim
ba23e2290e [GPU] Choose onednn impl for reorder (#17077)
* [GPU] Choose onednn impl for reorder
* [GPU] Add unit test
2023-04-21 13:56:58 +09:00
yanlan song
359b444558 Clean up unused code (#17006)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-21 04:23:55 +00:00
Sun Xiaoxia
c186ffdf0d Xiaoxia/stream process refactor (#16692)
* add _streams_info_table in Executor config

* change useHyperThreading init value

* restore cmake

* fix comments

* add calling enableCpuPinning property

* fix judgment about number of sockets in init_stream

* fix test case compile issue

* fix ci test case fail issue

* modify GetPerformanceStreams calling position

* add affinity in get_cpu_pinning

* modify ecore judgement

* add no binding core on ADL

* fix ci issue, add get_num_numa_nodes()

* fix code style

* fix StreamsHasHigherPriority issue

* fix according to comments

* merge master

* fix build issue

* fix template plugin test case failed issue

* fix build issue

* fix cpu test failed

* Update plugin.cpp

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-21 01:38:32 +00:00
hyunback kim
344db564fc [GPU] Fix dump graph failure issue in levit-128s model. (#17055)
* [GPU] Fix dump_graph failure issue in levit-128s model.

1. to_string() in strided_slice always accesses begin/end/stride param id from dependencies
    regardless of max dependencies.
2. Add an exception in dump_full_node(). It helps as follows:
   - Avoids a dump failure. Graph dumps are usually used during debugging,
      so this reduces unnecessary debugging time caused by graph dump failures.
   - You can immediately see which node has failed, making it easy to find.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-21 09:14:47 +09:00
Wanglei Shen
14d4fcf827 enable smoke_SetConfigAffinity for ARM (#17092) 2023-04-20 20:25:35 +01:00
Anastasia Kuporosova
a8b5ccc03f [PyOV] Check for glibc version in python test (#17081)
* [PyOV] Check for glibc version in python test

* fix for no glibc
2023-04-20 19:28:55 +04:00
Karol Blaszczak
0c12ee6015 [DOCS] fix for copyright and trademark glyphs (#17021) 2023-04-20 14:11:16 +02:00
Karol Blaszczak
dcfa1f6881 [DOCS] bring back conda guide 23.0 (#17031) 2023-04-20 14:09:07 +02:00
Wanglei Shen
70e0eed075 update default affinity for macOS (#17080) 2023-04-20 11:50:04 +00:00
Mateusz Bencer
77a5d1aa03 [ONNX FE] Fixed handling duplicates during graph extraction (#17071) 2023-04-20 11:10:09 +00:00
Vladislav Golubev
f100c36ac9 [LPT] Revert changes in fold_reshape (#17068) 2023-04-20 11:43:59 +01:00
Yuan Hu
e53fc86988 [CPU] [Coverity] fix Uninitialized issue in node mvn (#16980)
* fix uninit issue in node mvn

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* Revert "fix uninit issue in node mvn"

This reverts commit 45e68725f3.

* fix Uninitialized issue in MVNAttrs ctor

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-20 12:34:49 +02:00
Yuan Hu
bef25ddf43 [CPU] resubmit pr for optimize shape infer of Reshape (#16942)
* Revert "Revert "[CPU] optimize shape infer of Reshape (#16537)" (#16703)"

This reverts commit 06cacfe2a7.

* fix reshape connect with nonzero issue

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add nonzero connect with reshape testcase

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add debug code

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix test case issue

fix shape_nonzero test case issue
fix a bug in the original test case

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* Revert "add debug code"

This reverts commit c305464c8c.

* fix other review comments except test case

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-20 12:34:21 +02:00
Maksim Kutakov
70c3979602 [CPU] Execute constants in order with the create primitives calls (#16795) 2023-04-20 14:22:57 +04:00
Mikhail Ryzhov
0f7e6de346 [GNA] WA to fix config parsing of scale factor map (#17060)
* WA to fix config parsing

* clang fix

* excluded json
2023-04-20 10:51:23 +01:00
Maciej Smyk
7d574e3114 DOCS shift to rst - Opsets (#17059) 2023-04-20 10:59:35 +02:00
Maxim Vafin
552143c9cd [MO] Fix Interpolate-11 in MO (#17002)
* Fix Interpolate-11 in MO

* Add forgotten file

* Fix output type of TopK-11

* Do not force precision on port 1 for mode scales

* Update tools/mo/openvino/tools/mo/ops/interpolate.py

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-20 09:51:38 +02:00
Anastasiia Pnevskaia
5026aa044a Removed naming of inputs in MO Python API PyTorch tests. (#17070)
* Removed naming of inputs in MO Python API PyTorch tests.

* Fixed copying of data.

* Small correction.

* Small correction.

* Small fix.
2023-04-20 11:49:45 +04:00
Marcin Kusmierski
4e6a129672 [GNA] Fix tests configuration to ensure that 3_5 target is tested too (#17046) 2023-04-20 09:18:36 +02:00
Nesterov Alexander
d00731c0ab [ARM CPU] Fix tests for eltwise layer (#16917) 2023-04-20 09:57:29 +04:00
Taylor Yeonbok Lee
5bded05ae6 [GPU] Improve shape infer performance (#17039)
* [Dynamic shape] Improve shape infer performance for igpu by preventing copy from usm_device to usm host from lock()

* Fixed is_shape_infer_dep to use pointer instead of unique_id because unique_id may not be set
2023-04-20 03:23:52 +00:00
Tomasz Dołbniak
1bd9a1e01c Passing tests re-enabled (#17067) 2023-04-20 01:55:42 +01:00
Ekaterina Aidova
f9fbcbe419 update omz submodule (#16986) 2023-04-20 03:53:39 +04:00
Ilya Churaev
71880aadd3 Deprecate set batch method (#17057)
* Deprecate set batch method

* Fixed some errors

* Suppress warning in tests

* Fixed warning in GPU

* Deprecate python
2023-04-19 20:21:18 +00:00
Ilya Lavrenov
1ec22a3180 32 bits support in Intel CPU plugin (#16900) 2023-04-19 22:10:20 +04:00
Eddy Kim
fab8236af3 [GPU] Fixed OneDNN fc+sum fusion serialization (#16988)
* fixed onednn fc+sum fusion serialization

* removed the white list for sum post op fusion

* added deconv fusing caching tests
2023-04-19 09:43:27 -07:00
Pawel Raasz
4c3a4a8992 Correct inf bound check for 32-bit in shape infer (#17047) 2023-04-19 19:33:01 +04:00
Nesterov Alexander
3d33cb2b43 [ARM CPU] Fix eltwise op tests (Divide) (#17029)
* update skip list

* skip change

* fix divide

* review fixes

* review fixes #2
2023-04-19 18:52:09 +04:00
Egor Duplenskii
39f843fb78 [CPU] Move to oneDNN 3.1 release version (#16721) 2023-04-19 18:26:30 +04:00
Tomasz Dołbniak
d230ad9313 Interpolate op cleanup (#17026) 2023-04-19 15:47:29 +02:00
Evgenya Stepyreva
497a19edf6 CVS-102308 Read tflite model to vector (#17048) 2023-04-19 13:27:41 +00:00
Pawel Raasz
d7083fb4db Improve slice and strided slice shape inference (#16940)
when start, stop are interval values
2023-04-19 16:20:29 +04:00
Vitaliy Urusovskij
a611104b12 FQ tests leftovers (#17009)
* Try to return skipped test after FQ fix

* Copy FQ broadcast case from CPU to TEMPL tests

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-19 12:32:44 +01:00
Tatiana Savina
921bebc1ec change ov version (#17056) 2023-04-19 11:28:41 +00:00
Mateusz Tabaka
7338257e00 Fix transformations tests on 32 bit build (#17043)
Ticket: 104593
2023-04-19 11:28:00 +00:00
Artyom Anokhov
bb6a3251a8 README.md: Added Conda Badge (#17025)
* README.md: Added Conda Badge

* README: Moved Conda badge after PyPI status
2023-04-19 12:35:31 +02:00
Egor Duplenskii
4ce5548c9a [GNA] fix compilation warning (#17027)
Which becomes an error with '-Werror'
2023-04-19 10:00:24 +00:00
Marcin Kusmierski
90b485715a [GNA] Fix tests failing due to dependency to CI environment state (#17007) 2023-04-19 11:42:15 +02:00
Vladislav Golubev
00a4fc514c Review comments applied (#16856) 2023-04-19 10:11:47 +01:00
Szymon Irzabek
a8c7c19cb9 [GNA] Fix channel multiplier calculation (#17010) 2023-04-19 11:01:27 +02:00
Xuejun Zhai
63c0089128 Fix C API unit test case error (#17012)
* Fix C API unit test case error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix test error with relative path

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
Co-authored-by: River Li <river.li@intel.com>
2023-04-19 11:26:12 +04:00
Chenhu Wang
34b3abc0e2 [CPU][Snippets]fix candidate merged node's subgraph inputs have common subgraph input (#16249) 2023-04-19 11:12:52 +04:00
Tingqian Li
1525f6cc16 [CPU] WA: Stop fusing per-OC eltwise into Matmul with input rank >4 (#16824) 2023-04-19 11:11:04 +04:00
Vladimir Paramuzov
dbd20ec799 [GPU] Added try/catch for device detection loop to skip platforms which throw an exception (#17011) 2023-04-19 11:05:24 +04:00
Chenhu Wang
498486588e [CPU]interpolate-11 support (#16698) 2023-04-19 11:05:09 +04:00
Ilya Churaev
ca0b30c082 Added components relationships on architecture page (#17037) 2023-04-19 10:51:23 +04:00
Shen, Wanglei
626caf7f2a update file location for 2023.0 release (#17034) 2023-04-19 10:38:23 +04:00
Wilson Seok
2401b0aa3c [GPU] Skip reorder_node_to_split to avoid change of input data type for onednn kernel support (#16827)
* skip reorder_node_to_split when new input data type of onednn kernel is not supported
* update layout_optimizer and add unit test
2023-04-19 15:00:55 +09:00
Marcin Kusmierski
1281074e15 [GNA] Fix for GNA 3_5 fixing tests after review (#16954)
* [GNA] Fix review comments for Convolution2DLayer tests

* [GNA] fix review comments for smoke_ConvolutionPoolingStrideNotEqualWindowTest_Above

* [GNA] Fix review comments to GNAPWLExtraSegmentsTestFixture

* [GNA] Fix review comments to smoke_LSTMCellBasicCommon
2023-04-19 07:31:34 +02:00
Kelvin Choi
bd8ca523b9 [GPU] Fix proposal sort condition (#16981) 2023-04-18 21:05:32 -07:00
Ilya Lavrenov
3ad3a90e98 Enabled several arm64 tests (#17032) 2023-04-19 02:35:32 +04:00
Anastasia Kuporosova
9f250edc7f [PyOV] use generator in multi config (#17004)
* [PyOV] use generator in multi config

* use ov

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-18 22:04:22 +00:00
Maksim Kutakov
38d97709d1 [CPU] Remove allocation by the upper bound (#16666) 2023-04-19 00:25:58 +04:00
Maksim Kutakov
531b5a3657 [CPU] Optimize TBB usage in the parallel dynamic shapes processing (#16517) 2023-04-19 00:25:03 +04:00
Aleksandr Voron
d4ac0b0e79 MultipleLSTMCellTest fix (#17015) 2023-04-18 23:27:45 +04:00
Anastasiia Pnevskaia
078f28911b Fixed parsing of 'layout' param (#16999)
* Fixed layout parsing.

* Small correction.

* Removed wrong change.
2023-04-18 22:43:38 +04:00
Roman Kazantsev
e93c8e1b1c [TF FE] Skip one Keras ConvLSTM2D test (#17028)
* [TF FE] Mark one Keras ConvLSTM2D test with xfail

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Change to skip

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-18 22:28:30 +04:00
Ilya Lavrenov
d5cc696e00 Removed contrib repo usage from Linux ARM64 Azure Pipeline (#17016)
* Removed contrib repo usage from Linux ARM64

* Removed contrib repo usage from Linux ARM64
2023-04-18 21:33:49 +04:00
Ilya Churaev
566ef01a3f Remove constructors for ov Exceptions (#16938)
* Remove constructors for ov Exceptions

* Fixed linux build

* Fixed ONNX Frontend

* Fixed paddle

* Fixed exceptions in tests

* Deprecate constructors for ov::Exception

* Suppress some warnings

* Merge several exceptions

* Some small changes

* Suppress more warnings

* More warnings

* More warnings

* Suppress more warnings

* More warnings
2023-04-18 21:02:26 +04:00
Mateusz Mikolajczyk
441dad2eea Fix bug with reshape on empty tensor (#17014)
* Fix empty tensor reshape

* Add test
2023-04-18 20:56:03 +04:00
Vladislav Golubev
e6341917cd [LPT] PullReshapeThroughDequantization transformation fix (#16395)
* PullReshapeThroughDequantization fix

* Added a test-case
2023-04-18 15:22:31 +01:00
Katarzyna Mitrus
2a5c69abc6 [ONNX FE] Fix ONNX DequantizeLinear-13 import dynamic shape (#16966) 2023-04-18 13:03:55 +02:00
Marcin Kusmierski
d5123056bb [GNA] Fix issues with GNA 3.5 - Fix pooling for Convolution1D and Convolution2D (#16734)
* [GNA] Fix 1D Pooling realized as part of 2D Convolution

* [GNA] Fix pooling for GNA_SW_FP32 mode when fused with Convolution2d

* [GNA] Fix ConvolutionPoolingStrideNotEqualWindowTest tests for 3_5
2023-04-18 11:41:04 +02:00
Tatiana Savina
e3fdfc4e09 DOCS shift to rst Plugin updated (#17000)
* shift to rst

* test snippets

* test build fixes

* change code block

* test new path

* change path

* add cancel

* change note format

* add docs

* change path to snippet

* change path to snippet

* change list format

* fix list

* fix snippets path

* fix format

* fix lists

* fix snippet

* compiled model doc fix

* change indentation

* small fixes to format
2023-04-18 10:59:15 +02:00
Mikhail Ryzhov
f97eeb59d5 [GNA] Fixed cases when FQ is not the 1st layer (#16602)
* Fixed cases when FQ is not the 1st layer

* clang formatted

* Added support of Gather
2023-04-18 10:43:31 +02:00
Pavel Esir
d70d8509c3 [FP16][IE] exclude MVN and NormalizeL2 from precision sensitive marking (#16953)
* exclude MVN from mixed infer

* fix align_mixed_fp32_fp16_types_test.cpp

* fix unit-tests for convert_precision.cpp

* code style fix
2023-04-18 16:20:49 +09:00
Pawel Raasz
3494edeed2 Fix Cast util functor when cast from floating point to integer (#16959)
* Fix cast to helper from floating point to integer
when floating value is out-of-range of integer

* Fix negative float cast if outside integer range
2023-04-18 07:29:31 +04:00
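The out-of-range cast issue fixed above can be illustrated with a minimal, hypothetical Python sketch of a saturating cast (not the actual OpenVINO helper):

```python
def saturate_cast_i8(x: float) -> int:
    # Clamp to the int8 range before converting, so out-of-range
    # floats map to the nearest representable integer instead of
    # producing implementation-defined results.
    return max(-128, min(127, round(x)))

print(saturate_cast_i8(300.7))   # clamps above the i8 maximum
print(saturate_cast_i8(-1e9))    # clamps below the i8 minimum
```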
Min, Byungil
bf2870a63b [GPU] Resolved failed unit-tests (#16618)
+ Resolved issues related to deconv
+ Modified test-cases for conv, fc.
+ In fc unit-tests, tiny tensors showed unexpected behavior. Modified tensor size a little
+ Bugfix in get_test_stream

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-18 11:22:43 +09:00
Ilya Lavrenov
d15cdc81cd Fixed multi-config generators (#17003) 2023-04-18 02:44:38 +04:00
Shen, Wanglei
3f9cc0112a Hot Fix: use all small cores as Ecores (#16978)
* use all small cores as Ecores

* add test case
2023-04-18 00:06:36 +04:00
Ilya Lavrenov
adc733f1e9 Enabled several ARM CPU tests (#16995)
* Enabled several ARM CPU tests

* Removed not-valid tests

* Fixed several template plugin tests

* Removed non-working suppressions

* Disabled 2 tests on ARM CPU
2023-04-17 22:44:43 +04:00
Egor Duplenskii
e52445dda4 [CPU] Clean up temporary debug toggles (#16972) 2023-04-17 22:00:37 +04:00
Sofya Balandina
c14d0d7389 [conformance] Fix iteration of ops_list when recalculating tests counters (#16993) 2023-04-17 21:57:56 +04:00
Roman Kazantsev
ae06322cb7 [TF FE] Correct layer test for ConvLSTM2D and add to the pre-commit (#16996) 2023-04-17 17:54:19 +00:00
Mikhail Ryzhov
14f38bfde8 [GNA] Reverted internal overload correction (#16962)
* reverted overload correction

* added comment

* Enabled tests

* Revert merge error

This reverts commit daed290452.
2023-04-17 17:39:58 +00:00
Ivan Tikhonov
930441b223 TransposeSinking: Gather and ReverseSequence (#16532)
* Resolve the performance issues in TransposeSinking transformation

* codestyle

* fix warning as error, fix tests failures

* fix ts for Concat and Reduce

* Fix TransposeReduceBackward

* fix the issue in TransposeFuse transformation

* fix TransposeReduce transformations

* Fix TransposeReduction, fix TransposeSinkingSplit, add unsqueeze support

* delete debug print

* Add additional validations

* fix node validation

* Fix validate for split, revert changes for concat, add BatchToSpace/SpaceToBatch

* Add SpaceToBatch/BatchToSpace

* fix TS for Interpolate + codestyle

* fix gna build

* Support TS for Interpolate, VariadicSplit, IsInf, IsNan, IsFinite + refactoring

* add the missed line

* add include

* TransposeSinking tests refactoring: part1

* TransposeSinking tests refactoring: part2

* Add limited support for StridedSlice op

* codestyle

* TransposeReduction: skip the case when 2nd input for Squeeze is not provided

* Transpose sinking tests refactoring: part 3. + Revert changes in MOC.

* fix build

* codestyle

* Add tests for TS backward transformations, update TransposeSinkingFuse transformation, delete StridedSlice transformation prototype + tests refactoring

* fix unary tests

* Fix warning as error on Windows

* Add new tests for Unsqueeze/Squeeze; refactoring; remove debug code

* TransposeSinking: add support for Slice op

* Add descriptions to the transformations, add additional checks

* fix a warning

* TransposeSinking Refactoring part2: move the transformations to a separate folder, align namespaces

* TransposeSinking refactoring: class names, namespaces

* codestyle

* resolve merge conflicts

* codestyle

* TSReduction refactoring, move Unsqueeze/Squeeze transformations to separate files, added limited support for Reshape op + tests

* fix minor mistakes

* fix warnings

* Added TSSlice transformation to TSGeneral, created TransposeSinkingGeneral alias in ov::pass namespace

* refactoring

* codestyle

* fix TSSqueeze/TSUnsqueeze transformations

* delete debug serialize

* remove TransposeSinking from MOC

* fix TSSqueeze/TSUnsqueeze transformations in case of Reshape op

* delete debug code

* fix unit tests, revert changes for TSSlice transformation

* TransposeSinking: Add gather support

* TransposeSinking: add support for Gather, ReverseSequence ops; Fix TSReduction, TSSqueeze, TSUnsqueeze transformations

* fix new constants shape

* fix TSReduction, TSSqueeze, TSUnsqueeze transformations; codestyle

* fix TSGather

* Fix TSGather transformation, add tests

* Updated TSGather transformation, updated the tests

* fix TSGather, codestyle

* Add missing files for TS testing

* fix TS for ReverseSequence op; codestyle

* revert local changes

* fix warnings

* delete const folding passes

* disable constant folding for shapeOf subgraph only

* correct thirdparty versions

* codestyle
2023-04-17 16:38:48 +00:00
Ilya Lavrenov
f4fe8400a7 Generic ARM fixes (#16994) 2023-04-17 20:37:10 +04:00
Anastasia Kuporosova
f9098cd67c [PyOV] Mark add_openvino_libs as internal (#16971)
* [PyOV] Mark add_openvino_libs as internal

* fix flake8
2023-04-17 17:34:13 +01:00
Vitaliy Urusovskij
47f0d72f02 Fix broadcasting issue in FQ ref implementation (#16812) 2023-04-17 20:33:07 +04:00
Aleksandr Voron
496a608a28 [CPU] ReduceMean fix for ACL Executor (#16987)
* reduce fix

* enable gru, rnn and lstm tests
2023-04-17 19:17:50 +04:00
Karol Blaszczak
1471a6e8de [DOCS] benchmarks new page (#16620) 2023-04-17 16:43:57 +02:00
Ilya Churaev
25826bfe7d Added deprecation of nv12 legacy API (#16982)
* Added deprecation of nv12 legacy API

* Added new files

* Change macros

* Suppress warnings for preprocessing

* Suppress warnings in tests

* Suppress warnings for Windows
2023-04-17 14:13:43 +00:00
Anastasiia Pnevskaia
dc2fa65224 Support of unnamed saved_model_dir in MO Python API (#16542)
* Added support of unnamed saved_model_dir.

* Switch TF2 layer tests for unnamed saved_model_dir.

* Added test.

* Correction of comment.

* Removed unnecessary pytest mark.

* Code correction, added comment.
2023-04-17 17:20:27 +04:00
Ilya Lavrenov
4a997de4a3 Disabled failed ARM CPU tests (#16989) 2023-04-17 15:34:56 +04:00
Shen, Wanglei
98393c0da1 update number of threads per stream on Ecore to 2 for aggressive model on hybrid platform (#16857)
* update number of threads per stream on Ecore to 2 when aggressive model runs on hybrid platform

* update for corner case and add test case
2023-04-17 18:42:55 +08:00
Jan Iwaszkiewicz
816c0f76e2 [PyOV] Deprecate PerformanceMode.UNDEFINED and refactor deprecation (#16965) 2023-04-17 12:38:28 +02:00
bstankix
7c41d78b5d Add OVMS benchmarks (#16984)
* Add ovms support for Graph Builder

* Add new OVMS dataset
2023-04-17 12:27:58 +02:00
Katarzyna Mitrus
834e611bde [Interpolate-11] Additional tests for Interpolate-11 reference implementation (#16956)
* bf precision tests

* i32 prec tests

* Default axes test

* Add f16 prec tests

* i8 prec tests

* Update eval types in the new file
2023-04-17 11:39:09 +02:00
Przemyslaw Wysocki
9ca85eb363 [PyOV] Update docs with Python 3.11 (#16366) 2023-04-17 11:33:15 +02:00
Przemyslaw Wysocki
d72d833a96 [PyOV] Enable Python 3.11 (#15144)
* Bump ONNX version

* Bump protobuf

* Add xfails and skips

* Add tickets

* Skip ONNX Serialization tests

* Compile ONNX with C++17

* Force cpp17 - 2

* Use MSVC check

* Relax python reqs, enable 311 in azure

* Fix setupvars error

* Ignore watchdog error

* Update tensorflow

* Minor change

* Bump onnx to 1.13.1

* Bump protobuf to 3.20.3

* Debug test tf

* Xfail tests in comp

* Update comp tests

* Update tf reqs

* Remove deprecated ONNX function

* Align PDPD FE protobuf req with 2.4.1

* Satisfy dependency review

* Attempt to fix dependency review

* Revert pdpd protobuf

* Skip pdpd tests

* Fix MO-TF-PB test

* Skip TF test case

* Enable py311 on rest of jobs

* Try disabling pdpd req

* Exclude pdpd form cmake

* Update .ci/azure/linux.yml

Fixed unmerged merge-conflict

* CR

* Fix reqs

* Skip pdpd tests

* Disable pdpd tests building in cmake

* Skip another pdpd cmake

* Add file

* Add paddle constraint to tests

* Disable paddle reqs

* Debug prints

* Skip TF test if Python ver is 3.11

* Apply Mish cr comments

* Debug

* Debug

* Constrain tensorflow_addons

* Fix pdpd skipping

* Add debug prints

* Update skips

* Remove prints

* Minor change

* Update OMZ commit

* Fix some tests

* Minor change

* Disable pdpd at all

* Disable pdpd at all

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-17 13:30:17 +04:00
Wang Wangwang
589bd6d076 Add the implementation to GetExecGraphInfo API in AUTO plugin (#16979) 2023-04-17 17:24:07 +08:00
Ilya Churaev
aa1f26a2b7 Enable more tests for Template plugin (#16874)
* Enable more tests for Template plugin

* Removed deprecated API

* Fixed typo

* Added internal properties

* Removed incorrect tests

* Fixed code style

* Enabled some tests
2023-04-17 07:07:09 +00:00
Andrew Kwangwoong Park
7282728cec [GPU] Fix incomplete condition for NMS shape inference (#16960)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-16 22:41:57 -07:00
Eddy Kim
9b9c31d46b [GPU] Updated to allocate memory in order of size while deserializing (#16867)
* updated to allocate memory in order of size while deserializing

* fix windows build error

* updated to check dependencies between not connected nodes
2023-04-16 22:33:57 -07:00
Egor Duplenskii
175db3523a [CPU] Add few tests to smoke scope (#16963) 2023-04-17 09:04:57 +04:00
Taylor Yeonbok Lee
c96a5c4b70 Fix prepare padding which was not handling group size properly (#16977) 2023-04-16 21:42:03 -07:00
Ilya Lavrenov
31398bb3eb Fixed deprecated API warnings (#16949) 2023-04-17 07:19:53 +04:00
Roman Kazantsev
18da874c57 [MO] Remove use of mapping file and its generation (#16944)
* [MO] Remove use of mapping file and its generation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix pylinter findings

* Remove usage of mapping file in the layer tests

* Fixing layer tests for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-15 10:38:33 +00:00
Andrew Kwangwoong Park
507b3251ef [GPU] Fix to skip reorder optimization during post_optimize_graph phase (#16908)
* [GPU] Fix to skip reorder optimization during post_optimize_graph phase

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Apply comment

Signed-off-by: Andrew Park <andrew.park@intel.com>

* update condition to check empty padding

Signed-off-by: Andrew Park <andrew.park@intel.com>

* add condition to check batch size

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-15 02:24:06 +00:00
Taylor Yeonbok Lee
824a5aa7fb [GPU] Fix nonzero issue in constant propagate (#16933)
* Fix gather_nonzero not to be marked as constant.
Even though count_nonzero is folded into a constant, gather_nonzero still cannot infer its shape at the moment of constant propagation.

* Apply the fix only for gather_non_zero
2023-04-14 23:16:34 +00:00
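The distinction above — a count that folds to a constant versus an index gather whose shape depends on the data — can be sketched with numpy as a rough analogy (illustrative only, not GPU plugin code):

```python
import numpy as np

x = np.array([0, 3, 0, 7])

# count_nonzero over a constant input folds to a scalar constant
count = int(np.count_nonzero(x))

# the index-gathering step has a data-dependent output shape:
# its shape cannot be inferred until the data values are known
idx = np.nonzero(x)[0]
print(count, idx.shape)
```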
Sofya Balandina
9f3bc22e7a [apiConformance] Refactor core_integration.cpp (#16416) 2023-04-14 23:15:41 +00:00
Roman Kazantsev
4ba0ac5476 [MO][TF FE] Support delayed batch setting (#16937)
* [TF FE] Support delayed batch setting

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Cover BOM list

* Add unit-tests for batch setting with layout

* Apply code-review: check batch size

* Apply code-review: default index for any dimension

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-14 22:35:43 +00:00
Edward Shogulin
8bdc5bc85f [LPT] Support ONNX quantized models coming from ORT PTQ (#14811)
* [LPT] FakeQuantize fuse

* GPU & CPU tests alignment

* refactoring & comments

* doc quick fix

* quick fix
2023-04-14 21:22:55 +00:00
Ilya Lavrenov
de2e9faa58 Corrected pattern for Linux ARM64 tests disablement 2023-04-14 22:28:58 +04:00
Gorokhov Dmitriy
cc6fd80d0a [CPU] Fixed Softmax and TopK nodes initialization for ARM devices (#16950) 2023-04-14 22:13:42 +04:00
Oleg Pipikin
7ce40996e5 Fix copy constructor and assignment for ov::Any (#16757)
* Fix copy constructor and assignment for ov::Any

* Fix1

* Apply comments

* Add test

* Fix code style

* Fix2

* Fix3
2023-04-14 22:12:18 +04:00
Aleksandr Voron
dc941f69ae fix (#16969) 2023-04-14 22:09:50 +04:00
Aleksandr Voron
fe98b8ee13 reduce 6d+ fix (#16931) 2023-04-14 22:09:22 +04:00
Ilya Lavrenov
df5ada8b19 Skipped failed tests on Linux ARM64 (#16970) 2023-04-14 21:56:28 +04:00
Liubov Talamanova
0c0aa5c997 [POT] Fix POT CI (#16955) 2023-04-14 17:21:01 +00:00
Tomasz Jankowski
129670ab1e [Transformations] Fix Parameter name override while removing Select node (#16934)
Details:
Applies valid node replacement method which avoids Parameter name override

Tickets: 101209

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
2023-04-14 18:36:25 +02:00
Sun Xiaoxia
25058da48f Hot Fix: threading is disabled with "threading=omp" (#16923)
* fix omp threading disable

* the divisor and dividend are reversed
2023-04-14 20:24:28 +04:00
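The "divisor and dividend are reversed" bug mentioned above has this general shape (hypothetical numbers, not the plugin's actual code):

```python
total_threads = 8
num_streams = 2

# correct: split the thread pool across streams
threads_per_stream = total_threads // num_streams

# with the operands reversed, integer division silently yields 0
# whenever streams < threads, effectively disabling threading
reversed_result = num_streams // total_threads
print(threads_per_stream, reversed_result)
```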
Ilya Lavrenov
f5c2db73d5 Moved heavy OVInferConsistencyTest tests to nightly (#16967) 2023-04-14 20:02:06 +04:00
Anastasiia Pnevskaia
24c9d95779 Support of unnamed input for MO Python API. (#16373)
* Support of unnamed input for MO Python API.

* Code correction, tests fix.

* Small fix.

* Added tests for unnamed input, code fixes.

* Small code correction.

* Removed code comment.

* Added tests, fixed bugs.

* Minor corrections, added comments.

* Code refactoring.

* Added defaults for InputCutInfo.

* Fixed error.

* Small fixes.

* Removed wrong change.

* Fixed error.

* Corrected input description.
2023-04-14 19:37:46 +04:00
Irina Efode
ae34720818 [CONFORMANCE] Add device check in parallelization over devices (#16964)
* [CONFORMANCE] Add device check in parallelization over devices

* Remove extra
2023-04-14 15:50:35 +01:00
Vladimir Paramuzov
231569db16 [GPU] Fix group axis value for blocking desc (#16936) 2023-04-14 14:42:21 +00:00
Tatiana Savina
cf12f92fae DOCS shift to rst - IR articles (#16437)
* add IR documentation
2023-04-14 14:11:00 +00:00
Tatiana Savina
9e5be9ad24 DOCS shift to rst Advanced topics (#16454) 2023-04-14 16:06:59 +02:00
Ilya Lavrenov
9b38e5168f Updated oneDNN to fix crash on aarch64 Linux (#16961) 2023-04-14 17:49:21 +04:00
Irina Efode
d07fa6f80e [CONFORMANCE] Fix Opset filters (#16928)
* [CONFORMANCE] Fix filters related to opsets

* fix

* Fix op_summary

* Update op_summary.cpp

* fix

* fix
2023-04-14 17:42:42 +04:00
Irina Efode
fd824cf036 [CONFORMANCE] Correct passrate when added skipped tests (#16844)
* init

* Refactor

* Static and dynamic approach

* next

* fix

* small fixes

* fix
2023-04-14 17:00:19 +04:00
Ilya Lavrenov
b9f82e37b9 Removed WAs from packaging scripts related to old ARM plugin (#16952) 2023-04-14 16:17:12 +04:00
Vitaliy Urusovskij
04a4971481 Small docs fixes (#16945) 2023-04-14 15:14:48 +04:00
Aleksandr Voron
2c7cbdb293 [TEMPLATE] Skip TopK tests for ARM (#16946)
* skip topk tests for arm

* changed macros

* added include
2023-04-14 14:50:10 +04:00
Marcin Kusmierski
d6f7e5e84d [GNA] Fix UT for adding extra segments to PWL-s after convolution (#16732) 2023-04-14 11:25:10 +02:00
Maciej Kwapulinski
435a79a2a3 Fix stride height setting in input_conv test (#16813) 2023-04-14 08:53:24 +02:00
Marcin Kusmierski
67aa807892 Fix smoke_LSTMCellBasicCommon for GNA 3.5 (#16924) 2023-04-14 08:43:30 +02:00
Egor Duplenskii
e98bd0dae4 [CPU] Correct crop in FQ optimized formula (#16887) 2023-04-14 10:43:05 +04:00
Michael Frank Hansen
a7228534af DOCS Adding results for RPL-S (#16862)
* Adding results for RPL-S
* Create OVMS-benchmark-data.csv
2023-04-14 08:01:59 +02:00
Aleksandr Voron
55fa8da5e4 [CPU] MVN 1D fix in ACL Executor (#16930) 2023-04-14 10:00:20 +04:00
Xuejun Zhai
802742e59f split evaluate_map.cpp to small files (#16216)
* Split evaluate_map.cpp

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix compiler error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Add op v7::Gelu

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-04-14 06:57:19 +04:00
Pavel Esir
68f46ff9a1 [MO] compress_to_fp16=False by default (#16854)
* compress_to_fp16=False by default

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* note about RAM consumption for FP16 compressed models

* detailed notion about RAM usage

* update 'get_compression_message()'

* corrected get_compression_message: remove info about RAM

* fix pytorch convert layer tests

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-14 01:16:41 +00:00
Ilya Lavrenov
de8f34c8f0 Fixed plugin name in tests for ARM CPU (#16932) 2023-04-13 22:35:21 +04:00
Ilya Lavrenov
85f9d1392c Used cmake interface in ARM compute (#16929) 2023-04-13 22:35:03 +04:00
Luwei Zhou
6aeb054e48 [CPU] Use ONEDNN3.x weight/dest scale API to optimize perf (#16805)
* [LPT][CPU] Added callback for AddTransformation

* [WIP] Convolution scales fusion

* Force to use weight scale to test performance.

* Update on interface.

* Use weight scale to adapt to ONEDNN 3.x API changes.

* Update the code.

* Update ONEDNN fix for gemm_x8s8s32x_conv kernel

* Fix the bug in ONEDNN and deconvFusingScale.

* Fuse FC Bias when having DQscale.

* WR to perf regression on

* Update onednn version.

* Fix bug and clean code.

* FC fusing dq scale bug fix.

* Add more comments and debug information.

* Fix CI issues.

* Merge ONEDNN changes.

* Fix CI issues and bugs.

* Apply review comments.

* Update comments.

* Apply review comments.

* Avoid using LPT BiasAttribute RTInfo.

* Applied review comments.

---------

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-04-13 19:02:48 +02:00
Maxim Vafin
25015f9790 [PT FE] Support prim::DictConstruct on the output (#16894)
* Support dict on the output

* Preserve output order
2023-04-13 16:42:17 +00:00
Aleksandr Voron
0426c645eb Fix for ambiguous overloaded function call (#16927)
* fix

* change type to unsigned long long
2023-04-13 19:28:30 +04:00
Shen, Wanglei
3461064507 update benchmark_app to remove setting UNDEFINED with -hint none (#16695)
* Remove setting ov::hint::PerformanceMode::UNDEFINED from benchmark_app

* update benchmark_app

* update python code and description

* update python code

* fix code style issue

* update python code

* update c++ app
2023-04-13 14:29:13 +00:00
Maxim Vafin
c592ecd44e [MO] Fix legacy If (#16613)
* Fix legacy If

* Add test for If op

* Small fix
2023-04-13 18:10:40 +04:00
bstankix
5795a50a22 [docs] Update switchers 5 (#16925) 2023-04-13 16:07:53 +02:00
Egor Duplenskii
a016e4e6bb [IE_TESTS] Avoid any extra work for the skipped tests (#16915)
i.e. do not clone the function if it is unnecessary
2023-04-13 13:23:38 +00:00
Vladimir Paramuzov
5299f26168 [GPU] Handle unsupported eltwise fusion for onednn gemm in dynamic cases (#16875)
* [GPU] Handle unsupported eltwise fusion for onednn gemm in dynamic cases

* Update src/plugins/intel_gpu/tests/fusions/gemm_fusion_test.cpp

Co-authored-by: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>

---------

Co-authored-by: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>
2023-04-13 15:55:44 +04:00
Roman Lyamin
656428bc4f [GPU] Skip kernel logic for Concat fix (#16885) 2023-04-13 15:55:05 +04:00
Min, Byungil
da7ee613a3 [GPU] Disable oneDNN failed TCs on dGPU (#16853)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-13 20:41:29 +09:00
Taylor Yeonbok Lee
df6557cfad [GPU] Fixed not to allocate internal buffer with size 0 (#16899)
* Fixed not to allocate internal buffer with size 0

* Fixed unittest failure
2023-04-13 10:32:12 +00:00
Ilya Churaev
f70954bda9 Fixed build for macOS with LLVM from brew (#16907) 2023-04-13 10:20:30 +00:00
Ilya Churaev
ad2dc4d479 Fixed ARM CPU tests. (#16910)
* Use name from OUTPUT_NAME property

* Fixed plugins without OUTPUT_NAME
2023-04-13 13:29:42 +04:00
Karol Blaszczak
7782d85b26 [DOCS] model caching update to GPU (#16909)
Update GPU.md
Update Model_caching_overview.md

Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2023-04-13 11:09:16 +02:00
Mateusz Tabaka
5d80bca16e [TF frontend] Add test for Split->Conv->Concat scenario (#16816) 2023-04-13 10:42:17 +02:00
Jan Iwaszkiewicz
63c5be3ed2 [PyOV] Fix models checking and ensure correct destructor calls in tests (#16814) 2023-04-13 10:37:05 +02:00
Gorokhov Dmitriy
ae350c7107 [CPU] Fixed unused-private-field compilation errors (#16905) 2023-04-13 12:20:18 +04:00
Anastasiia Pnevskaia
4921d1ad28 Fix for slowdown of convert_model() after multiple runs (#16751)
* Used singleton class for version check.

* Moved VersionChecker to utitl/version.py, added tests.

* Minor corrections.

* Sort imports.

* Small correction.

* Small correction.
2023-04-13 11:59:11 +04:00
Nikolay Shchegolev
061ba1d773 [CPU] Convert i64->i32 for Reference node. (#16797) 2023-04-13 11:55:53 +04:00
Xuejun Zhai
e238bfc1d0 Fix C API test failed with debug version on Windows & MacOS (#16903)
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-04-13 10:59:42 +04:00
Wang Wangwang
1037f24c46 [AUTO] Remove exclusive_async_requests property from AUTO plugin (#16840)
* Remove exclusive_async_requests property from AUTO plugin

* Update test case

* Add test case to test incorrect config

* Remove the test case related to exclusive_async_requests property of AUTO plugin
2023-04-13 06:35:42 +00:00
Vladimir Paramuzov
67c07ccebe [GPU] Support 7D and 8D tensors (#16810) 2023-04-13 09:04:14 +04:00
Tomasz Dołbniak
dcf6fb1e1a Allow stable sort in TopK when sorting by indices (#16811)
* Allow stable sort in TopK when sorting by indices

* Clarification of stable sorting by index and unblocked test

* XFAIL the test again

* Clarification of sorting by indices

* Revert of changes in previous versions op TopK (spec)
2023-04-13 05:26:01 +02:00
Vladislav Golubev
9c6d287a58 [LPT] GroupConvolution plugin tests: test class corrected to restore behavior in arm plugin instances (#16883)
* [LPT] GroupConvolution plugin tests: restored test params default values

* return FQOnData shape automatic generation
2023-04-13 01:41:44 +01:00
Min, Byungil
1ba87971d1 [GPU] fix unit-test seg fault error on dGPU (#16879)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-13 09:20:06 +09:00
Aleksandr Voron
73be9d31b6 Skip CPU tests on ARM platform (#16891)
* [CPU] ARM architecture support

This patch extends existing CPU plugin capabilities with ARM CPUs optimized support

* Fixed undefined reference in unit tests

* refactoring

* Fixed Eltwise node behavior for ARM

* init commit

* tests passed

* fix skip failures

* Apply suggestions from code review

---------

Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-13 02:34:36 +04:00
Ilya Lavrenov
86142b0f4b Fixed compilation with gcc-12 (#16895) 2023-04-13 02:24:19 +04:00
Maciej Kwapulinski
0e975ffbb6 [GNA] smoke_MemoryTest suite enabled for transformation=NONE (#16481)
* relative threshold for smoke_MemoryTest suite adjusted for GNA

* smoke_MemoryTest suite enabled

* GNA MemoryTest. TransformNone: input changed to [5-10]. TransformLatency is disabled.

* RR comments applied

* RR2 comments applied

* RR3 comments applied

* clang-format-9 fix

* RR4 comments applied
2023-04-12 21:16:08 +00:00
Karol Blaszczak
65a49e903c Update prerelease_information.md (#16898) 2023-04-12 20:04:25 +00:00
Ilya Lavrenov
418f70abb0 Improvements related to arm support (#16892) 2023-04-12 23:02:57 +04:00
Taylor Yeonbok Lee
bee357bcf8 Fix softmax perf of stable diffusion (#16869) 2023-04-12 12:01:31 -07:00
Ilya Lavrenov
298bf15a1b Debian / RPM changes for ARM CPU plugin (#16871) 2023-04-12 23:00:07 +04:00
Aleksandr Voron
9b5ca2bb6a Add ACL license (#16889) 2023-04-12 19:53:47 +04:00
Wang, Yang
86d7c97fa9 Update the logic of benchmark app property setting (#16427)
* 1. refine the logic of ov::device::properties setting.
2. config overrides are performed if the same config setting comes from the command line.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update configuration sample file within README.md.

* Update.

* Update.

* 1. Update configuration example file within README.md for Python version.
2. implement conversion of the config DEVICE_PROPERTIES value between the string type and Python dictionary type.
3. Update the configuration file loading and dumping logic.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

* Update.

* Update.

* Update.

* Update.

* 1. Enable configs to be interchangeable between C++ and Python.
2. Update perf_count showing logic.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Revert the logic of showing performance counters.

* Update help msg for loading config option.

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2023-04-12 15:32:54 +00:00
Gorokhov Dmitriy
c283d21215 [CPU] ARM architecture support (#15256)
* [CPU] ARM architecture support

This patch extends existing CPU plugin capabilities with ARM CPUs optimized support
2023-04-12 18:42:05 +04:00
Sofya Balandina
a368e10fff [apiConformance] Stop work after crash and save report (#16539) 2023-04-12 18:31:39 +04:00
Irina Efode
c2a90f4c01 [CONFORMANCE] Fix error with import sigkill (#16884) 2023-04-12 18:21:57 +04:00
Wang Wangwang
c2c2143f45 clean AB property from virtual plugin and core global config (#16877)
* Benchmark_app set ov::hint::allow_auto_batching through compile_model

* Remove the process about allow_auto_batching in set_property of core

* Remove allow_auto_batching and auto_batch_timeout property from AUTO plugin

* Reserve the info logs and add API to check auto_batching

* Update test case, rm AB property test from core config tests

* Update some API in AUTO plugin config
2023-04-12 17:37:57 +04:00
Tomasz Dołbniak
fb49228fec Pillow modes in the preprocessor's resize mechanism (#16601) 2023-04-12 15:30:42 +02:00
Sofya Balandina
ed5148b75f [apiConformance] Refactor io_tensor tests (#16348) 2023-04-12 17:22:01 +04:00
Vladimir Paramuzov
7d4496bb12 [GPU] Remove unused constants from the graph (#16873) 2023-04-12 16:52:26 +04:00
Mateusz Bencer
e737e18b02 [ONNX FE] Fix Squeeze v1 (#16865) 2023-04-12 14:33:49 +02:00
Sergey Shlyapnikov
997f60f1c3 [GPU] Fix shape_of shape inference optimization (#16863) 2023-04-12 15:44:34 +04:00
Mateusz Tabaka
bdd79fe931 CompressQuantizeWeights - use f32 precision when computing scale and zero point (#16794)
Ticket: 101825
2023-04-12 12:42:39 +02:00
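Why f32 matters for the scale computation above can be shown with a toy numpy example (hypothetical value ranges, not the transformation's code): the same range yields a measurably different scale when computed at f16 precision.

```python
import numpy as np

def scale_for(low, high, dtype, levels=256):
    # uniform quantization scale computed at the given float precision
    low, high = dtype(low), dtype(high)
    return (high - low) / dtype(levels - 1)

s16 = scale_for(0.0, 1000.0, np.float16)
s32 = scale_for(0.0, 1000.0, np.float32)
print(float(s16), float(s32))
```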
Szymon Irzabek
496fe7a7db [GNA] Extend unsupported concat detection to include cascaded concat with convolution (#16756) 2023-04-12 12:19:42 +02:00
Przemyslaw Wysocki
69d6ef33fc [PyOV] Align and bump numpy, further tidy up requirements (#16652)
* Align numpy

* Simplify the rest

* Minor change

* Minor change

* Restart CI

* Update paddle reqs
2023-04-12 13:14:38 +04:00
Marcin Kusmierski
b755d17090 [GNA] Fix plugin crash when infinite loop discovered. (#16770) 2023-04-12 10:00:52 +02:00
Maxim Vafin
23c90aecea Add support for opset10 and opset11 in MO IR Reader (#16742)
* Add support for opset10 and opset11 in MO IR Reader

* Fix unique

* Refactor tests

* Fix Unique shape infer

* Update tests

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply review feedback

* Fix BOM tests

* Update tools/mo/unit_tests/mo/utils/ir_reader/ops_test.py

* Improve error log

* Fix test fails when using pytest

* Add changes forgotten in last commit

* Fix error message

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-12 11:35:52 +04:00
Ivan Tikhonov
132dceb146 Delete redundant node copies in TSSqueeze, TSUnsqueeze and TSReduction transformations (#16753)
* Delete redundant node copies in TSSqueeze, TSUnsqueeze and TSReduction transformations, add new tests

* codestyle

* codestyle
2023-04-12 11:30:48 +04:00
Bo Liu
4bb9222c6e fix Paddle unit tests unexpected exceptions and seg fault issue (#16808)
* fix Paddle unit tests unexpected exceptions and seg fault issue

* parse confine from reqfile to keep aligned with other requirements

* Apply suggestions from code review

* Apply suggestions from code review
2023-04-12 11:13:25 +04:00
Karol Blaszczak
4b16c7554e [DOCS] minor fixes for front_ext and pre-notes (#16866) 2023-04-12 07:55:34 +02:00
Roman Lyamin
f8aacf3b19 [GPU] Small fix for gather_nonzero (#16858) 2023-04-12 09:15:49 +04:00
Steve Yoo
0312d8cf1b Skip asymmetric compensation if its type is not data, and add its unittests (#16494) 2023-04-11 20:16:25 -07:00
Roman Lyamin
2312ec79a2 [GPU] Skip failing lstm tests (#16868) 2023-04-12 02:10:42 +02:00
Edward Shogulin
586dd4fb0a [Snippets] BF16 enforce in snippets (#16587) 2023-04-12 01:12:17 +02:00
Anastasia Kuporosova
31aa35b646 [PyOv] remove commented functions without implementation (#16864) 2023-04-12 01:07:29 +04:00
Przemyslaw Wysocki
ea213f687a Fix regex (#16850) 2023-04-12 01:06:54 +04:00
Wang, Yang
3740ba9226 [IE Sample] incorrect nstreams retrieved from plugin (#16849)
* Retrieve the ov::num_streams through compiledModel rather than through plugin.

* Update python version.
2023-04-12 01:06:20 +04:00
Ivan Tikhonov
920900fbda Delete the redundant check in convert method of TF FrontEnd class (#16846)
* remove a check in convert method

* delete unused variables and comment

* leave only one pass::Manager in normalize method
2023-04-12 01:05:16 +04:00
Ilya Churaev
4a43753e02 Enable some tests for Template plugin (#16832)
* Remove the skip of template plugin tests

* Enable some skipped tests for template plugin

* Added cancel callback, collect per-layer statistic, fixed tests

* Fixed template tests

* Rename internal API terminate to cancel

* Fixed windows tests

* Fixed logic with performance counters
2023-04-12 01:02:28 +04:00
Ian Hunter
209db8a29b Update ie_common.h (#16860) 2023-04-12 00:52:02 +04:00
Andrew Kwangwoong Park
63b16baa7e [GPU] Fix strided slice clamped negative begin with negative stride (#16843)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-11 11:52:22 -07:00
Roman Kazantsev
9e89b6c5f6 [TF FE] Support NonMaxSuppression with named outputs (#16835)
* [TF FE] Support NonMaxSuppression with named outputs

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Simplify the test for NMS named outputs

* Share a script for test model generation

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-11 19:14:59 +02:00
Taylor Yeonbok Lee
7513e9dee1 [GPU] Applied w/a to resolve softmax accuracy issue (#16818)
* Applied w/a to resolve softmax accuracy issue
The original impl resulted in an accuracy issue if the leftover is not aligned with the subgroup size
(e.g., for shape [1024, 306] where the lws = 32, itemsNum = 9, leftover = 18, subgroup size = 16).
In such a case, the result was wrong if subgroup block read/write is used.
As a w/a, do not use subgroup block read/write if the leftover is not aligned with the subgroup size.
However, we can come up with better itemsNum size / leftover handling in the follow-up work.

* Fix build error & minor revise

* Fix condition
2023-04-11 10:01:22 -07:00
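The alignment condition behind the workaround above can be sketched in a few lines — a hedged illustration only (the function name and shape of the check are illustrative, not the actual OpenCL kernel code), reproducing the commit's [1024, 306] example:

```python
def block_io_is_safe(row_elems: int, lws: int, subgroup_size: int) -> bool:
    """Subgroup block read/write is only safe when the leftover elements,
    after distributing work across the local work size, are a multiple
    of the subgroup size."""
    items_num = row_elems // lws              # elements handled per work item
    leftover = row_elems - items_num * lws    # remainder not evenly divided
    return leftover % subgroup_size == 0

# The commit's example: shape [1024, 306] with lws = 32 gives itemsNum = 9
# and leftover = 18, which is not a multiple of the 16-wide subgroup -> unsafe.
assert block_io_is_safe(306, 32, 16) is False
assert block_io_is_safe(512, 32, 16) is True  # leftover = 0 -> safe
```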
Mateusz Tabaka
4fbd094cba BroadcastConstRangeReplacement - skip unsqueeze if Broadcast input is 1D (#16851)
Ticket: 106636
2023-04-11 17:59:03 +02:00
Vladislav Golubev
98afdc848a [LPT] ConvolutionTransformation: support for a new per channel dequantization representation (#16687)
* [LPT][TESTS] GrConv: added test cases with per channel dq on weights and without reshape

* FoldFQ: don't transform FQ with quantization by several dimensions

* ConvolutionTransformation: supported GrConv with per channel dq on weights and without reshape

* fold_reshape: refactoring
2023-04-11 14:07:23 +02:00
Vladislav Golubev
296c2d6603 [Transformations] NonZero horizontal fusion: review leftovers (#16639)
* Review comments applied

* codestyle

* review comments applied
2023-04-11 15:42:43 +04:00
Ekaterina Aidova
ca2265395d [PT FE]: fix aten::mean behaviour for provided dtype (#16790) 2023-04-11 14:29:29 +04:00
Ekaterina Aidova
d41663694c [PT FE]: aten::gather (#16784)
* [PT FE]: aten::gather

* add detach and sign
2023-04-11 14:28:05 +04:00
Ekaterina Aidova
d407bc1b3b [PT FE] fix invalid reshape shape after aten::index (#16821)
* [PT FE] fix invalid reshape shape after aten::index

* support aten::index_select
2023-04-11 12:41:59 +03:00
Eddy Kim
f6ee6e92f8 [GPU] fixed loop serialization logic for multi-stream execution (#16838)
* fixed loop serialization logic for multi-stream execution

* fixed the multistream unit test
2023-04-11 12:40:37 +04:00
Roman Kazantsev
f991f92f8c [TF FE] Test ResourceGather operation and fix debug caps (#16819)
* [TF FE] Test ResourceGather operation and fix debug caps

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix test generation script

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-11 11:33:32 +04:00
yanlan song
527c2dad2a query capacity before popping (#16828)
* query capacity before popping

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-11 11:25:29 +04:00
Mingyu Kim
615177ae09 [GPU] Update onednn version to latest v3.1 (#16848) 2023-04-11 15:05:35 +09:00
Roman Lyamin
234fe92931 [GPU] MVN 1d dynamic batch case fix (#16826) 2023-04-11 09:42:51 +04:00
Oleg Pipikin
efc647a512 [Snippets][CPU] Fix cycle dependency check in snippets tokenizer (#16760) 2023-04-10 22:36:29 +04:00
Ilya Churaev
81821f3dbb Remove vopset typo (#16833)
* Remove vopset typo

* remove ::
2023-04-10 19:50:06 +04:00
Ilya Lavrenov
f1d6725477 Removed legacy src files from inference library (#16839) 2023-04-10 19:26:09 +04:00
dependabot[bot]
81af7f52cb Bump pytest from 7.2.0 to 7.3.0 in /src/bindings/python (#16830)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.2.0 to 7.3.0.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.2.0...7.3.0)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-10 16:50:12 +04:00
Ilya Lavrenov
feb08c408f Return benchmark_tool to openvino-dev wheel (#16834) 2023-04-10 16:34:51 +04:00
Ilya Lavrenov
023dc1fa3d Remove warnings during cython call (#16831) 2023-04-10 16:28:15 +04:00
Roman Kazantsev
f36ee94b4b [TF FE] Correct SpaceToBatch layer test (#16823)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-10 14:41:02 +04:00
Evgenya Stepyreva
bc7a121a20 Removes legacy transformations from CNNNetworkNGraphImpl::reshape (#15853)
* Removes legacy transformations from CNNNetworkNGraphImpl::reshape

* Removes legacy transformations from CNNNetworkNGraphImpl::reshape

* 6 more models propagate shape more precise

* Removes legacy includes

* Fix invalidation

* Test change

* win fix

* Ilya's suggestion

* Unary ops -- removed shape relying on the output of the op, used shapes from the input tensor instead

* Code clean up

* Equal: bounds evaluation

* Equal: bounds evaluation

* Restrict TypeRelaxed from partial_value propagation

* TypeRelaxed: propagate lower/upper bounds

* Remove debug prints

* fix build

* GPU shape inference problem fixed

* Generate Proposals: better dynamic shape propagation

* Style
2023-04-10 12:36:56 +02:00
Ilya Churaev
b921bf2e29 Remove redundant copy of ov::Any in has_rt_info method (#16802)
* Remove redundant copy

* Fixed Python segfault and avoid a copy of ov::Any
2023-04-10 13:56:35 +04:00
Sergey Shlyapnikov
2075dcb7c3 [GPU] Fix Interpolate assert (#16806) 2023-04-10 12:29:01 +04:00
Sofya Balandina
ed50d3782c [apiConformance] Define mandatory scope for infer request tests (#16418) 2023-04-10 12:27:20 +04:00
Irina Efode
b7bf760516 [CONFORMANCE] Add re-run of interrupted tests to avoid unreported tests (#16782)
* [CONFORMANCE] Add re-run of interrupted tests to avoid unreported tests

* Fix mistake with interrupted

* test

* Remove extra prints
2023-04-10 12:23:59 +04:00
Wang Wangwang
57684e28ff [AUTO] Remove cache_dir property from AUTO plugin (#16775)
* Remove cache_dir property from AUTO plugin

* Pass the secondary property to hardware plugin

* Update test case

* Update test case, meta plugin will pass the properties to device without checking
2023-04-10 11:42:24 +04:00
Sergey Shlyapnikov
48dee7c30a [GPU] Fix missed weights params update (#16815) 2023-04-10 10:28:06 +04:00
Kelvin Choi
c7fe5ca73b [Coverity] Resource leak in primitive_inst.cpp (#16771) 2023-04-10 10:27:09 +04:00
Egor Duplenskii
b5a0497c19 [CPU][TESTS] Fix cmake subset target (#16710)
cmake iterates over a list and cannot iterate over a space-separated string
2023-04-10 10:00:35 +04:00
hyunback kim
f4179e8ee4 [GPU] Add to check FC bias data-type logic in issued kernel selection. (#16628)
* Fix unit test failure with broadcast primitive
* After introducing shape canonicalization, the static broadcast unit test failed.
* Guilty commit is https://github.com/openvinotoolkit/openvino/pull/16166

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-10 14:54:15 +09:00
Chenhu Wang
d1a23e964e [CPU] Store emitter keep source vec values intact (#16313) 2023-04-10 09:51:50 +04:00
Chen Peter
13874b31e9 [AUTO] Initialize variable / reduce variable copy (#16743)
* [AUTO] Initialize variable / reduce variable copy

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Be compatible with C++11

https://stackoverflow.com/questions/18184096

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix C7555

C7555 “use of designated initializers requires at least ‘/std:c++latest’” in extern “C” code.

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Init in constructor and use auto const &

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix cpplint issue

common.hpp:72:  You don't need a ; after a }

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Support all possible parameter numbers (0-6)

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix cpplint issues

Signed-off-by: Peter Chen <peter.chen@intel.com>

---------

Signed-off-by: Peter Chen <peter.chen@intel.com>
2023-04-10 10:21:40 +08:00
Szymon Irzabek
8c69100439 [GNA] Fix tests which create convolution with stride > kernel size on height dimension (#16804) 2023-04-07 15:42:51 +02:00
yanlan song
769353df00 Support dynamic output models with all possible devices instead of CPU only (#15594)
* with dynamic output models, do not use intermediate IE blobs

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* add some log/comment

Signed-off-by: fishbell <bell.song@intel.com>

* refine and enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* change implementation

Signed-off-by: fishbell <bell.song@intel.com>

* fix issue with 1.0API

Signed-off-by: fishbell <bell.song@intel.com>

* enable unit test

Signed-off-by: fishbell <bell.song@intel.com>

* integrate test with folder change

Signed-off-by: fishbell <bell.song@intel.com>

* clean up cmake

Signed-off-by: fishbell <bell.song@intel.com>

* fix warnings

Signed-off-by: fishbell <bell.song@intel.com>

* fix conflict with master

Signed-off-by: fishbell <bell.song@intel.com>

* optimize common mock infer request

Signed-off-by: fishbell <bell.song@intel.com>

* rebase with master

Signed-off-by: fishbell <bell.song@intel.com>

* resolve merge conflict

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-07 20:44:36 +08:00
Daria Mityagina
8c40bfd9c7 detected vulnerability with shared_ptr (#16791) 2023-04-07 16:25:05 +04:00
Sofya Balandina
c6fc8e5adc [apiConformance] Exec_network_base refactor and define mandatory scope (#16413) 2023-04-07 16:17:50 +04:00
Ivan Tikhonov
72952bdc45 Disable ConstantFolding for ShapeOf subgraph in TS transformation (#16765)
* Disable ConstantFolding for ShapeOf expressions in TS transformation

* update ModelWithEmptyTensorListAndPushBack: add ShapeOf subgraph
2023-04-07 14:50:59 +04:00
Maxim Vafin
8b7e6878e8 [TF FE] Better support for named ports in tensorflow frontend (#16697)
* Fix in create_same_type_const_scalar; accurate updating type for parameter when inlining function call body

* Added Unique to the list of operations with named output ports (another MUSE fix)

* Draft: working version of extension with named ports in TF

* Merge fixes

* Refactor and productize POC

* Clean up

* Fix build

* Fix code style

* Fix lib so extension test

* Fix namespaces

* Remove usage of Any from CreatorFunction

* Fix build

* Fix arm build

* Apply review feedback

* Fix build after merge

* Apply suggestions from code review

---------

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-04-07 14:16:23 +04:00
Zlobin Vladimir
1eb6ad20c3 Update open_model_zoo submodule (#16779)
Fix model serialize

Ticket 107646
2023-04-07 12:45:49 +04:00
Fang Xu
4ade0e5533 fix wheel_blacklist_extension for macos (#16799) 2023-04-07 11:28:56 +04:00
Yuan Hu
06cacfe2a7 Revert "[CPU] optimize shape infer of Reshape (#16537)" (#16703)
This reverts commit 75c62ea320.
2023-04-07 06:18:58 +00:00
Roman Lyamin
132b657977 [GPU] Added GatherND dynamic support and changed logic for empty tensor support (#16690) 2023-04-07 09:19:05 +04:00
Vladimir Paramuzov
6d82f36050 Enable nop elimination for f16 type (#16749) 2023-04-07 09:18:27 +04:00
Xuejun Zhai
3be946371d Xuejun/remove api tensor related (#15877)
* [Remove APIs] remove api set_partial_shape()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove api set_element_type()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove api set_tensor_type()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Revert "[Remove APIs] remove api set_tensor_type()"

This reverts commit 96f89e222d.

* Revert "[Remove APIs] remove api set_element_type()"

This reverts commit 33ebb61977.

* Apply suggestions from code review

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
Co-authored-by: Evgenya Stepyreva <eva.my.link@gmail.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2023-04-07 06:37:25 +02:00
Eddy Kim
07437eec1e updated to store dims and data type for binary post ops (#16792) 2023-04-06 20:13:53 -07:00
yanlan song
51967fd27b Optimize and fix fps number in log (#16615)
* fix incorrect fps with cpu_help

Signed-off-by: fishbell <bell.song@intel.com>

* fix some threading issue

Signed-off-by: fishbell <bell.song@intel.com>

* indenting

Signed-off-by: fishbell <bell.song@intel.com>

* fix lock

Signed-off-by: fishbell <bell.song@intel.com>

* formatting

Signed-off-by: fishbell <bell.song@intel.com>

* do print in destructor, avoid CI script parse failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix build warning

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-07 09:31:40 +08:00
Eddy Kim
df4d7bd3e9 fix uninitialized scalar variables (#16772) 2023-04-06 17:28:45 -07:00
Mingyu Kim
e17a6f29bf [GPU] Use unique symbol name for typedef (#16777) 2023-04-07 09:03:28 +09:00
Paul Youngsoo Ahn
24ab3f7c41 [GPU] Fix sub kernel ordering issue in kernels_cache (#16746)
* [GPU] Fix sub kernel ordering issue in kernels_cache (#16746)

* [GPU] Add unit test for sub kernel idx (#16746)

* [GPU]Follow up code review (#16746)

* [GPU] Skip kernel compilation when current node is optimized out in update_impl (#16746)

* [GPU]Code refactoring (#16746)
2023-04-06 12:44:25 -07:00
Nadezhda Ageeva
5e2f424fd0 [HETERO] Enable smoke_Hetero_CachingSupportCase tests (#16711) 2023-04-06 21:19:37 +04:00
Zlobin Vladimir
c7c7c4bb05 samples/cpp remove unused code (#16787) 2023-04-06 20:59:00 +04:00
Ivan Tikhonov
4812879318 Enable transformation callback for TS transformations (#16767) 2023-04-06 20:42:01 +04:00
Ilya Lavrenov
d2deae225a Added rpath for TBB libs to find hwloc (#16788) 2023-04-06 20:33:32 +04:00
Sofya Balandina
5ccc743707 [apiConformance] Fix relative all (#16518) 2023-04-06 18:24:13 +02:00
Vladislav Golubev
5f416dc4d2 [LPT] Introduced BiasAttribute (#16781)
* Extended check on ConvSum fusing

* [LPT] Introduced 'bias' rt attribute

* [CPU][TESTS] Added FQLayerDQBias tests
2023-04-06 16:01:04 +00:00
Anastasiia Pnevskaia
906ec7ee1b Fixed command lines in MO docs. (#16780)
* Fixed command lines in docs.

* Removed freezing from tutorial.
2023-04-06 18:30:59 +04:00
Jan Iwaszkiewicz
92eb62fe63 [PyOV] Fix getting all names in OVDict (#16665)
* [PyOV] Fix getting all names in OVDict

* Add docs and adjust tests

* Fix linter issues

* Adjust typing and add test for incorrect key type

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-06 14:44:37 +02:00
Ilya Lavrenov
d732024ccb Extended a list of libraries to exclude from wheel package (#16764) 2023-04-06 16:37:11 +04:00
Tomasz Jankowski
cb436112b2 [Transformations] Assert valid output shape on ReduceReshape fusion (#16712)
* Assert valid output shape on ReduceReshape fusion

* Remove useless test step

* Reduce tests flow

* _dummy_

to retrigger failed nonrestartable check
2023-04-06 11:55:57 +00:00
Ilya Churaev
cafc7359c5 Set test case name for ReshapeMatMul tests (#16705)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-06 15:40:20 +04:00
Anastasia Kuporosova
dbe051aa79 [POT] use serialize method (#16768) 2023-04-06 15:02:24 +04:00
Katarzyna Mitrus
7a5c472ccc [ShapeInference] Fix Group Convolution shape infer - relax inputs size check (#16707)
* Relax group conv inputs size check

* Tests

* Relax inputs size checks for Backprop Convs
2023-04-06 10:59:37 +00:00
Min, Byungil
932a668a2f [GPU] Bugfix in reorder_bfyx_to_blocked_format kernel (#16689)
+ Bugfix bfyx_to_blocked_format kernel of reorder prim for double blocked format
+ issued format is bs_fs_yx_bsv16_fsv32. Added test-cases.
+ Fixed accuracy issue from check_accuracy_issue

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-06 18:08:34 +09:00
Marcin Kusmierski
bc0c8374da [GNA] Fix issues with GNA 3.5 - Increasing kernel to stride for Convolution2D (#16642)
* [GNA] Add increasing kernel in case stride is bigger than kernel.

* [GNA] fix review comments
2023-04-06 09:34:38 +01:00
Wang Wangwang
53d9b26e1f [AUTO] Show the detailed failure message when AUTO load network failed (#16297)
* Show the detailed failure message when AUTO load network failed

* Add test case

* Update test case to check multi load network failed

* Update test case based master

* RM _availableDevices hard code from AUTO

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-04-06 15:41:32 +08:00
Vladimir Paramuzov
b2a64e8c3a [GPU] Enable dynamic shapes support for gather elements (#16727) 2023-04-06 11:14:36 +04:00
Shen, Wanglei
9f0e557744 HOT FIX: update cpu map calculation for Windows to avoid incorrect total number of processors (#16763)
* update cpu map calculation for Windows

* update typo

* update typo

* update for hybrid CPU

* update for hybrid CPU

* update for typo
2023-04-06 15:08:42 +08:00
Ilya Churaev
70ef0b5316 Minimize rebuild for Makefiles generator (#16729)
* Add dependency from ov_plugins.hpp only for files which use it

* Remove rebuild files depends on CI_BUILD_NUMBER changes

* Try to fix static build

* Fixed comments

* Fixed build

* Merged some change

* Try to fix build

* Try to fix nvidia build

* Take LTO value from target property
2023-04-06 11:02:28 +04:00
Ilya Churaev
f2894d09e9 Fixed windows build after #16716 (#16773) 2023-04-06 11:02:10 +04:00
Roman Lyamin
38c8a3d15b [GPU] Added custom canonicalize_shapes for Gather (#16733) 2023-04-06 10:50:57 +04:00
Wang Wangwang
362389c733 [DOCS][AUTO] Add enable_runtime_fallback property to AUTO Device Sele… (#16645)
* [DOCS][AUTO] Add enable_runtime_fallback property to AUTO Device Selection article

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-06 14:35:55 +08:00
Sun Xiaoxia
f95fd27c16 HOT FIX: CPU binding cannot follow numactl's control (#16736)
* fix numactl command issue

* fix comments
2023-04-06 13:47:34 +08:00
Oleg Pipikin
1c564226f3 Deprecate util functions in public api (#16716)
* Deprecate util functions in public api

* Add deprecation suppression for usage inside openvino

* Fix clang-format

* Fix1
2023-04-06 06:32:04 +04:00
Roman Kazantsev
6e97c82c97 [Spec] Fix Interpolate specification: input index port (#16762) 2023-04-05 23:01:02 +04:00
Anastasia Kuporosova
2e0bac34db [PyOV] Fix warnings (#16674)
* [PyOV] Fix warnings

* another win

* fix codestyle

* fix test

* fix any

* exclude some warnings
2023-04-05 20:01:43 +02:00
Irina Efode
ff8f361778 [CONFORMANCE] Solve the problem to generate filters (#16713)
* [CONFORMANCE] Solve the problem to generate filters

* trigger linux build

* Add handling of not run tests

* Remove extra
2023-04-05 21:25:05 +04:00
Sebastian Golebiewski
8c2766c4bc DOCS shift to rst - Converting TensorFlow YOLO Models (#16735)
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-04-05 16:49:43 +02:00
Vitaliy Urusovskij
7442a17240 Add Core property to switch from mmap to read in IR Frontend (#16600)
* Add Core property to switch from `mmap` to `read`
in IR FrontEnd

* Add tests on `ov::enable_mmap` property

* Add `enable_mmap` in C & Py APIs

* ClangFormat
2023-04-05 18:22:11 +04:00
Sungeun Kim
fef04e468a [GPU] add WA to avoid hang issue. (#16724) 2023-04-05 16:32:42 +04:00
Andrew Kwangwoong Park
44cfbea9ab [GPU] Fix synchronization issue from wrong stream in multi-stream mode on dGPU (#16671)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-05 16:29:47 +04:00
Vladimir Paramuzov
f5e199c494 [GPU] Don't reorder weights when can reinterpret (#16714)
* [GPU] Don't reorder weights when can reinterpret

* [GPU] Test fixes
2023-04-05 16:20:51 +04:00
Anastasiia Pnevskaia
4098434233 Parameter list and descriptions for mo.convert_model() method in docstring (#16459)
* Added convert_model() params docs.

* Added auto-generating of most cli params.

* Added auto-generating of cli params.

* Small correction.

* Removed wrong change.

* Corrected default values.

* Fixed errors, added tests.

* Small correction.

* Corrected params descriptions, moved cli specific params to separate file.

* Moved params specifics to utils/help.py.
2023-04-05 14:48:13 +04:00
Ekaterina Aidova
837f5a7d53 [PT FE]: fix aten::index inconsistent reshape (#16741)
* [PT FE]:  fix aten::index inconsistent reshape

* add index name, return false

* Update src/frontends/pytorch/src/transforms/aten_index_replacer.cpp
2023-04-05 10:44:25 +02:00
Bogdan Pereanu
73ab0dd065 Fixing run_timetest python script for input and output precision (#16661)
* Fixing run_timetest python script for input and output precision

* Update code according to the PR review

* Update run_timetest according to the last review

* Add input_precision and output_precision to test_timetest as well

* Set input/output precision per model
2023-04-05 12:16:27 +04:00
Ilya Churaev
f2d4c96032 Fixed add_output for new subgraph (#16726) 2023-04-05 11:39:47 +04:00
Tatiana Savina
c474f564a9 DOCS Change sample path (#16738)
* change path

* change path for sample

* change architecture in path

* change windows sample comment
2023-04-05 11:34:07 +04:00
Pavel Esir
f9bd2d2c1e [ie transformations] improve SoftMax fusion for better mixed precision inference (#16574)
* improve SoftMax fusion

* style and unit-test fix

* more precise SoftMax unit-tests

* rewritten SoftMaxFusion with single matcher

* fixes for align_mixed_fp32_fp16_types_test.cpp and mark_subgraph_to_keep_in_mixed_precision_test.cpp

* add include for pass/pattern/op/or.hpp

* get rank only when necessary

* style-fix

* add comment why SoftmaxFusion is called manually

* fix copy_runtime_info
2023-04-05 11:28:48 +04:00
Roman Kazantsev
45daa2095f [TF FE] Add diagnostics capabilities via Framework nodes (#16706)
* [TF FE] Add diagnostics capabilities via Framework nodes

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Refactor normalize logic

* Applied code-review feedback: fix in get_unsupported_operations_and_failures

* Handle unknown exception type

* Store only first encountered failure

* Update src/frontends/tensorflow/tests/convert_unsupported.cpp

* Apply code-review feedback: use stringstream

* Correct Key for exception message

* Fix build

* Use helper for creation of fw node with exception message inside

* Add test for conversion with unknown exception

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-05 11:28:10 +04:00
Karol Blaszczak
18c876bf23 Update openvino_sphinx_theme.css (#16740) 2023-04-04 18:55:16 +02:00
Ivan Tikhonov
093990118d Remove legacy TransposeSinking transformation (#16731)
* delete TransposeSinkingOVTF transformation

* delete include from tf_lite frontend
2023-04-04 18:51:10 +04:00
Roman Kazantsev
c034975183 [TF FE] Fix layer tests for BatchToSpace and add to the pre-commit (#16722)
* [TF FE] Fix layer tests for BatchToSpace and add to the pre-commit

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Specify type for batch_shape

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-04 18:46:12 +04:00
Mateusz Bencer
d4b394c1b6 Skip Reduce* ops layer tests due to ORT error (#16730)
* skip reduce mean

* skip other reduce ops
2023-04-04 14:50:34 +02:00
Tatiana Savina
1ee0c151ea DOCS shift to rst - Conversion tutorials (#16704) 2023-04-04 11:30:19 +00:00
Shen, Wanglei
9f54504232 update cpu properties name to enable_hyper_threading and enable_cpu_pinning (#16723) 2023-04-04 15:25:11 +04:00
Maciej Smyk
8691ec2779 Update README.md (#16700) 2023-04-04 13:15:31 +02:00
Sebastian Golebiewski
f4fe856d9d [DOCS] Adding a new class for sortable tables - for master (#15314) 2023-04-04 12:42:19 +02:00
Eddy Kim
90615cf26a [GPU] Fix OneDNN primitive attr serialization logic (#16654)
* fix onednn primitive attr serialization logic

* added an onednn fc fusing serialization test

* added gemm fusing serialization tests
2023-04-03 18:24:40 -07:00
Maria Pushkina
4f7f7c31ee [CVS-104864] Action: Renamed group for concurrency (#16715) 2023-04-03 23:26:25 +04:00
Mikhail Ryzhov
06e6a69356 Revert "[GNA]Fix crash in gna_plugin when using POT (#16003)" (#16719)
This reverts commit 0a56927671.
2023-04-03 20:12:57 +02:00
Shen, Wanglei
f6c7213ae4 support Ecore only in streams calculation (#16552)
* support Ecore only in streams calculation

* fix merge conflict
2023-04-03 23:33:18 +08:00
Marcin Kusmierski
0a56927671 [GNA]Fix crash in gna_plugin when using POT (#16003) 2023-04-03 16:22:56 +01:00
River Li
dec425c408 [C API] remove UNDEFINED property value (#16709) 2023-04-03 16:54:03 +04:00
Mateusz Tabaka
03ab0e4388 Add ConvolutionToGroupConvolutionFusion (#16688)
Fuses Split->series of Conv->Concat to GroupConvolution op.

Ticket: 105170
2023-04-03 14:38:44 +02:00
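The pattern this fusion targets — split the input channels, convolve each chunk, concatenate the outputs — is mathematically a single grouped convolution. A minimal numpy sketch of that equivalence (illustrative only, not OpenVINO code; 1x1 convolutions are used so each conv reduces to a matmul over channels):

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def split_conv_concat(x, weights):
    # The pre-fusion pattern: Split -> per-branch Conv -> Concat
    chunks = np.split(x, len(weights), axis=0)
    return np.concatenate([conv1x1(c, w) for c, w in zip(chunks, weights)], axis=0)

def group_conv(x, weights):
    # The fused form: one convolution with block-diagonal weights,
    # i.e. each group only sees its own slice of input channels
    big_w = np.zeros((sum(w.shape[0] for w in weights), x.shape[0]))
    r = c = 0
    for w in weights:
        big_w[r:r + w.shape[0], c:c + w.shape[1]] = w
        r += w.shape[0]
        c += w.shape[1]
    return conv1x1(x, big_w)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 3))                    # 4 input channels
ws = [rng.standard_normal((2, 2)) for _ in range(2)]  # 2 groups of 2 channels
assert np.allclose(split_conv_concat(x, ws), group_conv(x, ws))
```

Rewriting the pattern this way replaces several small convolutions and a concat with one op, which is what makes the fusion worthwhile.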
Przemyslaw Wysocki
6237868437 Dependabot-ignore line (#16679) 2023-04-03 15:49:27 +04:00
Vladimir Paramuzov
f7d15e12c8 [GPU] Refactor dimensions jitter and max rank related code (#16603)
* [GPU] Refactor dimensions jitter and max rank related code
2023-04-03 13:34:06 +02:00
River Li
b7b788917d Fix double free in snippets (#16702) 2023-04-03 14:42:10 +04:00
Shen, Wanglei
86da15e621 enable new property ov::hint::use_cpu_pinning (#16383)
* enable ov::hint::use_cpu_pinning

* update test case for comments

* update header file

* update header file

* Delete cpu_streams_calculation.hpp

* Revert "Delete cpu_streams_calculation.hpp"

This reverts commit a1074ca843.

* update config name

* fix code styple issue

* update for merge conflict
2023-04-03 18:14:33 +08:00
Sebastian Golebiewski
e7c1cdf982 DOCS shift to rst (#16686) 2023-04-03 11:41:28 +02:00
Vitaliy Urusovskij
016d36f032 Add mmap notes in dldt_depl_optimization_latency.md (#16682) 2023-04-03 12:10:12 +04:00
Zlobin Vladimir
44330b22bd Update open_model_zoo submodule (#16678)
Upgrade onnx to 1.13. Ticket 102716
2023-04-03 12:09:18 +04:00
Roman Kazantsev
f4fca2d578 [TF FE] Activate TopK layer test with the second output in the pre-commit (#16691)
* [TF FE] Test the second output for TopK operation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Switch off no sorted case

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-03 11:38:27 +04:00
Sebastian Golebiewski
b2e4857a64 DOCS shift to rst - AccuracyAwareQuantization Parameters (#16663) 2023-04-03 08:29:40 +02:00
Sebastian Golebiewski
02b35d7984 DOCS shift to rst - POT CLI Example (#16649) 2023-04-03 08:29:10 +02:00
Sebastian Golebiewski
3a5b819685 DOCS shift to rst - POT API examples (#16627) 2023-04-03 08:28:47 +02:00
Sebastian Golebiewski
2f5be5e81c DOCS shift to rst - Post-Training Optimization (#16621) 2023-04-03 08:28:24 +02:00
Sebastian Golebiewski
848c9e3b76 DOCS shift to rst (#16616) 2023-04-03 08:27:02 +02:00
Maciej Smyk
cddbb667a5 [DOCS] Automatic Batching Update (#16607) 2023-04-03 08:26:21 +02:00
Maciej Smyk
950b46ecad DOCS shift to rst - Model Creation C++ Sample & Model Creation Python* Sample (#16637) 2023-04-03 08:24:35 +02:00
Maciej Smyk
bb20151c9d DOCS shift to rst - Hello Query Device C++ Sample & Hello Query Device Python* Sample (#16650) 2023-04-03 08:24:02 +02:00
Maciej Smyk
f5dced8e69 DOCS shift to rst - Hello Classification Samples (#16681) 2023-04-03 08:23:44 +02:00
Yaroslav Torzuk
8491f15ba7 [GPU] Softmax for stable diffusion (#15863) 2023-04-03 10:21:02 +04:00
Anton Voronov
b64cbff10b [CPU] FQ shape agnostic kernel (#16585) 2023-04-03 09:55:49 +04:00
Karol Blaszczak
d7f70b647b [DOCS] shift to rst -install guides aptyumdocker (#16680) 2023-04-03 07:48:57 +02:00
Wang Wangwang
99eda5b5e1 [PYTHON][CAPI][AUTO] Add ENABLE_STARTUP_FALLBACK and ENABLE_RUNTIME_FALLBACK proper… (#16436)
* [AUTO] Add ENABLE_STARTUP_FALLBACK and ENABLE_RUNTIME_FALLBACK properties to Python API

* Add DEVICE_BIND_BUFFER property

* Add AUTO properties to C API

* Update test case && Update AUTO properties in PYTHON API

* Create dedicated files for auto plugin

* Update header files

* Update test case

* Modify code style

* Update variable name

* Add test case for invalid input value
2023-04-03 11:56:48 +08:00
Oleg Pipikin
e978db3132 Move new util functions from public api to dev api (#16683) 2023-04-01 11:46:20 +04:00
Ilya Churaev
186a1ccdcd Move interpreter test to template plugin (#16673) 2023-03-31 20:49:07 +00:00
Oleg Pipikin
66ea57addd Move memory tests from core to template plugin tests (#16460)
* Move memory tests from core to template plugin tests

* Rewrite tests to use template plugin

* Don't clone model in INTExecutable

* Add reset and modify tests

* Delete old test

* Fix clang-format

* Fix VariableState::set_state

* Enable and add var modify tests

* Fix INTExecutable

* Apply comments
2023-03-31 19:55:55 +02:00
guozhong wang
341217de99 Unify code path for MULTI and AUTO CTPUT hint (#16349)
[MULTI] pass through to AUTO with CTPUT hint
After this change
-- MULTI doesn't support setting infer request via CPU(4),GPU(8).
-- MULTI doesn't support CompiledModel::set_property() and ExecutableNetwork::GetConfig().
2023-03-31 18:40:41 +02:00
Roman Kazantsev
9a5a8f6abc [TF FE] Move to TopK-11 operation and update downgrading TopK transformation (#16590)
* [TF FE] Move to TopK-11 operation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update downgrading transformation

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-31 17:44:58 +02:00
dependabot[bot]
c33a3f87f0 Bump attrs from 22.1.0 to 22.2.0 in /tests (#16676)
Bumps [attrs](https://github.com/python-attrs/attrs) from 22.1.0 to 22.2.0.
- [Release notes](https://github.com/python-attrs/attrs/releases)
- [Changelog](https://github.com/python-attrs/attrs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python-attrs/attrs/compare/22.1.0...22.2.0)

---
updated-dependencies:
- dependency-name: attrs
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-31 11:35:38 +00:00
Gorokhov Dmitriy
6e09e53f0d [CORE] Added optimized fp32->fp16 precision conversion implementation (#16672) 2023-03-31 11:02:39 +00:00
Egor Duplenskii
6d1e5d336d [CPU] Enable execution_mode API property (#16367) 2023-03-31 14:29:18 +04:00
Sebastian Golebiewski
f9ff518d16 DOCS shift to rst - Model Optimization Guide articles (#16598) 2023-03-31 11:26:04 +02:00
Sergey Shlyapnikov
bb93bfd90f [GPU] Add clDNN shape agnostic kernels usage as an initial impls for dGPU (#16018)
* [GPU] Add clDNN shape agnostic kernels usage as an initial impls for dGPU

* [GPU] Use layout as a key of weights cache, implement logic for weights cache capacity calculation based on available memory
2023-03-31 13:05:59 +04:00
Zhang Yi
fc88bed604 [CPU] Improvement for NonZero and Gather (#16641) 2023-03-31 11:05:43 +02:00
Chen Xu
35398e339d [CPU] Implement TopK-11 to CPU plugin (#16522) 2023-03-31 10:28:20 +02:00
Pavel Esir
6d064d26cb remove deprecated MO args (#16626)
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-03-31 12:28:04 +04:00
Ilya Lavrenov
ee0bb79ed6 Fixed LTO build (#16629)
* Partially fixed LTO

* Fixed issues with cnpy LTO

* CPU

* Disabled failing GPU test
2023-03-31 11:34:42 +04:00
Anton Voronov
43fca3d231 [CPU] Introduced shape agnostic eltwise (#15976) 2023-03-31 11:28:54 +04:00
Ilya Churaev
1b9bd61767 Added constructor from string for element Type (#16643)
* Added constructor from string for element Type

* Fixed code style

* Removed WA for tests
2023-03-31 07:24:32 +00:00
Maciej Smyk
73e75c58ba DOCS shift to rst - Hello NV12 Input Classification C++ Sample & Hello NV12 Input Classification C Sample (#16664) 2023-03-31 09:07:59 +02:00
Maciej Smyk
385bbbd49b DOCS shift to rst - Hello Reshape SSD C++ Sample & Hello Reshape SSD Python* Sample (#16662) 2023-03-31 09:07:30 +02:00
Sebastian Golebiewski
8fad140a02 DOCS shift to rst - Quantization articles (#16596) 2023-03-31 09:06:52 +02:00
Oleg Pipikin
9cf4ee1eae Fix sanitizer out-of-memory error (#16457)
* Fix sanitizer out-of-memory error

* Add implementation for Windows

* apply comments

* Fix1

* Fix2

* Fix3
2023-03-31 07:49:45 +04:00
Bogdan Pereanu
bf8e5cb4a2 Fix ITT build fail (#16648) 2023-03-31 01:18:13 +04:00
Roman Kazantsev
fc95d8e544 [TF FE] Align opset usage in utils (#16656)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-30 17:58:46 +00:00
Ilya Lavrenov
e94f7b25c0 Fixed cmake dev warnings (#16655) 2023-03-30 21:01:41 +04:00
dependabot[bot]
5e149aa0dd Bump test-generator from 0.1.1 to 0.1.2 in /tests (#16625)
Bumps [test-generator](https://github.com/kevinastone/generator) from 0.1.1 to 0.1.2.
- [Release notes](https://github.com/kevinastone/generator/releases)
- [Changelog](https://github.com/kevinastone/generator/blob/master/HISTORY.rst)
- [Commits](https://github.com/kevinastone/generator/compare/v0.1.1...v0.1.2)

---
updated-dependencies:
- dependency-name: test-generator
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-03-30 16:58:02 +00:00
Sebastian Golebiewski
ab96cc939b DOCS shift to rst - Embedding Preprocessing Computation (#16659) 2023-03-30 16:59:58 +02:00
Ilya Churaev
b3503c8b7a Fixed coverity for ov::Any (#16647) 2023-03-30 13:12:50 +00:00
Tatiana Savina
961a99586a DOCS shift to rst Supported Model Formats (#16657)
* add model intro doc

* add supported model formats page

* add TF doc

* add pytorch doc

* add paddle  doc

* add mxnet doc

* add caffe doc

* add kaldi doc

* fix format

* fix code snippets

* fix code snippets

* fix kaldi doc

* kaldi code snippets

* fix format

* fix list

* directive test

* fix note

* move code block

* code snippets style
2023-03-30 14:44:31 +02:00
Pawel Raasz
392b67f082 Fix pooling padding update (#16531)
* Review adaptive max pool shape inference

* Review AvgPool and MaxPool

* Review convolution operator

* Review GroupConvolution shape inference

* Review ConvolutionBackpropData operator

* Review GroupConvolutionBackpropData op

* Review BinaryConvolution operator
- add common bases for convolution ops
- refactor convolution ops

* Review DeformableConvolution operator

* Use new convolution shape_infer in GPU

* Fix build and test issues

* Correct set output spatial shape
in default constructed back prop convolutions

* The convolution shape_infer takes pads as parameters;
the external padding can come from operators or from other classes' padding properties, so shape_infer should not modify the operator's padding when
called from a plugin

* Apply code formatting

* Fix padding validation and update

* Max and Avg pool don't update op properties
from plugin shape inference
- use ShapeInferWithPadding for pooling operators

* Remove not used function in shape_inference

* Fix evaluates in MaxPool

* Relax convolution shape infer inputs size check

* Remove unused entryFallbackWithPadding class

* Remove unused dilations variable

* Remove unused resize_attributes from max_pool_base

---------

Co-authored-by: mitruska <katarzyna.mitrus@intel.com>
2023-03-30 11:55:53 +00:00
Sebastian Golebiewski
7983e00b00 DOCS shift to rst - Cutting Off Parts of a Model article (#16640) 2023-03-30 13:05:53 +02:00
Irina Efode
87365fa21d [CONFORMANCE] Parallelization over HW devices (#16431)
* init

* just fix version

* Update merge script

* remove extra code

* Uncomment correct func

* dd

* validate_nvidia

* Small refactoring

* Trigger linux build

* Update main.cpp

revert

* trigger

* fix build

* Update main.cpp
2023-03-30 14:45:49 +04:00
totoka-intel
086ee93bcd [doc] Install guide openvino_2022 link location fix (#16572) 2023-03-30 12:40:20 +02:00
Ilya Lavrenov
ccf9c19f61 Deprecated UNDEFINED values for execution / performance hints (#16563)
* Deprecated UNDEFINED values for execution / performance hints

* Update src/tests/functional/plugin/gpu/shared_tests_instances/behavior/ov_plugin/properties_tests.cpp

* Fixes

* Fixes
2023-03-30 13:48:19 +04:00
Ilya Lavrenov
5b203efb9c Disable PDPD test on Linux debian post-commit (#16644) 2023-03-30 12:24:17 +04:00
Bogdan Pereanu
5eea99d96c Update timetest tool to support ip and op params config (#15916)
* User can set input and output precision for timetest tool

* Update run_timetest.py with the ip and op options as well

* Use only one getType function

* Add extra line at the end of the file

* Remove unused parameters

* Update comment accordingly

---------

Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
2023-03-30 11:27:51 +04:00
Sebastian Golebiewski
3573a38e0b DOCS shift to rst - Model Optimizer Usage (#16630) 2023-03-30 08:24:22 +02:00
Sebastian Golebiewski
712d1b99d1 DOCS shift to rst - Post-training Quantization with NNCF (#16631) 2023-03-30 08:23:55 +02:00
Vladislav Golubev
b0e6b1e83c [TF FE] NgramCompilation test fix (#16636)
* [TF FE] NgramCompilation test fixed

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-03-29 18:34:23 +00:00
Artyom Anokhov
2a01695370 Deployment Manager: updated configs with 2023.0.0 layout and versions (#16633) 2023-03-29 19:34:26 +02:00
Ilya Lavrenov
0250f62d11 Revert inference precision to be a hint (#16634) 2023-03-29 18:59:33 +04:00
Maciej Smyk
7d8f4af78a DOCS shift to rst - Automatic Speech Recognition C++ Sample & Automatic Speech Recognition Python* Sample (#16609) 2023-03-29 16:39:09 +02:00
Karol Blaszczak
10668f4f3a Docs shift to rst - install guides linux (#16568) 2023-03-29 15:40:53 +02:00
Vladislav Golubev
8d59252966 [Transformations] NonZero horizontal fusion (#16571)
* Added ValuePredicate 'consumers_more_than'

* NonZero fusion

* NonZero fusion tests
2023-03-29 17:23:37 +04:00
Edward Shogulin
a9360f8045 [CPU] Element-wise precision selection fix (#16547) 2023-03-29 12:31:30 +00:00
Roman Kazantsev
0c2308506f [TF FE] Fix leftovers from review (#16619)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-29 16:28:37 +04:00
Tomasz Jankowski
f7e898893d Add PRelu fusion (#16617) 2023-03-29 11:32:57 +00:00
Przemyslaw Wysocki
591c3e61c5 [PyOV] Simplify requirement files (#15343)
* Partial progress

* Finish v1

* Cleanup

* Remove useless files

* Fix path to pdpd

* Fix onnx path

* Minor change

* Rework MO

* Minor change

* Remove some costraints

* Add MO constraints

* Update gitignore for MO

* Minor change

* Apply tech sync discussion

* Cleanup

* CR comment

* Debug ONNX FE

* simplify ONNX FE

* Update cmake

* Hardcode ONNX requirement

* Add dependency resolver to cmake

* Add constraints for openvino/tests

* Add missing pytest-html

* Fix -c path

* Revert debug changes to path

* Add cmake to copy constraints.txt

* Update dependabot

* Remove slash

* Remove cmake

* Debug prints

* Minor changes

* Move reqs check to separate file

* Add requirements parser to benchmark_tool

* Fix smoke tests constraints

* Minor fixes

* Minor change

* My fixes were apparently wrong

* Debug - self.executable_path

* Debug - add singledispatch to tests and tools

* Debug - print IE_APP_PATHs

* Revert "Debug - print IE_APP_PATHs"

This reverts commit 67ccb6d3f5.

* Revert "Debug - add singledispatch to tests and tools"

This reverts commit 3b945931e2.

* Revert "Debug - self.executable_path"

This reverts commit 3aa724eff6.

* update dependabot

* update dependabot

* Skip benchmark_app tests

* Use CMAKE_CURRENT_BINARY_DIR in cmake

* Remove debug prints

* minor change

---------

Signed-off-by: p-wysocki <przemyslaw.wysocki@intel.com>
2023-03-29 14:27:27 +04:00
Sun Xiaoxia
988a8dd6a9 Xiaoxia/Optimize the streams calculation process (#15777)
* add _big_core_logic_streams

* modify core binding with cpu mapping table

* get _cpu_ids with querying cpu_mapping_table

* fix mac build issue

* fix cpu func test issue

* fix clang-format issue

* remove getCoreOffset and getThreadStep

* modify cpuMapAvailable to return false on Windows

* remove core binding in latency mode

* add bind core on windows

* add model prefer threads

* modify streams calculating schedule in ApplyPerformanceHints

* modify MakeDefaultMultiThreaded and Stream

* add unified cpu binding with cpu_mapping on linux and windows. add GPU core binding interface. modify streams calculation scheduling

* fix code style issue

* modify default streams to 1 to fix ci test issue

* add SetStreamtoConfig, modify getNumOfAvailableCPUCores to fix an issue with consecutive LoadNetwork calls

* modify code according to comments

* fix build issue on macos

* fix macos error

* fix cputest issue

* fix build issue on macos

* move features about CPU to lin_system_config.cpp

* fix code style

* fix debian_arm build failure

* fix macos build issue

* fix code style

* fix test issue on windows

* fix code style

* add latency in hybrid_aware condition

* add the condition used all cores in latency mode

* fix code style

* fix code style

* add init_cpu

* fix code style

* fix streams=2 issue

* fix multi gpu core bind issue

* modify interface

* fix debian arm build issue

* add bind core in different socket

* fix code style

* fix build issue on windows

* fix GPU set_executor_config sync issue

* fix latency issue

* fix bind_cores issue

* modify model prefer on tigerlake machine

* modify according to comments

* fix code style

* modify GPU reserve-CPU interface, remove the bind-core-on-Windows feature

* fix code style

* add 3rd type core in cpu_mapping_table

* fix build issue

* update test case

* modify core bind behavior in latency mode

* remove 3rd core type function

* update format

* add lock in get_task_flag

* not bind core in latency mode

* change model_prefer to 0 with latency mode on core machine. bind core with latency mode on core machine

* remove a void thread

* modify condition of create task_area

* modify comments

* fix according to comments

* fix spelling mistake

* fix according to comments

* fix code style

---------

Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2023-03-29 18:26:27 +08:00
Ilya Churaev
f3dcf93f96 Remove suppression Wno-delete-non-abstract-non-virtual-dtor (#16560)
* Remove suppression Wno-delete-non-abstract-non-virtual-dtor

* Fixed Allocator warning

* Suppress warning for GPU plugin

* Skip warning for GNA

* Fixed preprocessing

* Added virtual destructor for base plugin class

* Some fix for CPU

* Suppress for CPU

* Fixed any

* Fixed meta

* Disable warning for paddle

* Fixed Allocator tests

* Move suppress to paddle

* Fixed benchmark_app
2023-03-29 14:19:30 +04:00
Vladislav Golubev
df3c06ecb4 [CPU] Ngram node fusion (#16131) 2023-03-29 13:58:41 +04:00
Karol Blaszczak
f4da729a19 [DOCS] prerelease notes 0329 (#16584) 2023-03-29 11:27:12 +02:00
Yuan Hu
75c62ea320 [CPU] optimize shape infer of Reshape (#16537)
* add reshape shapeinfer in cpu plugin

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add squeeze and unsqueeze

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add precision i8 i64 on test

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix code out of bounds risk

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* test performance of this PR

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix code issue

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* Revert "test performance of this PR"

This reverts commit f4f9f002de28d03bc1c55c24067f75b74824904c.

* fix reviewer comment

fix throw message
do not create an ov::Shape instance
remove i8 test case

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix pytorch layer test failed issue

input shape (1,0) with output pattern (-1) is a valid input

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix windows compile issue

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix rebase mistake

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-03-29 11:26:49 +02:00
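The note above that an input shape of (1,0) with output pattern (-1) is valid mirrors generic reshape semantics: -1 means "infer this dimension from the remaining element count", and zero-element tensors are legal. A NumPy analogy (an illustrative sketch, not the CPU plugin's shape-infer code, assuming NumPy semantics match the intent):

```python
import numpy as np

# A tensor with shape (1, 0) has zero elements; reshape(-1) must infer
# the single output dimension from that count, which is 0 -> shape (0,).
empty = np.zeros((1, 0))
flat = empty.reshape(-1)
assert flat.shape == (0,)

# The same inference on a non-empty tensor: 2 * 3 elements -> shape (6,).
dense = np.zeros((2, 3)).reshape(-1)
assert dense.shape == (6,)
```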
Tingqian Li
05ab0f32d7 [CPU] Simple fix of redundant const-weight reordering for brgconv node in dynamic model (#16305) 2023-03-29 10:27:08 +02:00
Mateusz Tabaka
556d469f6b [PADDLE] add paddle opextension support (#16439)
* add opextension support

* support opconversion

* fix ambiguous test constructor

* fix CI failure

* add tag to avoid compiler ambiguous

* move tests to layer_tests & remove PaddleTag

* static cast

* use create_ov_node_by_name

---------

Co-authored-by: Luo Cheng <cheng.luo@intel.com>
2023-03-29 12:23:47 +04:00
Roman Kazantsev
35e03d33bb [TF FE] Support frozen models in text protobuf format aka pbtxt (#16604)
* [TF FE] Support frozen models in Text Protobuf format aka pbtxt

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix gen_wrapper.py for pbtxt

* Fix is_supported method

* Fix gen_wrapper.py script

* Adopt test_text_frozen_format unit-test

* Update src/frontends/tensorflow/src/frontend.cpp

* Update src/frontends/tensorflow/src/frontend.cpp

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-29 11:58:08 +04:00
Min, Byungil
7a95830d24 [GPU] Disable failed onednn tests (#16614)
* Resolved failed unit-tests for fully connected

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-03-29 15:53:04 +09:00
Min, Byungil
ea6e3481cd [GPU] Fix failed onednn tests (#16410)
* Fix failed unit-tests on dGPU

+ modified fully_connected_random_test_i8_3d so it is not ambiguous
+ oneDNN does NOT support the i64 type for reorder; added an exception.
+ bugfix in prepare_primitive_fusing for the activation-function exception
+ Add exception logic for dynamic shapes to select the OCL type in is_node_for_onednn

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-03-29 15:50:09 +09:00
Roman Kazantsev
966c47e7cd [MO] Remove Python version check (#16612)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-29 09:23:14 +04:00
Katarzyna Mitrus
f7891aa034 [Interpolate-11] Reference implementation for Interpolate-11 (#16342)
* Reference impl for interpolate-11 init

* ND support init

* Tests clean up

* Add evaluate method for Interpolate-11

* New version tests init

* Type parametrized tests

* Tests duplication clean up and reusage of v4 test cases

* Add clipping to the type bounds

* Style fix

* Add float type tests

* Fix default ports values

* Commented code clean up

* Add passing cube_coeff param

* Tests clean up

* Add separate namespace

* Adjust variable names

* Adjust function name

* Use vectors instead of raw ptrs

* update func to static inline

* Adjust types

* Add Interpolate-11 to template plugin evaluates map

* Revert interpolate-11 core evaluate support

* Use const ref to filter

* Use static cast

* Update link
2023-03-29 07:11:56 +02:00
Taylor Yeonbok Lee
daf562832f [GPU] Fix malfunction in crop static kernel in dynamic shape scenario (#16586)
* Fix malfunction in crop static kernel in dynamic shape execution

* Add unittest

* Fix lint error
2023-03-29 04:19:24 +00:00
Sergey Shlyapnikov
6c766a81b5 [GPU] Treat warnings C4267 as errors for Windows (#16345) 2023-03-28 22:56:47 +00:00
Mateusz Tabaka
b82bedd648 Add Conversion and Op Extension to Pytorch frontend (#16434)
Tickets: 98766 and 98767
2023-03-29 00:25:29 +02:00
Wilson Seok
79b267033c [GPU] Fix program::replace() to copy duplicated connection from single constant (#16529)
* fix program::replace() to copy duplicated connection from single constant

* add unit test

* modified with review feedback
2023-03-28 19:25:22 +00:00
Vitaliy Urusovskij
40cc006bae Enable MapAllocator in IR Frontend (#12673)
* Enable MapAllocator in IR Frontend

* Fix `ov_infer_request_ppp` test

With `mmap()`ed IR, the .bin file can't be deleted until it is unmapped,
which revealed a leak in the test

* Add comment to Win `CreateFile()` regarding
FILE_SHARE_DELETE

* Unmap .bin file before IR files deletion

Wait ov::Model deletion to trigger .bin file unmapping
before IR files deletion

* ClangFormat

* Add `use_map_allocator` switch in FE

When the FE is used directly (e.g. via MO), `mmap()` is OFF.
When the FE is used via Core, `mmap()` is ON.
2023-03-28 23:24:13 +04:00
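The MapAllocator commit above hinges on a general property of memory mapping: the mapped file is backed by the OS page cache rather than copied into process memory, and (notably on Windows) it cannot be deleted until the mapping is released. A small stdlib sketch of that pattern, unrelated to OpenVINO's actual MapAllocator implementation:

```python
import mmap
import os
import tempfile

# Write a stand-in "weights" file, then map it read-only instead of
# reading it into a process-owned buffer.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00\x01\x02\x03")

with open(path, "rb") as f:
    mapping = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    weights = bytes(mapping[:])  # data is only materialized here
    mapping.close()              # must unmap before the file can be deleted on Windows

os.unlink(path)  # safe now: no live mapping holds the file open
```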
Pawel Raasz
796bd98913 Review convolution classes for shape inference aspects (#16375)
* Review adaptive max pool shape inference

* Review AvgPool and MaxPool

* Review convolution operator

* Review GroupConvolution shape inference

* Review ConvolutionBackpropData operator

* Review GroupConvolutionBackpropData op

* Review BinaryConvolution operator
- add common bases for convolution ops
- refactor convolution ops

* Review DeformableConvolution operator

* Use new convolution shape_infer in GPU

* Fix build and test issues

* Correct set output spatial shape
in default constructed back prop convolutions

* The convolution shape_infer takes pads as parameters;
the external padding can come from operators or from other classes' padding properties, so shape_infer should not modify the operator's padding when
called from a plugin

* Apply code formatting

* Fix padding validation and update

* Use shape inference with padding instead fallback
for DeformableConvolution from opset1

* Update convertPadding function to be template
2023-03-28 19:10:08 +00:00
Maxim Vafin
8d90c11a35 Fix sporadic fails when beta==0 in baddmm (#16610)
* Fix sporadic fails when beta==0 in baddmm

* Remove sporadic test loop
2023-03-28 18:47:35 +00:00
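The beta==0 fix above reflects a general floating-point pitfall rather than anything PyTorch-specific: baddbmm computes `beta * input + alpha * (batch1 @ batch2)`, and when `input` holds non-finite garbage (e.g. uninitialized memory), multiplying by beta==0 does not neutralize it, because IEEE 754 defines 0.0 * inf = nan. A minimal sketch of why the case must be special-cased (an illustration, not the frontend's actual code):

```python
import math

beta = 0.0
garbage = float("inf")  # stands in for uninitialized input data

# Naive formula: the beta term stays in the sum and poisons the result.
naive = beta * garbage          # 0.0 * inf -> nan

# Guarded formula: skip the beta term entirely when beta == 0.
guarded = 0.0 if beta == 0.0 else beta * garbage
```

Only the guarded form yields a deterministic result, which is why beta==0 tests fail only sporadically: they depend on what happens to sit in the uninitialized buffer.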
Mateusz Bencer
4403433309 [ONNX FE] Implementation of ONNX STFT op (#16461) 2023-03-28 20:47:17 +02:00
Eddy Kim
e169c7cd38 fix a bug in permute_bfzyx_to_bfyxz (#16599) 2023-03-28 18:19:35 +00:00
Irina Efode
3849d5aa02 [INSTALL] Fix setupvars (installation for MacOS) and build python (#16514)
* Fix installation + Python build on MacOS

* Update setupvars.sh

* Update setupvars.sh

* Revert

* revert

* Update scripts/setupvars/setupvars.sh

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-03-28 22:03:22 +04:00
Chen Peter
a2218ab169 Add notes for oneTBB (#16606)
* Add notes for oneTBB

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Update wording

---------

Signed-off-by: Peter Chen <peter.chen@intel.com>
2023-03-28 21:19:00 +04:00
Alexandra Sidorova
38c924a3ae [Snippets] Added support of BF16/I8/U8 for MatMul (#15063) 2023-03-28 20:49:26 +04:00
Paul Youngsoo Ahn
253e4eb366 [GPU] Remove duplicated OpenCL kernel compilation on static model (#16262)
* * update kernel_ids using hash value
* Change set to unordered_map for kernels_code
* replace unique_id to hash value
* Remove hash_val params
* remove redundant codes (#16262)
** Remove unique_id in program_node
** Remove gen_kernel_id
** Remove set_kernels_source
** Remove remove_kernels
** Remove kernel_idx in kernels_cache

* * Use kernel_impl_params instead of kernel_id
* Divide batch when entry_points are duplicated
* rollback removing unique_id

* * Fix get_kernel failure issue (#102467)
 - Modify hash function of custom_gpu_primitive and generic_layer
 - Add operator== to generic_layer for the _kernels map in kernels_cache
 - Fix invalid kernel_impl_params related to a unique_ptr lifecycle issue

* Improve kernels_cache (#102467)
* Move add_kernels_source step to build_implementations
* Change replace kernels_code key to kernel_impl_params
* Return kernel vector in get_kernels

* Modify function name to get_kernels (#102467)

* Fix functions related graph serialization (#102467)

* Fix failure to run dynamic model (#102467)

* Add unit test

* Code review follow-up
- Add const to input params
- Add missing code to check kernel duplication in kernels_cache

* Add const to input params (#102467)

* [GPU] update hash and operator== for generic_layer and custom_gpu_primitive (#102467)

* [GPU] override get_kernels_source in generic_layer and custom_gpu_primitive (#102467)

* [GPU] Fix onednn build error (#102467)

* [GPU] Fix Linux build error (#102467)

* [GPU] kernels_cache::get_kernels return vector of clone of cldnn::kernel (#102467)

* Updated serialization logics for improved kernel caches (#16262)

* primitive key kernel cache for serialization
* kernel serialization with binaries hash
* fix kernel cache init function for deserialization
* removed unnecessary codes

* [GPU] Update comment and fix test failure (#16262)

* [GPU] Fix custom_gpu_primitive unit test failures (#16262)

* [GPU] Improved kernels cache serialization (#16262)
* removed hash in serialization logic
* update not to create a new kernels_cache for serialization
* code refactoring in serialization logic

* [GPU] Follow-up code review (#16262)

* [GPU] modify lock(#16262)

* [GPU] Fix custom_gpu_primitive unit test failure (#16262)

---------

Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2023-03-28 18:48:19 +02:00
Roman Kazantsev
17c3e67336 [TF FE] Add layer test for Mish activation function (#16557)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-28 18:43:11 +02:00
Maxim Vafin
44f0419a0b Get mo version once (#16576) 2023-03-28 15:45:08 +00:00
Ilya Churaev
05b0c58521 Add doc for ENABLE_QSPECTRE option (#16605)
* Add doc for ENABLE_QSPECTRE option

* Updated the link
2023-03-28 19:41:20 +04:00
Mikhail Ryzhov
55e9cae54f [GNA] Pre/Post processing via evaluate (#15691) 2023-03-28 14:50:02 +00:00
Ekaterina Aidova
6fc0b6479e [PT FE]: revert usage mo.convert_model in pt layer tests (#16573)
* [PT FE]: revert usage mo.convert_model in tests

* fix failed test
2023-03-28 16:06:21 +04:00
Pawel Raasz
49d150b3b8 Review PSROIPooling class for shape inference aspects (#16447)
* Review ROIPooling class
- check interval shape and label propagation
- add template shape_infer
- add shape infer into cpu plugin
- add test with StaticShape

* Use get_output_roi instead of get_output_size

* Add missing includes

* Review PSROIPooling operator
- review interval and label propagation
- add template shape_infer implementation
- add shape_infer to cpu plugin
2023-03-28 15:02:16 +04:00
Marcin Kacprzak
d9d1df2fe3 [GNA] Implemented ExecutionMode support in GNA Plugin (#16396) 2023-03-28 14:47:12 +04:00
Shen, Wanglei
a726f0ae38 Enable new property ov::hint::scheduling_core_type (#16106)
* enable apply_processor_type()

* declare PROCESSOR_TYPE

* enable readProperties

* test case for get_property()

* enable set_property() and test cases

* reduce changes

* fix code style issue

* fix python test case issue

* remove python interface

* move processor type definition out of dev_api

* refine coding

* add dependency

* update header file

* update description

* merge intel_cpu header file

* add inline in-code documentation

* change 'UNDEFINED' to 'DEFAULT'

* remove ProcTypeConfig

* refine change

* refine change

* refine process_type to scheduling_core_type

* refine description

* fix code style issue

* change to ov::hint::scheduling_core_type

* fix code style issue

* fix code style issue

* fix python issue

* fix python issue

* fix python issue

* fix python issue

* change core_type_cfg to ov::hint::SchedulingCoreType

* update test case for comments

* update test case for comments

* add default for comments

* update code style

* update for comments

* update for comments

* fix typo

* move cpu_map_scheduling into threading folder

* update for merge conflict

* update for code style
2023-03-28 10:04:30 +04:00
Vladimir Paramuzov
906939a1f1 [GPU] Fixed invalid is_dynamic flag value for scalar inputs (#16565) 2023-03-28 10:03:51 +04:00
hyunback kim
d06a22f4e4 [GPU] Support FC+eltwise fusion in fp16 for OneDNN (#16303)
* [GPU] Support FC+eltwise fusion in fp16

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-28 14:49:49 +09:00
Ilya Churaev
5dff012233 Fixed Warnings in reference implementations (#16559)
* Fixed Warnings in reference implementations

* Removed suppression from shape_inference
2023-03-28 00:45:40 +04:00
Ilya Churaev
167bf7e16a Added test to check that layout can be created from serialized format (#16575) 2023-03-28 00:43:30 +04:00
Shen, Wanglei
815d4abc03 enable new property ov::hint::use_hyper_threading (#16176)
* enable apply_processor_type()

* declare PROCESSOR_TYPE

* enable readProperties

* test case for get_property()

* enable set_property() and test cases

* reduce changes

* fix code style issue

* fix python test case issue

* remove python interface

* move processor type definition out of dev_api

* refine coding

* add dependency

* update header file

* update description

* merge intel_cpu header file

* add inline in-code documentation

* change 'UNDEFINED' to 'DEFAULT'

* remove ProcTypeConfig

* refine change

* refine change

* enable new property use hyper threading

* update description

* resume legacy code

* change to ov::hint namespace

* update including header file

* update C API and Python API

* update description for comments

* update test case for comments

* update function location for comments

* fix typo

* fix typo

* fix code style issue and update test case

* move cpu_map_scheduling into threading folder
2023-03-28 00:39:26 +04:00
Orest Chura
aa0df8e535 [Python][Build] Fix building openvino wheel on Windows (#16374)
* Add snippets dependency

* - removed dependency back
- added an INTEL_CPU condition on snippets configuring -> no dependency when configured w/o CPU

* Disable snippets_ngraph_functions conditionally if inference_engine_snippets are not configured

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-03-27 23:52:58 +04:00
Sebastian Golebiewski
68e067062f Update Doxyfile.config (#16564) 2023-03-27 23:47:10 +04:00
Sebastian Golebiewski
1ca94326cb DOCS shift to rst - Benchmark Samples and Tools (#16566) 2023-03-27 18:29:05 +02:00
Sebastian Golebiewski
5c5a29d095 DOCS shift to rst -Sync Benchmark Samples (#16561) 2023-03-27 18:28:16 +02:00
Maciej Smyk
6e99b48ecc DOCS shift to rst - OpenVINO™ Samples and Get Started with C++ Samples (#16577) 2023-03-27 18:26:47 +02:00
Irina Efode
9863b32792 [CONFORMANCE] w/a Api Conformance crash for NVIDIA (#16508) 2023-03-27 17:57:10 +02:00
Maciej Smyk
7ccf1c89cf DOCS shift to rst - Image Classification Async C++ Sample & Image Classification Async Python* Sample (#16580) 2023-03-27 16:54:50 +02:00
Roman Kazantsev
5e9ea6a146 [TF FE] Refactor utils routine (#16554)
Move all openvino_conversion routines into utils. Avoid using Squeeze without an axis,
which can create a dynamic output rank

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-27 11:10:00 +00:00
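The "Squeeze without axis" concern above is easy to illustrate outside OpenVINO: with no axis given, the number of removed dimensions depends on which dims happen to equal 1 at runtime, so the output rank cannot be known statically. A NumPy analogy (assumption: NumPy's squeeze semantics stand in for the OpenVINO op here):

```python
import numpy as np

x = np.zeros((1, 3, 1))

# Without an axis, every size-1 dimension is removed, so the output rank
# depends on the runtime shape: here rank 3 collapses to rank 1.
no_axis = np.squeeze(x)
assert no_axis.shape == (3,)

# With an explicit axis, exactly one known dimension is removed, and the
# output rank is static regardless of the other dims' values.
with_axis = np.squeeze(x, axis=2)
assert with_axis.shape == (1, 3)
```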
Roman Lyamin
5113a5538c [GPU] Added shape canonicalization mechanism (#16166) 2023-03-27 15:02:06 +04:00
Tomasz Adamowicz
4936d4bb1d [GNA] Introduce 16Byte memory alignment for LNL (GNA3.6) (#16363)
* [GNA] Introduce 16Byte memory alignment for LNL (GNA3.6)

* update after review
2023-03-27 10:42:34 +01:00
Mang Guo
5e835e327b [CPU] Fix edge memory share issue (#16202) 2023-03-27 13:20:51 +04:00
Piotr Krzemiński
6b70c449ba [PT FE] Add aten::Chunk implementation (#16035)
* [PT FE] Add chunk implementation:

* [PT FE] Fix chunk int64 instead of const node errors, add tests for chunking

* [PT FE] Test Chunk-If implementation

* [PT FE] Change the translate to replace chunk implementation, use VariadicSplit instead of Slice

* [PT FE] Reduce artifacts from debugging

* Update test_chunk.py

* [PT FE] Improve & debug chunk implementation:

* [PT FE] Simplify implementation, fix remaining bugs

* [PT FE] Statify the split lengths output

* [PT FE] Clear code, remove debugging artifacts
2023-03-27 11:16:16 +02:00
Mateusz Mikolajczyk
7d16ee1835 [PT FE] Add torchvision::deform_conv2d translation (#16450)
* Initial commit

* Initial commit

* Cleanup

* Improve tests

* Make NodeContext const
2023-03-27 11:13:32 +02:00
Roman Kazantsev
bb9de29062 [TF FE] Add layer test for Bucketize (#16556)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-27 12:03:07 +04:00
cecilia peng
a1b8a6a941 [CPU] Disable ConvertNMSToNMSIEInternal (#16128) 2023-03-27 11:22:53 +04:00
dependabot[bot]
64e9dc32cd Bump awalsh128/cache-apt-pkgs-action from 1.2.4 to 1.3.0 (#16562)
Bumps [awalsh128/cache-apt-pkgs-action](https://github.com/awalsh128/cache-apt-pkgs-action) from 1.2.4 to 1.3.0.
- [Release notes](https://github.com/awalsh128/cache-apt-pkgs-action/releases)
- [Commits](https://github.com/awalsh128/cache-apt-pkgs-action/compare/v1.2.4...v1.3.0)

---
updated-dependencies:
- dependency-name: awalsh128/cache-apt-pkgs-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 11:09:21 +04:00
Ilya Lavrenov
2638014d00 Enable build with system version of snappy (#16549)
* Enable build with system version of snappy

* Create Snappy::snappy alias
2023-03-27 10:01:11 +04:00
Maksim Kutakov
0765fa108a [CPU] Debug Caps build fix (#16536) 2023-03-27 07:28:43 +02:00
Ilya Lavrenov
3f3bda592b Revert "[MO] remove deprecated: data_type, disable_nhwc_to_nchw, tensorflow_use_custom_operations_config (#16394)" (#16555)
This reverts commit 43ef89e625.
2023-03-27 09:04:41 +04:00
Sergey Shlyapnikov
ce67ac09d3 [GPU] Disable OneDNN primitive cache (#16525) 2023-03-26 23:29:47 +04:00
Maksim Kutakov
ab151fd357 [CPU] Temporal object access fix (#16546) 2023-03-26 22:36:14 +04:00
Mateusz Tabaka
1df14c6a6c Add docs for OPENVINO_FRAMEWORK_MAP macro (#14928)
* Add docs for OPENVINO_FRAMEWORK_MAP macro

Ticket: 98762

* Apply suggestions from code review

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>

---------

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>
2023-03-26 22:00:43 +04:00
Rajat U Krishna
6eb8f4b2b7 [Docs][PyOV] Fix broken link to section (#16553)
* [Docs][PyOV] Minor change to fix a broken link in code_examples.md
2023-03-26 21:58:52 +04:00
Pavel Esir
43ef89e625 [MO] remove deprecated: data_type, disable_nhwc_to_nchw, tensorflow_use_custom_operations_config (#16394)
* removed deprecated MO options: data_type, disable_nhwc_to_nchw, tensorflow_use_custom_operations_config

* fix layer_test_class.py

* data_type -> precision in layer_test_class.py

* typo fix

* corrected layer tests for compress_to_fp16 argument
2023-03-26 21:38:15 +04:00
Andrew Kwangwoong Park
2956717118 [GPU] Added shape agnostic TopK kernel (#16161)
* [GPU] Added shape agnostic TopK kernel implementation

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Update kernel to use internal buffers for shape agnostic kernel

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add WA to compile_graph for shape agnostic arg_max_min_axis with non-const k input

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix is_dynamic parameter for FillCLKernelData for the case where the output shape is static

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix corner case where inbuf size becomes 0 when ops_size is 1

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-25 22:32:17 -07:00
guozhong wang
60ab7490bf Implement CTPUT in AUTO code logic (#16220)
* Implement CTPUT in AUTO code logic

* Add logic to handle device loading failure

* add some code comments

* fix warning: conversion from size_t to int

* Updated code according to comments of bell and wanglei

* the preferred device code path need to be updated with ctput also

* add fallback logic for CTPUT

* Modify the code logic according to bell suggestion

* Add prints for debugging bug

* throw exception when no device to run pipeline task

* initialize idleWorkerRequest for CTPUT

* fix getting properties

Signed-off-by: fishbell <bell.song@intel.com>

refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix warning

Signed-off-by: fishbell <bell.song@intel.com>

* fix illegal character on windows

Signed-off-by: fishbell <bell.song@intel.com>

* fix illegal character

Signed-off-by: fishbell <bell.song@intel.com>

add missing include

Signed-off-by: fishbell <bell.song@intel.com>

* more code refine

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: fishbell <bell.song@intel.com>
2023-03-26 12:35:26 +08:00
Ilya Lavrenov
e66b837104 Properties improvements: part 2 (#16489)
* Properties improvements: part 2

* Accurate configs handling in HETERO / BATCH

* Align plugins in caching properties

* Fixed caching mock tests

* Added new TestNoCachingProperties test

* Fixed test

* Added ov::caching_properties to API 1.0 metrics as well

* Fixes for HETERO plugin

* Fixed tests

* Even more refactoring in HETERO plugin config management
2023-03-25 19:28:05 +04:00
Fang Xu
a96da994ec Update prebuilt oneTBB2021.2.1 (#16548)
* update prebuilt oneTBB2021.2.1

* modify tbb and tbb component installation

* modify the implementation of removing soft links

* update prebuilt oneTBB2021.2.1
macos: 11.4
windows: win10+visual studio 2019(MSVC 14.21)
https://github.com/open-mpi/hwloc/archive/refs/tags/hwloc-2.8.0.tar.gz
https://github.com/open-mpi/hwloc/archive/refs/tags/hwloc-2.8.0.zip
https://github.com/oneapi-src/oneTBB/archive/refs/tags/v2021.2.1.tar.gz(commitid:96af5d3)
https://github.com/oneapi-src/oneTBB/archive/refs/tags/v2021.2.1.zip(commitid:96af5d3)

before building oneTBB 2021.2.1, replace all strings "2_4" of the source code with "2_5"

for windows, after compilation, replace all strings
INTERFACE_COMPILE_DEFINITIONS "\$<\$<CONFIG:DEBUG>:TBB_USE_DEBUG>" to INTERFACE_COMPILE_DEFINITIONS "\$<\$<CONFIG:DEBUG>:TBB_USE_DEBUG>;__TBB_NO_IMPLICIT_LINKAGE=1"
in cmake file "%cd%\install\lib\cmake\TBB\TBBTargets.cmake"
2023-03-25 08:46:43 +00:00
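The oneTBB build note above instructs replacing all "2_4" strings with "2_5" in the sources before building. A minimal hedged sketch of that patch step; the file name and contents below are illustrative stand-ins, not actual oneTBB sources:

```shell
# Hedged sketch of the pre-build patch described in the commit message:
# replace every "2_4" string with "2_5" before building oneTBB 2021.2.1.
# The file below is an illustrative stand-in for a real source file.
set -e
workdir=$(mktemp -d)
printf '#define __TBB_BINARY_VER "2_4"\n' > "$workdir/version_info.h"
# Patch every file under the tree that contains the old version tag.
grep -rl '2_4' "$workdir" | xargs sed -i 's/2_4/2_5/g'
cat "$workdir/version_info.h"
```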
Ilya Lavrenov
580b99c99b Align plugins in caching properties (#16528)
* Align plugins in caching properties

* Fixed caching mock tests

* Added new TestNoCachingProperties test

* Fixed test

* Added ov::caching_properties to API 1.0 metrics as well
2023-03-25 00:26:18 +04:00
Taylor Yeonbok Lee
6a25143045 [GPU] Prevent memory reset at runtime allocation for dynamic shape, fix wrong padding handling (#16351)
* Prevent memory reset at runtime allocation for dynamic shape

* Set default alloc to reset mem

* Additional fixes :
- If there are any convolution/deconvolution users which require padded input, enqueue a buffer reset when reusing the buffer.
- Removed cl finish from gpu_buffer::fill. (It should be waited on only when needed; otherwise sync is done by event.)
- Removed buffer reset from on_execute of nonzero count, which is not needed any more.

* Remove unused API

* Fix tensor offset to project the padding

* Added unittest

* Applied review comment
2023-03-24 13:10:33 -07:00
Ekaterina Aidova
1ef94ec069 [PT FE]: support aten::linalg_vector_norm (#16109)
* [PT FE]: support aten::linalg_vector_norm

* more norm ops

* update tests
2023-03-24 21:00:17 +01:00
Ilya Lavrenov
18df64c135 More accurate hwloc finding in case of dynamic tbbbind (#16488) 2023-03-24 19:42:20 +01:00
Sebastian Golebiewski
81b4666632 Update tutorials (#16544) 2023-03-24 18:17:36 +01:00
Ekaterina Aidova
179403ddc9 [PT FE]: improve integration into mo.convert_model (#16243) 2023-03-24 16:55:07 +01:00
Karol Blaszczak
953a166a62 [DOCS] minor fix for content and config (#16538) 2023-03-24 14:33:04 +01:00
Ivan Tikhonov
5a8a195dad TransposeSinking: add support for Slice and Reshape ops (#16208)
* Resolve the performance issues in TransposeSinking transformation

* codestyle

* fix warning as error, fix tests failures

* fix ts for Concat and Reduce

* Fix TransposeReduceBackward

* fix the issue in TransposeFuse transformation

* fix TransposeReduce transformations

* Fix TransposeReduction, fix TransposeSinkingSplit, add unsqueeze support

* delete debug print

* Add additional validations

* fix node validation

* Fix validate for split, revert changes for concat, add BatchToSpace/SpaceToBatch

* Add SpaceToBatch/BatchToSpace

* fix TS for Interpolate + codestyle

* fix gna build

* Support TS for Interpolate, VariadicSplit, IsInf, IsNan, IsFinite + refactoring

* add the missed line

* add include

* TransposeSinking tests refactoring: part1

* TransposeSinking tests refactoring: part2

* Add limited support for StridedSlice op

* codestyle

* TransposeReduction: skip the case when 2nd input for Squeeze is not provided

* Transpose sinking tests refactoring: part 3. + Revert changes in MOC.

* fix build

* codestyle

* Add tests for TS backward transformations, update TransposeSinkingFuse transformation, delete StridedSlice transformation prototype + tests refactoring

* fix unary tests

* Fix warning as error on Windows

* Add new tests for Unsqueeze/Squeeze; refactoring; remove debug code

* TransposeSinking: add support for Slice op

* Add descriptions to the transformations, add additional checks

* fix a warning

* TransposeSinking refactoring part 2: move the transformations to a separate folder, align namespaces

* TransposeSinking refactoring: class names, namespaces

* codestyle

* resolve merge conflicts

* codestyle

* TSReduction refactoring, move Unsqueeze/Squeeze transformations to separate files, added limited support for Reshape op + tests

* fix minor mistakes

* fix warnings

* Added TSSlice transformation to TSGeneral, created TransposeSinkingGeneral alias in ov::pass namespace

* refactoring

* codestyle

* fix TSSqueeze/TSUnsqueeze transformations

* delete debug serialize

* remove TransposeSinking from MOC

* fix TSSqueeze/TSUnsqueeze transformations in case of Reshape op

* delete debug code

* codestyle

* fix unit tests, revert changes for TSSlice transformation

* fix TSSqueeze transformation

* resolve review comments

* codestyle
2023-03-24 17:01:15 +04:00
Georgy Krivoruchko
c5b348dd4f [POC][TF FE] Support SavedModel format (with compression) (#16317)
* Added Saved Model proto descriptors

* Included Google's protobuf repository

* Added wstring version of ov::util::directory_exists

* Added initial implementation of Saved Model iterator

# Conflicts:
#	src/frontends/tensorflow/src/frontend.cpp

* Added missing proto files to repository

* Implemented reading of variables index and data files

# Conflicts:
#	src/frontends/tensorflow/src/frontend.cpp

* Renamed class

# Conflicts:
#	src/frontends/tensorflow/src/frontend.cpp

* Fix for cross-platform directory_exists

* Fixed codestyle and simplified code

* CI fixes

* Separated Saved Model iterator from Proto iterator

* Moved variables index into separate class

* Added initial implementation of reading a variables from
saved model

# Conflicts:
#	src/frontends/tensorflow/src/frontend.cpp

* Added external variable mapping

* Code cleanup

* Commit is for discussion purposes!!!
Implemented RestoreV2 with a workaround for strings
Not optimized, includes mem leak

* In progress...

* Added DT_STRING coverage into decoder_proto

* m_variables_index moved into underlying class

* Updated copyrights, added space between license and code

* Moved string constant to separate class

* Added AssignVariableOp operation

* Changed behavior of RestoreV2
Updated stubs for other ops

* Second working implementation, enabled:
Program-only models
Variables reading from data files

* Extended docs

* Fixed dynamic type

* Fixed naming

* Added Snappy submodule to support compression in TF FE

* Enabled Snappy Compression for TF FE

* Make static linkage of Snappy
Changing Warning as error behavior for 3rd party

* CI fixes

* Added Snappy copyright info

* Aligned behavior of StringConstant with UnsupportedConstant

* Added correct naming and removing unused inputs/outputs
2023-03-24 15:07:16 +04:00
Ilya Churaev
9eab122952 Disable QSpectre flag by default (#16526) 2023-03-24 13:55:42 +04:00
Ilya Churaev
077d0e43f2 Fixed Windows warnings for core (#16523) 2023-03-24 09:34:06 +00:00
Andrei Gorbachev
cabb917b1f [GPU] Fix warnings (#16516)
* fix a few warnings

* cast size_t to uint32_t
2023-03-24 13:26:24 +04:00
Maxim Vafin
86c4489aca [PT FE] Add telemetry extension support (#16438)
* Initial telemetry introduction in PyTorch frontend

* Add test

* remove obsolete checks from test

* Move statistics gathering into TranslateSession

* Fix code style

* Fix codestyle
2023-03-24 10:11:12 +01:00
Nadezhda Ageeva
65e5ed7dd7 [HETERO]: support caching properties (#16451)
* Fixed build

* [HETERO]: support caching properties

* Fix caching test

* Code style

* Change result type from map to vector

* Review comments

---------

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-03-24 12:25:55 +04:00
Luo Cheng
16933efc06 [CPU] Enable brgconv primitives with binary post-ops by default on AVX512+ ISA (#16286) 2023-03-24 11:31:00 +04:00
Shen, Wanglei
613b66ba35 include nireq during streams calculation (#16378)
* include nireq during streams calculation

* update description for comments

* update description
2023-03-24 15:27:13 +08:00
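The entry above folds the requested number of infer requests (nireq) into the stream-count calculation. A minimal sketch of that idea — the formula is illustrative, not the plugin's exact logic:

```python
# Hedged sketch: cap the number of execution streams by the number of infer
# requests the application will actually create, so idle streams are not
# allocated. This is an illustration, not the CPU plugin's real algorithm.
def calc_streams(optimal_streams: int, nireq: int) -> int:
    # Never exceed nireq, but always keep at least one stream.
    return max(1, min(optimal_streams, nireq))

print(calc_streams(8, 4))  # -> 4
```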
Roman Kazantsev
3f4b1e8205 [TF FE] Post leftovers to support the MUSE model in SavedModel format (#16520)
* [TF FE] Post leftovers to support the MUSE model in SavedModel format

It contains tests imitating a case with Tokenizer extension and raised problems:
setting custom type for body graph Parameter, named ports for RaggedTensorToSparse
and Unique operations.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update src/frontends/tensorflow/tests/convert_tricky_models.cpp

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-24 11:07:56 +04:00
Ilya Churaev
fbdd158615 Small fixes for template plugin developer documentation (#16521) 2023-03-24 10:29:09 +04:00
Fang Xu
025115f695 Update prebuilt tbbbind static library for Linux (#15832)
* update prebuilt tbbbind static library

* update LICENSE file

* update SHA256

* update prebuilt tbbbind static library for linux

cmake 3.23.2
centos7

https://github.com/open-mpi/hwloc/archive/refs/tags/hwloc-2.8.0.tar.gz
./autogen.sh
./configure --enable-static --disable-io --disable-libudev --disable-libxml2 --disable-cairo CFLAGS="-fPIE"
make -j$(proc)
make install prefix=$(pwd)/install

https://github.com/oneapi-src/oneTBB/archive/refs/tags/v2021.7.0.tar.gz
sed -i "s/APPLE\s*OR\s*NOT\s*BUILD_SHARED_LIBS/APPLE/g" CMakeLists.txt
export HWLOC_DIR="${hwloc_root_dir}/hwloc-hwloc-2.8.0/install"
export PKG_CONFIG_PATH="${HWLOC_DIR}/lib/pkgconfig"
export CXXFLAGS="-I ${HWLOC_DIR}/include -L ${HWLOC_DIR}/lib"
~/cmake-3.23.2-linux-x86_64/bin/cmake -DTBB_TEST=OFF -DTBB_BUILD=OFF -DTBBMALLOC_BUILD=OFF -DBUILD_SHARED_LIBS=OFF
make -j$(nproc)

* remove changes for windows

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-03-24 14:27:57 +08:00
Zlobin Vladimir
69cec4a5e2 py/benchmark_app: fix -hint (#16511)
* py/benchmark_app: fix -hint

Don't warn about values which are explicitly set in -hint.
That aligns C++ and Python implementations.

Ticket 106544

* Remove extra throw

* Fix code style
2023-03-24 10:24:08 +04:00
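The fix above stops benchmark_app from warning about values the user set explicitly alongside -hint. A hedged illustration of the pattern — warn only when an option was left at its default; the option names are assumptions, not benchmark_app's real interface:

```python
import argparse

# Illustrative sketch of the described behavior: emit a warning about a
# performance option only when the user left it at its default, never when
# it was set explicitly together with -hint. Names here are assumptions.
parser = argparse.ArgumentParser()
parser.add_argument("-hint", default="throughput")
parser.add_argument("-nstreams", default=None)
args = parser.parse_args(["-hint", "latency", "-nstreams", "2"])

warnings = []
if args.nstreams is None:
    # Only a defaulted value triggers the note; an explicit one is respected.
    warnings.append("-nstreams left at default; -hint controls streams")
print(warnings)  # -> []
```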
Kelvin Choi
8518a3a8e8 [GPU] Disable converting gather8 to 7 pass because GPU plugin supports the gather8 negative-index feature (#15868)
* Add GatherV7 and gatherV8 for convert_gather_0d pattern

* Add updating output_shape using reorder/reshape for scalar indices instead of using ConvertGather0D pass

* Add WA for NMS-gather8 pattern
2023-03-23 23:12:12 -07:00
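The commit above keeps Gather-8 intact because the plugin handles its negative-index semantics (indices counting from the end of the gathered axis). A minimal pure-Python illustration of that semantics, unrelated to the GPU plugin code itself:

```python
# Gather-8 allows negative indices along the gathered axis; Python list
# indexing already wraps negatives the same way, so this tiny helper is a
# faithful sketch of the semantics (not of any plugin implementation).
def gather(data, indices):
    return [data[i] for i in indices]

print(gather([10, 20, 30, 40], [0, -1, -2]))  # -> [10, 40, 30]
```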
Sergey Shlyapnikov
e434c320f5 [GPU] Update tuning params of shape agnostic version of fully_connected_bf_tiled kernel for dGPUs (#16482) 2023-03-24 09:08:09 +04:00
Georgy Krivoruchko
7601e8a874 Added compilation flag (#15436) 2023-03-24 07:09:29 +04:00
Tomasz Dołbniak
1b89ecdbae Interpolate v11 usage in ONNX FE (#16463) 2023-03-24 02:06:22 +00:00
Maxim Vafin
abaf61d059 Improve detectron2 support (#16011)
* Improve op support for detectron mask rcnn

* Initial commit

* Fix for reading processed list

* Format code

* Cleanup

* cleanup

* Cleanup

* cleanup test

* Add comment

* Add rt_info

* fix type

* More fixes for detectron

* Fix build

* Add tests for if

* Revert changes in index

* Add comment

* Fix test

* Fix get_axes_range

* Add tests and fix if type alignment

* Fix code style

---------

Co-authored-by: Mateusz <mateusz.mikolajczyk@intel.com>
2023-03-23 22:30:03 +00:00
Przemyslaw Wysocki
52b27d82c5 Upgrade ONNX to 1.13, protobuf to 3.20.3 and relax tensorflow (#14773)
* Bump ONNX version

* Bump protobuf

* Add xfails and skips

* Add tickets

* Skip ONNX Serialization tests

* Compile ONNX with C++17

* Force cpp17 - 2

* Use MSVC check

* Update tensorflow

* Minor change

* Bump onnx to 1.13.1

* Bump protobuf to 3.20.3

* Debug test tf

* Xfail tests in comp

* Update comp tests

* Update tf reqs

* Remove deprecated ONNX function

* Align PDPD FE protobuf req with 2.4.1

* Satisfy dependency review

* Attempt to fix dependency review

* Revert pdpd protobuf

* Skip pdpd tests

* Fix MO-TF-PB test

* Skip TF test case

* Add ticket numbers, rewrite reqs

* Fix requirements

* Minor change

* Set TF to 2.12

* Remove wrapt and skip test
2023-03-24 00:43:01 +04:00
Paul Youngsoo Ahn
74870f9b0b [GPU] Fix gpu dynamic model multistream test issue (#16510) 2023-03-23 10:51:57 -07:00
Ilya Churaev
2755b32fb9 Changed Template plugin public property (#16496)
* Changed template plugin public property

* Add property documentation

* Fixed comments

* Fixed typo
2023-03-23 16:34:49 +01:00
Tomasz Dołbniak
de0a4e16fb TopK 11 exposed to Python (#16501) 2023-03-23 16:33:54 +01:00
Sebastian Golebiewski
44d6d97871 DOCS shift to rst - OpenVINO 2.0 Deployment (#16509) 2023-03-23 14:47:54 +01:00
Edward Shogulin
fb24e91416 [LPT] NNCF GroupConvolution 5D on weights support (#16336)
* [LPT] NNCF GroupConvolution 5D on weights support

* PullReshapeThroughDequantization rollback
2023-03-23 13:24:10 +00:00
Maksim Kutakov
8a246a8bf2 [CPU] Use Dnnl executor to avoid extra dnnl primitve desc query (#16372) 2023-03-23 16:25:39 +04:00
Nadezhda Ageeva
3b8d9c568c Allow skip LoadNetworkToDefaultDeviceNoThrow tests (#16507) 2023-03-23 16:09:13 +04:00
Irina Efode
448654ea65 [CONFORMANCE] Fix report generation in case of mixed reports: rel and abs (#16505) 2023-03-23 15:08:18 +04:00
Sebastian Golebiewski
c89da1aee2 DOCS shift to rst - Install OpenVINO on macOS, Raspbian (#16506) 2023-03-23 12:02:01 +01:00
Sofya Balandina
9d0749a5b7 [conformanceTests] Add key to manage pipeline after crashes (#16123)
* [conformanceTests] Add key to manage pipeline after crashes

* Move crash_handler to funcTestsUtils
2023-03-23 14:59:31 +04:00
Mateusz Bencer
a004601774 [ONNX FE] Fix Windows warnings (#16141) 2023-03-23 10:59:00 +01:00
Ilya Churaev
a3958d6ddf Use evaluation context for the inference (#16492) 2023-03-23 13:52:03 +04:00
Anastasia Kuporosova
982e1c1192 [PyOV] Fix issues with RTMap (#15636)
* [PyOV] Fix issues with RTMap

* update year

* some clean-up and items fix

* tests and small fixes

* Update src/bindings/python/src/pyopenvino/utils/utils.cpp

* undo changes

* fix serialization on python side

* rt_info as rt_map

* undo several changes in tests

* fix mo test

* add docstrings

* add tests

* fix codestyle

* try to fix win

* fix master

* apply comments
2023-03-23 10:29:32 +01:00
Edward Shogulin
087b10ff00 Snippets: precision propagation (#14996) 2023-03-23 13:16:04 +04:00
Sebastian Golebiewski
5fa95ff19d DOCS shift to rst - Protecting Deep Learning Model (#16474) 2023-03-23 10:12:13 +01:00
Sebastian Golebiewski
66ae71454a DOCS shift to rst - Install OpenVINO on Windows (#16502) 2023-03-23 10:09:43 +01:00
Roman Kazantsev
aaa4a4c210 [TF FE] Skip Assert operation and add test (#16484)
At the conversion stage we can't resolve Assert node because the condition
is computed only during inference time.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-23 11:49:46 +04:00
Maciej Smyk
17174a3839 DOCS shift to rst - Troubleshooting (#16483)
* troubleshooting
* code-block fix
2023-03-23 08:39:46 +01:00
Jade Cho
a20b3631fb Support float64 data type as input of benchmark_app (#16435) 2023-03-23 13:55:55 +09:00
Ilya Churaev
a205c675db Fix leftovers after removing plugins.xml (#16487)
* Fixed comments

* Rename ie_plugins to ov_plugins

* Remove dependency from tests
2023-03-23 04:32:36 +00:00
Kelvin Choi
6bf2fe11ae [GPU] Need to exclude fused mem_dep from shape_infer_dep (#16300) 2023-03-22 13:00:29 -07:00
Tomasz Dołbniak
951c5fdae9 Interpolate 11 exposed to Python (#16465) 2023-03-22 18:12:16 +00:00
Yury Gaydaychuk
5290822f8b [CPU] Enabled BatchToSpace and SpaceToBatch with nonconstant inputs support (#16344) 2023-03-22 16:36:05 +00:00
Irina Efode
6ac5e42b62 [CONFORMANCE] Fix if impossible to remove log (#16485)
* fix_reporting

* w/a for remove

* Update merge_xmls.py

remove extra
2023-03-22 20:07:47 +04:00
Tomasz Dołbniak
8eb142ca6e Interpolate v11 -> v4 downgrade transformation (#16448) 2023-03-22 17:00:53 +01:00
Ilya Churaev
c23a1170ba Remove plugins xml (#16470)
* Update core_impl.cpp

Add first implementation of register_compile_time_plugins (needs to depend on the actual CMake configuration as a next step).

* Update core.cpp

Check for missing plugins.xml

* Update core_impl.cpp

Avoid exception for missing plugins.xml

* Update core_impl.hpp

Add register_compile_time_plugins function definition

* Plugin loading based on CMake configuration

* Remove debug output command

* Unify static/dynamic plugin loading

* Add CMake option for plugins.xml that defaults to off

* Move GENERATE_PLUGINS_XML option to features.cmake

* Add missing brace

* Remove unnecessary #ifdef check

* Prepare to resolve conflicts

* Fix compile error

* Activate generation of plugins.xml in OpenVINODeveloperPackageConfig.cmake

* Fix CMake installation

* Plugin loading logic implemented in ie_core.cpp as well

* Fix format

* Small fixes

* Fixed code style

* Skip if xml file wasn't found

* Added function to find compiled plugins

* Generalize plugins hpp

* Use new API

* Fixed old core

* Fixed static build

---------

Co-authored-by: CSBVision <bjoern.boeken@csb.com>
2023-03-22 15:51:07 +00:00
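The entry above replaces runtime plugin discovery via plugins.xml with compile-time registration. For context, a hedged sketch of the kind of mapping the old XML path produced; the file content here is illustrative, not an exact OpenVINO plugins.xml:

```python
import xml.etree.ElementTree as ET

# Illustrative plugins.xml fragment of the format whose runtime parsing this
# change makes optional; plugin names and library paths are stand-ins.
xml_text = """<ie><plugins>
  <plugin name="CPU" location="libopenvino_intel_cpu_plugin.so"/>
</plugins></ie>"""

# The old path built a name -> shared-library map from the XML at runtime;
# the change bakes an equivalent map in at build time instead.
plugins = {p.get("name"): p.get("location")
           for p in ET.fromstring(xml_text).iter("plugin")}
print(plugins)
```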
Jan Iwaszkiewicz
4561aa7109 [PyOV] OVDict class - new return value from inference (#16370) 2023-03-22 16:12:07 +01:00
Xuejun Zhai
8509d0dd82 [Deprecated API] remove version (#16426)
* [Remove version] Remove version from py openvino

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Modify caused by remove version

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix clang format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Revert "Fix clang format issue"

This reverts commit 132787286f.

* Fix CI format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix merge conflict error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-22 16:09:14 +01:00
Ilya Lavrenov
1b72352f6f Fixed CVS-93736 (#16471) 2023-03-22 14:20:03 +04:00
Chen Xu
57c91e0c56 [CPU] Fix issue in reducing HW with small channel size in nspc layout (#16467) 2023-03-22 13:28:38 +04:00
Sebastian Golebiewski
90100451a3 DOCS shift to rst - Libraries for Local Distribution (#16469) 2023-03-22 09:43:44 +01:00
Sebastian Golebiewski
066ef694f5 DOCS shift to rst - Deploying Your Application with Deployment Manager (#16453) 2023-03-22 09:42:47 +01:00
Sebastian Golebiewski
2f69305aa3 DOCS shift to rst (#16445) 2023-03-22 09:41:59 +01:00
Sebastian Golebiewski
14e70e76fb DOCS shift to rst - Further Low-Level Implementation Details (#16444) 2023-03-22 09:39:32 +01:00
River Li
232c802e07 [CAPI] Add ov::hint::execution_mode property (#16466) 2023-03-22 12:18:40 +04:00
Sebastian Golebiewski
cbb25e9483 [DOCS] Proofreading developer documentation moved from wiki. (#15886)
Minor stylistic and grammar corrections. Fixing links

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-03-22 09:08:31 +01:00
hyunback kim
c14e6ef48e [GPU] Use 4dim directly for onednn in gemm (#16182)
* [GPU] Use 4-dim directly for onednn in gemm
   We were collapsing n-dim into 3d for onednn gemm, but it is not necessary up to 4d.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-22 17:08:10 +09:00
Roman Kazantsev
0070e8d939 [TF FE] Fix problems with invalidation of decoders (#16464)
* [TF FE] Fix problems with invalidation of decoders

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix comment

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-22 12:02:59 +04:00
Ilya Churaev
f1c3356cfc Small Plugin DG changes (#16432) 2023-03-22 09:01:16 +01:00
Andrew Kwangwoong Park
04a2c4ce61 [GPU] Add shape agnostic optimized FullyConnectedIMAD kernel (#16417)
* [GPU] Added shape agnostic kernel for fully_connected_gpu_imad

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add fully_connected_gpu_imad shape agnostic TCs for ov_gpu_unit_tests

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Apply comments

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-22 08:38:28 +01:00
Przemyslaw Wysocki
6cfea099d8 [PyOV] Align Python API's attributes and methods between its modules (#15889)
* Complete alignment

* Minor change

* Apply discussion results

* Apply discussion comments

* Clang

* Apply CR

* Code style
2023-03-22 10:22:44 +04:00
Min, Byungil
a71c83d366 [GPU] Resolve eltwise kernel build failure (#16458)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-03-22 15:15:02 +09:00
River Li
a204b04fae fix mem leak (#16456) 2023-03-22 09:45:03 +04:00
Xuejun Zhai
95636f7715 [Unicode API] Add wide char for compiler model APIs (#16180)
* [Unicode API] Add wide char for compiler model APIs

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Avoid duplicated func description

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Add unit test for wstring of compile model

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Clear code

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Add unit tests for other compile model unicode APIs

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Clear log output

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Add parameter of device for compiled model unicode test

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-22 07:35:24 +04:00
Haiqi Pan
5e98696464 Fix Windows build warnings in template and core tests (#15967)
* fix C4305

* 1.0f

* Element

* fix c4244

* fix truncation from double to float in grn.cpp

* Revert "fix truncation from double to float in grn.cpp"

This reverts commit 5263b37cb2.

* fix grn.cpp

* add 4305

* fix low

* add TearDown

* revert softmax.cpp

* pragram

* fix conflicts

* fix conflicts

* size_t -> ov::label_t

* WIN32

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-03-22 07:29:35 +04:00
River Li
d86d94edad [DOC][CAPI] document for remote tensor (#16408)
* [DOC][CAPI] document for remote tensor

* Update

* Update minor

* Update GPU_RemoteTensor_API.md

---------

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
2023-03-22 01:55:51 +04:00
Tingqian Li
b70e56d110 [CPU] Support using BF16 in INT8 models (#15663) 2023-03-21 22:39:25 +04:00
Tomasz Dołbniak
234f36e9b7 TopK v11 usage in ONNX FE (#16449) 2023-03-21 17:23:29 +00:00
Pavel Esir
d8e7b39edb flush by recreating constant (#16430) 2023-03-21 18:05:11 +04:00
Ilya Churaev
85d9c11b97 Fixed build (#16442) 2023-03-21 17:13:20 +04:00
Tomasz Jankowski
0893efe073 [Core] Assure TensorVector comparison uniqueness (#16232)
* Assure TensorVector comparison uniqueness

* Add test

* Make the flow clear
2023-03-21 16:58:34 +04:00
Liubov Talamanova
d402b6ed3e [POT] Return Mul to ignored ops for transformers (except CPU_SPR) (#16407) 2023-03-21 14:53:01 +04:00
Ilya Lavrenov
24ff43aa5b Fixed comparison of iterators (#16428) 2023-03-21 14:16:07 +04:00
Sebastian Golebiewski
8926282ac5 DOCS shift to rst - Multi device execution article (#16400) 2023-03-21 10:57:48 +01:00
hyunback kim
05e54e9f3d [GPU] Update the latest onedNN3.1 (#16381)
- Fix group conv regression issue

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-21 18:56:41 +09:00
Min, Byungil
5d6cd626bc Fix unit test on dGPU (#16295)
* Resolve failed cases and queue-type issue
+ Resolved out_of_order queue-type issue
+ Added get_test_default_config for setting default config of onednn
+ Cleared failed case

Signed-off-by: Min, Byungil <byungil.min@intel.com>
Co-authored-by: tuxedcat <tuxedcat@gmail.com>
2023-03-21 18:55:06 +09:00
Wang, Yang
5af4a8e8d6 Take VPUX out of AUTO default candidate device list (#16037)
* 1. Add device blacklist for AUTO plugin.
2. Update the logic to parse the device candidate list from the input config MULTI_DEVICE_PRIORITIES.
3. Update the corresponding mock test cases.
4. Ignore the GTEST warning for the test cases.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

* Update.

* Update.

* Add description about blacklist.

* Apply suggestions from code review

Update.

Co-authored-by: yanlan song <bell.song@intel.com>

* Update.

* Apply suggestions from code review

Updated.

Co-authored-by: yanlan song <bell.song@intel.com>
Co-authored-by: River Li <river.li@intel.com>

* Update test case.

* Update test case.

* Update test case.

* Update.

* Update.

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: yanlan song <bell.song@intel.com>
Co-authored-by: River Li <river.li@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2023-03-21 17:46:44 +08:00
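The entry above introduces a blacklist so VPUX is excluded from AUTO's default candidate devices parsed from MULTI_DEVICE_PRIORITIES. A minimal sketch of that filtering step — names and format are assumptions, not the plugin's actual config handling:

```python
# Hedged sketch of excluding blacklisted devices from AUTO's candidate list
# parsed out of a MULTI_DEVICE_PRIORITIES-style string (illustrative only).
BLACKLIST = {"VPUX"}
priorities = "GPU,VPUX,CPU"
candidates = [d for d in priorities.split(",") if d not in BLACKLIST]
print(candidates)  # -> ['GPU', 'CPU']
```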
Ilya Churaev
ec0a1e58d1 Fixed some leftovers for 2.0 dev api (#16421)
* Fixed some leftovers for 2.0 dev api

* Fixed build issue
2023-03-21 09:34:37 +00:00
Maxim Vafin
7d56c75d65 Fix MO Reader for Squeeze without axes (#16398)
* Fix MO Reader for Squeeze without axes

* Fix style

* Update tools/mo/openvino/tools/mo/utils/ir_reader/internal_ops/squeeze.py
2023-03-21 10:28:58 +01:00
Pawel Raasz
63797db257 Review ROIPooling class for shape inference aspects (#16403)
* Review ROIPooling class
- check interval shape and label propagation
- add template shape_infer
- add shape infer into cpu plugin
- add test with StaticShape

* Use get_output_roi instead of get_output_size

* Add missing includes
2023-03-21 09:02:37 +00:00
Roman Kazantsev
82a992b95d [TF FE] Fix leftovers from code review (#16422)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-21 08:31:10 +00:00
Ilya Churaev
60436dee5a Updated AsyncInferRequest documentation + leftovers (#16420) 2023-03-21 10:52:45 +04:00
Roman Kazantsev
5cb20f8858 [TF FE] Refactor StridedSlice translator and add layer test to precommit (#16376)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-21 08:54:48 +04:00
Vladimir Paramuzov
98237b06b5 [GPU] Update memory_statistics property impl (#16399) 2023-03-21 08:52:52 +04:00
Maxim Vafin
7f8786d9aa [PT FE] Make NodeContext constant inside conversion rules (#16165)
* Make NodeContext constant inside conversion rules

* Use shared_ptr

* Fix ptr

* Fix logical not
2023-03-20 22:08:24 +01:00
Karol Blaszczak
4ffecce63f {DOCS} shift to rst - benchmarks (#16354) 2023-03-20 19:22:55 +01:00
Tomasz Dołbniak
d9c70dbce3 Explicit scales for all axes in interpolate op test (#16404) 2023-03-20 16:50:47 +00:00
Ilya Churaev
9c69e2f694 Added documentation for RemoteTensor and RemoteContext (#16391)
* Added documentation for RemoteTensor and RemoteContext

* Fixed documentation build

* Fixed some build issues
2023-03-20 20:45:40 +04:00
Xuejun Zhai
73bedced87 Remove RunLocker class (#16387)
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-20 20:26:51 +04:00
Przemyslaw Wysocki
1e512af105 Skip test_mixed_dynamic_infer on ARM devices (#16397)
* Skip test

* Minor fix

* Minor update
2023-03-20 15:23:12 +00:00
Daria Mityagina
1c7b6a7b2a [VPUX] - Tensor data with element type f16, is not representable as pointer to i16 (#16379)
* [VPUX] - Tensor data with element type f16, is not representable as pointer to i16

* [VPUX] - Tensor data with element type f16, is not representable as pointer to i16
2023-03-20 14:54:27 +00:00
Ekaterina Aidova
71167df234 [PT FE]: enable dtype in softmax and constant filling, extend logical ops support (#16276) 2023-03-20 14:12:12 +00:00
Roman Kazantsev
86f0285db2 [TF FE] Support dynamic type for all operation translators (#16380)
* [TF FE] Support dynamic type for all operation translators

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add test of conversion with dynamic type

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-20 17:41:45 +04:00
Evgenya Stepyreva
ecc2f13dd2 CVS-105549 TFL OneHot renamed (#16388)
We rename OneHot to fit the requirements of the tensorflow common library.
2023-03-20 14:35:02 +01:00
Ilya Lavrenov
0c99135d44 Improved properties handling between Core and plugins (#16296)
* [HETERO]: adopt setting device properties in benchmark_app/speech_sample for HETERO

Fix IEClassHeteroExecutableNetworkGetMetricTest_SUPPORTED_METRICS test

Fix NumStreamsAndDefaultPerfHintToHWTest/PerHintAndDefaultPerfHintToHWTest tests

[HETERO][MULTI][AUTO] Make ov::device::properties regular property

[PYTHON] Update python BA with device properties

Update after rebase

Update src/plugins/auto/auto_executable_network.cpp

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

Update src/plugins/auto/multi_executable_network.cpp

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

Fix merge conflicts, apply some review comments

* Multiple improvements

* Code style, bugfix after merging improvement

* More improvements

* Even more improvements

* Commit changes in core_impl.cpp

* Added parsing of any maps

* Fixed code-style

* Fixed AB mock tests build

* Fixed comparison

* Added new AB config key

* Improvements and fixes (#147)

* Fix BA, fix GetSupportedConfig call for virtual plugins (#148)

* Fix GPU tests (#149)

* Fix BA, fix GetSupportedConfig call for virtual plugins

* Fix GPU tests

* Code style

* Improvements 10

* Fixed incorrect tests

* Revert removal cache_dir

* Revert removal cache_dir

* Fixed clean

* Supported device ID in CPU

* More fixed tests

* clang-format

* Fix legacy GPU tests (#150)

* Removed clone_map

* clang-format

* Added clone_map back

---------

Co-authored-by: Nadezhda Ageeva <nadezhda.ageeva@intel.com>
Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2023-03-20 12:42:40 +00:00
Sebastian Golebiewski
8b31e3aafe DOCS shift to rst - Quantizing models article (#16260) 2023-03-20 13:13:05 +01:00
Sebastian Golebiewski
c5f65eea73 DOCS shift to rst - Tensorflow Frontend Capabilities and Limitations (#16392) 2023-03-20 12:46:11 +01:00
Sebastian Golebiewski
083596e285 DOC shift to rst - Optimize Inference article group (#16382) 2023-03-20 12:43:04 +01:00
Sebastian Golebiewski
0f4c96e96e DOCS shift to rst - Python API Exclusives (#16390) 2023-03-20 12:40:39 +01:00
Sebastian Golebiewski
76e60ff258 DOCS shift to rst - Performance Hints (#16386) 2023-03-20 12:39:31 +01:00
Maciej Smyk
350f8fd95b DOCS shift to rst - Media Processing and CV Libraries (#16343)
* rst shift
2023-03-20 12:37:08 +01:00
Pavel Durandin
4a4f06ba3b GPU documentation update (#16393)
* GPU documentation update

* GPU documentation update
2023-03-20 14:27:13 +04:00
Roman Kazantsev
997414c64d [MO][TF FE] Do not print TF FE message in case of fallback (#16384)
* [MO][TF FE] Do not print TF FE message in case of fallback

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct test model with Switch and Merge

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-20 10:06:45 +00:00
Ekaterina Aidova
f39684a7f8 [PT FE]: support mixed precision in floor divide (#16362)
* [PT FE]: support mixed precision in floor divide

* Update floordiv.cpp
2023-03-20 11:00:26 +01:00
Maksim Kutakov
4bf5a77ac9 [CPU] Lazy oneDNN memory object creation (#12972) 2023-03-20 13:56:28 +04:00
Mateusz Mikolajczyk
134ebb8889 [PT FE] Add aten::new_empty (#16312)
* Add new_empty

* Remove duplicated code for new_empty
2023-03-20 10:03:33 +01:00
Ilya Churaev
c472b020b7 Added documentation for InferRequest (#16350)
* Added documentation for InferRequest

* Updated documentation for methods

* Fixed doc
2023-03-20 13:02:15 +04:00
mei, yang
4411a6ea45 WA to resolve conflict between paddlepaddle and protobuf 3.20.3 (#16315) 2023-03-20 12:50:49 +04:00
Vladimir Paramuzov
a46fc47e6a [GPU] Enable tile with dynamic input (#16364) 2023-03-20 12:49:35 +04:00
Roman Kazantsev
3f06a9b6fb [TF FE] Test MatrixDiag translator in the pre-commit (#16377)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-20 12:20:13 +04:00
Chen Peter
bd62da9ffe Update prebuilt tbbbind static library for Windows (#16138)
Visual Studio 2019
MSVC version:"14.20.27508" 

https://codeload.github.com/open-mpi/hwloc/zip/refs/tags/hwloc-2.8.0
"hwloc.sln" configuration "ReleaseStatic" and platform "x64"

https://codeload.github.com/oneapi-src/oneTBB/zip/refs/tags/v2021.7.0

In CMakeLists.txt at line 226, modify the content "if (APPLE OR NOT BUILD_SHARED_LIBS)" to "if (APPLE)"

cmake -G "Visual Studio 16 2019" -A x64 -DBUILD_SHARED_LIBS=OFF
-DTBB_DISABLE_HWLOC_AUTOMATIC_SEARCH=ON -DTBB_TEST=OFF -DTBB_BUILD=OFF
-DTBBMALLOC_BUILD=OFF -DTBBMALLOC_PROXY_BUILD=OFF
-DCMAKE_HWLOC_2_5_LIBRARY_PATH=hwloc-hwloc-2.8.0\contrib\windows\x64\ReleaseStatic
-DCMAKE_HWLOC_2_5_INCLUDE_PATH=hwloc-hwloc-2.8.0\hwloc-hwloc-2.8.0\include
-DCMAKE_HWLOC_2_5_DLL_PATH=hwloc-hwloc-2.8.0\contrib\windows\x64\ReleaseStatic
..

cmake --build . --config release

Signed-off-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Fang Xu <fang.xu@intel.com>
2023-03-20 11:56:46 +04:00
Ilya Churaev
7bce07a46e Introduce IRemoteContext (#16293)
* Introduce IRemoteContext

* Introduce template implementation remote tensor

* Change plugin API for abstracted remote context

* Added remote tests

* Fixed some comments

* Fixed comment

* Try to fix build

* Added core dev to plugin API

* Revert "Try to fix build"

This reverts commit 762276383d.

* Move make tensor logic to separate file

* Fixed code style

* Fixed blob allocation

* Fixed comments

* Fixed merge issue
2023-03-20 07:17:36 +00:00
Anastasia Kuporosova
8bfb6afd6a [PyOV] Remove deprecated API (#16361) 2023-03-20 10:47:06 +04:00
Chen Xu
a001f84cba [CPU] Gather node shape infer (#16168) 2023-03-20 10:30:02 +04:00
Ilya Churaev
2739a01d64 Update Variable State doc (#16358)
* Update Variable State doc

* Fixed build

* Try to fix build

* Remove error

* Fixed doc

* Fixed links

* Try to fix doc
2023-03-20 10:14:06 +04:00
Min, Byungil
bc15596c9e Remove redundant reorder (#15661)
+ Reorder 1d data
+ Reorder which only changes format

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-03-20 14:58:09 +09:00
Oleg Pipikin
afa61ed3ec Fix ir10 inputs outputs serialization order (#16357)
* Change IRv10 serialisation to not reorder parameters and results

* Add tests

* Fix1
2023-03-20 06:57:19 +04:00
hyunback kim
4f49d0e07e [GPU] enable dumpgraph in unit-test (#15388)
Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-20 11:51:51 +09:00
Wang Wangwang
9c7f7b8338 Runtime fallback to other devices (#16015)
* Runtime fallback to other devices

* Update properties.hpp

* Update infer callback in AUTO

* Avoid some hang cases

* Add test cases for AUTO runtime fallback

* Replace mockExecutor with ImmediateExecutor

* Update the runtime fallback logic

* Update test case and support the case where inference fails on CPU_HELP

* Update the test to detect whether to throw exception

* fix the error of CTPUT

* Add lock to AUTO executable network GetContext

* Update variable name in selectOtherDevice API

* Simplify variables and add testcase to improve test coverage

* Fix the issues when release CPU_HELP device and clean up the code

* Clean up code
2023-03-20 10:13:07 +08:00
Ilya Churaev
b2a2266f60 Implement VariableState support for the Template plugin (#16356)
* Implement VariableState support for the Template plugin

* Suppress some warnings

* Try to fix Windows
2023-03-17 22:10:47 +04:00
Ilya Churaev
e6ceed0bb9 Add documentation for compiled model (#16332)
* Files renaming

* Updated CompiledModel documentation

* Fixed typo

* Fixed comments

* Fix comments and try to fix the documentation

* Fixed execution devices
2023-03-17 21:59:39 +04:00
Tomasz Dołbniak
6889101415 Axes bugfix (#16365) 2023-03-17 17:49:46 +00:00
Sebastian Golebiewski
d189df169c Update tutorials (#16368) 2023-03-17 17:15:32 +01:00
Egor Duplenskii
a754473689 [CPU][TESTS] Use run_on_model instead of run_on_function (#16359) 2023-03-17 14:18:09 +00:00
Tomasz Dołbniak
a99a5057e2 TopK v11 -> v3 downgrade transformation (#16339) 2023-03-17 12:40:56 +00:00
Vladislav Golubev
249d57f37e [Transformations] CompressQuantizeWeights: fp16 weights support (#16323)
* [Transformations] CompressQuantizeWeights: fp16 weights support

* Code style fix

* Code style fix
2023-03-17 14:43:32 +04:00
bstankix
d7c88fd694 Rebuild graph rendering (#16321)
* Bugfix and restyle graphs rendering
2023-03-17 11:36:46 +01:00
Egor Duplenskii
e1e44d6bac [CPU] Move to oneDNN v3.1 (#15918) 2023-03-17 14:17:55 +04:00
Xuejun Zhai
a9bd5f741d Xuejun/remove api model (#15924)
* [Remove APIs] remove api m_transformation_callback

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove api run_on_function(), replaced by run_on_model()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove set_callback(), use get_pass_config() to configure transformation pipeline

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove api add_matcher()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] Fix review comments

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix format issue

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix merge master error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI compiler error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Update ONNX Runtime from rel-1.8.1 to rel-1.14.0

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Revert "Update ONNX Runtime from rel-1.8.1 to rel-1.14.0"

This reverts commit e31a9e04b7.

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-17 10:44:28 +04:00
Vladimir Paramuzov
bb59672639 [GPU] Fixed shape agnostic scatter nd update kernel (#16319) 2023-03-17 09:57:25 +04:00
Wilson Seok
c5ccb3e954 add condition for activation ceil so it works when data type is fp32 or fp16 only (#16334) 2023-03-17 11:46:44 +09:00
hyunback kim
8d1139b61a Fix unet3d mlperf dump (#16253)
* Enable dump in unet3d_mlperf

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-17 10:51:40 +09:00
Pawel Raasz
cb241a8e4a [Core] Non constant support for b2s and s2b nodes (#16290)
* Fix inference for non-const inputs for operators:
- batch to space
- space to batch

* Evaluate of b2s, s2b supports all parameter inputs
- update template plugin test to use parameters instead constants
2023-03-16 20:22:03 +01:00
Irina Efode
0ee8c966b2 [CONFORMANCE] Fix API report (#16338) 2023-03-16 22:15:51 +04:00
Andrew Kwangwoong Park
e4500c7d61 [GPU] Fixes for dynamic model in dGPU (#16298)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-16 11:09:55 -07:00
Denis Orlov
6ffa8da922 Fix documentation (md and inline) for C++ and Python speech samples (#16185)
* Fix documentation (md and inline) for C++ and Python speech samples

* Fix clang-format

* Minor fix

* Fix clang-format

* Fix a typo

* Fix according to Mike's review

* Fix clang-format
2023-03-16 15:44:12 +00:00
Tomasz Dołbniak
6762fe692d Interpolate-11 spec + core op (#16162) 2023-03-16 14:37:57 +01:00
Tatiana Savina
8a7956e3cb DOCS shift to rst transformations (#16269)
* transformations to rst

* fix snippets

* fix links

* add sphinx directive

* change img path

* fix snippet path

* fix link

* fix anchor

* fix transformation image

* fix reference

* fix reference anchor

* fix matcher pass link
2023-03-16 13:57:18 +01:00
Marcin Kacprzak
91b9675bed [GNA] Replace log::warning() with THROW_GNA_EXCEPTION for unsupported Concat (#16144) 2023-03-16 12:41:38 +00:00
Xiuchuan Zhai
9229b4967e [CPU] optimize shape infer of stridedslice (#16069)
* optimize shape infer of stridedslice

* Update src/plugins/intel_cpu/src/nodes/strided_slice.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* Update src/plugins/intel_cpu/src/nodes/strided_slice.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* Update src/plugins/intel_cpu/src/nodes/strided_slice.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* fix according to comments

* re-use get_sliced_value func

* re-use get_sliced_value func

---------

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
2023-03-16 12:27:14 +01:00
Karol Blaszczak
a72b9bac2f [DOCS] shift to rst - resources (#16256) 2023-03-16 12:10:27 +01:00
Jan Iwaszkiewicz
0372ca929a [PyOV] Constant/Tensor from scalars (#16270) 2023-03-16 11:15:19 +01:00
Tatiana Savina
c18f3824b0 DOCS shift to rst Custom operations (#16254)
* move to rst

* move to rst

* change intro

* fix_directive

* fix code snippets

* sphinx snippets fix

* change link

* align tab

* snippet path fix

* fix code snippet path

* fix code snippets

* fix hyperlink

* change format

* change intro

* fix list format
2023-03-16 10:55:39 +01:00
Wilson Seok
461cc2aee8 change activation position in reorder_data_bfyx_to_blocked_format kernel (#16307) 2023-03-16 17:48:23 +09:00
Ilya Churaev
790f74c01c Update building and plugin testing docs (#16333)
* Update building and plugin testing docs

* Fixed typo
2023-03-16 12:39:06 +04:00
Ilya Churaev
fbc420093d Fix Loaded from cache for new plugin API (#16301)
* Extend template plugin tests

* Fixed loaded_from_cache for new API

* Added const

* Added ov::loaded_from_cache as supported property of CompiledModel

* Remove loaded_from_cache from core

* Reverted logic for old plugins

* Fixed comments

* Fixed build
2023-03-16 12:29:56 +04:00
Xiping Yan
2194552dc5 [CPU] Fix crash issue: RuntimeError: Primitive descriptor was not found for… (#16186) 2023-03-16 10:17:06 +04:00
Andrei Gorbachev
2f3ae4518e [GPU] Fix warnings (#16196)
* fix 1

* fix 2-10

* fixed code style

* fixed win plugin

* fixed linux plugin

* fixed a part of tests

* fixed test for Linux

* fixed pooling_gpu_test for Linux

* fixed pooling_gpu_test for Linux

* fix after review and enable wd4267 in makefile

* fix after review

* errors of unit test are fixed
2023-03-16 09:29:16 +04:00
Xuejun Zhai
05866f05ea Update ONNX Runtime from rel-1.8.1 to rel-1.14.0 (#16184)
* Update ONNX Runtime from rel-1.8.1 to rel-1.14.0

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Upgrade Cmake to 3.24.0

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Revert "Upgrade Cmake to 3.24.0"

This reverts commit 04a00f60c0.

* Update CMake to version 3.24.0

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Skip CApiTest.test_custom_op_openvino_wrapper_library test for tmp, will add back with the new ONNX Runtime version

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-16 07:14:51 +04:00
Karol Blaszczak
0e1df68263 DOCS-image-fix-fix (#16324) 2023-03-15 19:41:38 +01:00
Irina Efode
072acc1ea7 Fix win run (#16309)
* Fix conformance on win

* fix summarizer

* try
2023-03-15 19:45:21 +04:00
Ilya Churaev
8189d18648 Revert ITensor leftovers (#16316) 2023-03-15 18:44:12 +04:00
Sebastian Golebiewski
0d5b5b187d [DOCS] Adding 'Scrollbox' - new sphinx directive (#15305) 2023-03-15 15:23:20 +01:00
yanlan song
6ff02f5e25 remove invalid cases (#16234)
Signed-off-by: fishbell <bell.song@intel.com>
2023-03-15 13:58:48 +00:00
Ivan Tikhonov
e1ee8f0ec8 TransposeSinking refactoring: part 2 (class names, folders, file names) (#16291)
* Add descriptions to the transformations, add additional checks

* fix a warning

* TransposeSinking Refactoring part2: move the transformations to a separate folder, align namespaces

* TransposeSinking refactoring: class names, namespaces

* codestyle

* resolve merge conflicts
2023-03-15 17:18:39 +04:00
Vladimir Paramuzov
28d3e1087e [GPU] Fix strided slice kernel with begin/end/stride as inputs (#16302) 2023-03-15 16:25:45 +04:00
Maciej Smyk
d59d8ba3a2 DOCS shift to rst - Additional Configurations (#16284) 2023-03-15 12:21:16 +01:00
Sebastian Golebiewski
523c587d29 DOCS shift to rst - Troubleshooting Reshape and NoDynamicShapes (#16304) 2023-03-15 11:58:57 +01:00
Maciej Smyk
bd1b00d654 DOCS shift to rst - OpenVINO™ Security Add-on (#16251) 2023-03-15 11:57:54 +01:00
Tomasz Jankowski
0f9583c3cf [Transformations] Nop-eliminate 1D Reshape node (#16083)
* Nop-eliminate 1D Reshape node

* Don't eliminate checking Reshape node
2023-03-15 14:43:46 +04:00
Tingqian Li
fdc2664b24 fix special FQ node (#14594)
* fix special FQ with zero range in quantized models

* fix format & comments

* Add test case

* remove dot interval test case from smoke_LPT/FakeQuantizeTransformation.CompareFunctions

* Remove dot interval gpu test case because Pooling is also folded

* handle review comment

* fix code style

* update docs

* remove fold_zero_multiply
2023-03-15 10:13:29 +00:00
Katarzyna Mitrus
f0c153858b [ShapeInference] EmbeddingBag-Offsets/Packed-Sum shape infer improvements (#16072)
* shape_infer

* Register EmbeddingBagPackedSum shape_infer for CPU

* Tests

* Merge shapes to preserve 2nd input info

* More label tests

* Add emb_table size check

* rename shape infer file

* Add more tests

* Update constexpr

* Use OV_EXPECT_THROW

* Style

* Reuse emb_table for dynamic rank

* Add common util to calculate emb output shape

* Update embd shape infer to use common util

* Update embedding shape infer util
2023-03-15 11:12:57 +01:00
Katarzyna Mitrus
69ba802e03 [ShapeInference] EmbeddingSegmentsSum shape infer improvements (#16119)
* Update shape_infer

* type prop tests

* Preserve interval and label from input

* Add more tests

* Add emb table scalar check

* Update to use OV_EXPECT_THROW

* Update constexpr

* Code refactor
2023-03-15 11:05:52 +01:00
Maciej Smyk
76f29f8532 DOCS shift to rst - Installing OpenVINO (#16311) 2023-03-15 10:57:03 +01:00
Pawel Raasz
bdf1923972 Review bucketize shape inference (#16136)
* Review bucketize shape inference:
- check interval dimension and label propagation
- check template shape_infer implementation
- minor refactoring and add tests

* Add missing using of namespaces
2023-03-15 10:19:57 +01:00
Karol Blaszczak
ab684036f4 DOCS-image-fix (#16308) 2023-03-15 10:16:55 +01:00
Sebastian Golebiewski
a3d53c0415 DOCS shift to rst - Heterogeneous execution (#16285) 2023-03-15 09:57:30 +01:00
Sebastian Golebiewski
85f80f2a03 DOCS shift to rst - Stateful Models and LowLatency articles (#16288)
Fixing directives for snippets and inline code blocks. Shifting to reST.
2023-03-15 09:56:37 +01:00
Karol Blaszczak
d774cc65a9 DOCS shift to rst - cpu n gna (#16252) 2023-03-15 09:39:09 +01:00
Georgy Krivoruchko
36c18e29a8 [TF FE] Added Tensorflow CTCLoss layer test (#13644)
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-03-15 08:18:29 +00:00
Maciej Smyk
4b7b3fb0ae DOCS shift to rst - Openvino Ecosystem article update (#16050)
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-03-15 08:51:49 +01:00
Daria Ilina
e4f44b19fd Mark all failed ONNX layer tests as skip (#16188)
* Mark all failed ONNX layer tests as XFail

* Add additional xfailed marks

* Add one more failed tests into XFail

* Add conditions for CPU/GPU failures

* Revert "Add conditions for CPU/GPU failures"

This reverts commit 790524c59c.

* Add failures separation for CPU/GPU

* Replace all xfail with skip
2023-03-15 12:22:32 +06:00
Vladimir Paramuzov
e44fd03d2a [GPU] Shape agnostic concat kernel + refactoring (#16170) 2023-03-15 09:47:31 +04:00
Ilya Churaev
4b7d1d9f50 Remove redundant clone from Serialize pass (#16277)
* Remove redundant clone from serialize pass

* Revert padding changes in serialize pass

* Provide a class for local copy of nodes with paddings

* Fixed comments
2023-03-15 07:23:54 +04:00
Eddy Kim
e348481849 [GPU] Transformed IR serialization for dynamic models (#16169)
* IR serialization for dynamic models

* added ShapeOf1To3 transformation pass

* fixed input output type mismatch

* removed unnecessary codes

* moved ConvertShapeOf1To3 from common to GPU plugin

* updated copyright year

* fixed build errors
2023-03-14 11:03:02 -07:00
Mateusz Tabaka
8477bc8897 Reduce the number of validate and infer types in ConvertPrecision (#15277)
* Reduce the number of validate and infer types in ConvertPrecision

Currently, ConvertPrecision pass frequently runs validate and infer types.
This is due to the fact that it iterates over every precision pair, then over
the whole model followed by validate and infer types.
The proposed solution is to iterate over the model: for each node iterate
over precisions array, update the node if required followed by validate and
infer types.

Ticket: 81311
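The iteration-order change described in this commit message can be illustrated with a minimal sketch (hypothetical names, not the actual OpenVINO implementation): the old scheme revalidates once per precision pair, while the new scheme walks the model once and revalidates only for nodes that actually change.

```python
def convert_old(nodes, precision_pairs, validate):
    # Old scheme: for each (src, dst) pair, walk the whole model, then revalidate.
    for src, dst in precision_pairs:
        for node in nodes:
            if node["type"] == src:
                node["type"] = dst
        validate()

def convert_new(nodes, precision_pairs, validate):
    # New scheme: walk the model once; apply the matching pair per node,
    # revalidating only when a node actually changed.
    pairs = dict(precision_pairs)
    for node in nodes:
        dst = pairs.get(node["type"])
        if dst is not None:
            node["type"] = dst
            validate()

calls = []
pairs = [("f32", "f16"), ("i64", "i32"), ("u8", "i8"), ("f64", "f32"), ("boolean", "u8")]

nodes = [{"type": "f16"}, {"type": "i64"}, {"type": "f16"}]
convert_old(nodes, pairs, lambda: calls.append("old"))

nodes = [{"type": "f16"}, {"type": "i64"}, {"type": "f16"}]
convert_new(nodes, pairs, lambda: calls.append("new"))

print(calls.count("old"), calls.count("new"))  # 5 1
```

With five precision pairs but only one node needing conversion, the old scheme validates five times and the new scheme once.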

* use map

* clang format

* move enum hasher

* fix gpu

* revalidate

* reinvalidate if node has changed

* remove validate for input prec changes

* fix gpu

* review

* find

* fix pytorch case

* revalidate

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-03-14 21:45:24 +04:00
Maciej Smyk
7578c636b9 DOCS shift to rst - OpenVINO Security (#16280) 2023-03-14 17:23:34 +01:00
Ilya Churaev
95faa573ed Introduce ITensor instead of Blob (#16048)
* Introduce ITensor

* Added new allocator

* Hide ITensor from dev api

* Changed some python tests

* Remove deprecated API from sample

* Fixed warnings

* Skiped unsupported tests

* Fixed exception message

* Fixed template func tests

* Fixed incorrect tests

* Fixed comments and move ITensor to developer API

* Fixed CI issue

* Fixed allocated tensor

* Fixed docs and windows warning

* Fixed set shape for strided tensors

* Fixed build and some comments

* Introduce remote tensor

* Fixed code style

* Fixed build

* Remove static assert method

* Remove fail type

* Added device name API

* Try to fix GPU remote tests

* Added debug output

* Try to fix GPU tests

* Fixed comments

* Fixed build

* Added additional element type check

* Revert some comment changes
2023-03-14 19:12:27 +04:00
Ivan Tikhonov
596036a2db Transpose Sinking: refactoring (#16283)
* Add descriptions to the transformations, add additional checks

* fix a warning
2023-03-14 13:55:25 +00:00
River Li
0ca3ccb7fb [CAPI][TEST] remove testdata repository dependency (#16259)
* [CAPI][TEST] remove testdata repository dependency

Change-Id: I93798d47bcf4abf69562f76aab1469498b4e9ee1

* SCI issue

Change-Id: Ifb422fafdefe26be85a0ae8efdc1530f7f032079

* Apply TEST SUITE for C API tests

Change-Id: Ic716cdb357060e1cbf989871ae39d006f2878632

* Fix issue for static build

Change-Id: I5d33f23b48dcda4baf260553aa8d34e62c13c128
2023-03-14 12:34:33 +00:00
Tomasz Jankowski
0145e538f5 TopK v11 reference implementation (#16137)
* Stabilize ascending comparison of ref impl

* Use reference to gtest param

* Create ref impl tests

* Fix descending by index sorting

* Sort by index both ways

* Make sort by index always ascending (revert)
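The tie-breaking behavior these bullets converge on can be sketched as follows (a hypothetical illustration, not the actual OpenVINO reference code): values sort in the requested order, but equal values always break ties by ascending index.

```python
def topk(values, k, largest=True):
    # Pair each value with its index, sort by value in the requested
    # direction, and break value ties by ascending index.
    indexed = list(enumerate(values))
    indexed.sort(key=lambda p: (-p[1] if largest else p[1], p[0]))
    top = indexed[:k]
    return [v for _, v in top], [i for i, _ in top]

vals, idxs = topk([3, 1, 3, 2], k=2, largest=True)
print(vals, idxs)  # [3, 3] [0, 2]
```

Note that the two equal maxima return indices 0 and 2 in ascending order regardless of sort direction.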
2023-03-14 12:13:53 +00:00
Sebastian Golebiewski
497b7885da DOCS shift to rst - Model Creation in OpenVINO RUNTIME (#16278) 2023-03-14 09:42:44 +01:00
Sebastian Golebiewski
a8be566e24 DOCS shift to rst - Configuring Devices (#16247) 2023-03-14 09:41:35 +01:00
hyunback kim
164db3def9 [GPU] Fix twin transformer functional regression. (#16111)
* [GPU] Fix twin transformer functional regression.

gemm/FC select_preferred_format select simple format depends on out rank size.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-14 17:34:41 +09:00
Sebastian Golebiewski
1268bfdca2 DOCS shift to rst - Shape Inference and Preprocessing (#16213) 2023-03-14 09:31:44 +01:00
Taylor Yeonbok Lee
3a96e06d4c Minor fixes (#16275)
- Fix are_data_types_suitable_for_onednn not to invalidate output layout
- Fix seg fault of printing resample node info
2023-03-14 08:09:54 +00:00
Fang Xu
ae37ca671c [CPU] Fix core dump issue for ENABLE_DEBUG_CAPS (#16229) 2023-03-14 10:32:57 +04:00
Roman Kazantsev
dcc8a36d88 [TF FE] Test Switch and Merge support by TF FE (#16255) 2023-03-14 00:50:47 +04:00
Roman Kazantsev
3b71286f1d [TF FE] Test low-level While support by TF FE (#16257) 2023-03-14 00:50:22 +04:00
Taylor Yeonbok Lee
f0f1c47063 Fix concat to use ngraph shape infer (#16226)
Fix crop to return shape of original rank
2023-03-13 20:25:23 +00:00
Anastasiia Pnevskaia
9462b3ea16 Fixed clearing of pipeline config params and TF session in convert_model() (#16191)
* Fixed pipeline config params clearing.

* Added clearing of TF session. Added tests.
2023-03-13 20:03:02 +04:00
Georgy Krivoruchko
ca6ad433e4 Updated reading file (#16203) 2023-03-13 19:49:57 +04:00
Zlobin Vladimir
fbc9516662 Update Open_model_zoo submodule (#16218) 2023-03-13 15:28:07 +00:00
Tomasz Adamowicz
5311ab0938 [GNA] Refactor memory alignment macro usage (#15749)
* Add possibility to use memory alignment different than 64B

* update tests for new memory api

* Remove ineffective code

* [FIX] Fix memory alignment issues for graph compiler primitives

* Update after review
2023-03-13 15:22:58 +00:00
Tingqian Li
625890c666 [CPU] Flush denormals to zero only when DAZ is off (#12315) 2023-03-13 16:04:12 +01:00
Ilya Lavrenov
f080a0d9cf Added NCC style for frontends sources (#16200)
* Ability to provide several source dirs for ncc-style checks

* Fixed include headers; added NCC to TF common

* Fixed NCC for frontends

* Fixed NCC for frontends

* Extra fixes

* Fixest push --f

* Clang-format

* Apply comments

* Add an option to specify required clang-format version

* Update src/frontends/tensorflow/src/decoder_proto.cpp

* Update src/frontends/tensorflow/src/decoder_proto.cpp
2023-03-13 14:54:00 +00:00
Sebastian Golebiewski
a84f87e9dc DOCS shift to rst - Preprocessing article (#16250) 2023-03-13 14:50:44 +01:00
Sebastian Golebiewski
bf8c7fe668 DOCS shift to rst - Preprocessing API - details article (#16241) 2023-03-13 14:50:16 +01:00
Pawel Raasz
72566cde0d Review pooling classes for shape inference aspects (#16114)
* Review adaptive avg pool shape inference

* Review adaptive max pool shape inference

* Review AvgPool and MaxPool

* Minor improvement for StaticShape

* Update ShapeInferBaseWithPadding's infer
to be compatible with interface after rebase

* Fix build issues

* Set default pads before checks

* Fix include openvino headers
2023-03-13 12:48:44 +00:00
Ilya Churaev
63338b6e08 Introduce openvino ExecutionNode (#16242) 2023-03-13 15:45:08 +04:00
Sebastian Golebiewski
e19ba8b3e2 DOCS shift to rst - Layout API Overview article (#16244)
* shift-to-rst
2023-03-13 12:13:49 +01:00
Roman Kazantsev
0ffa4eb507 [Core] Allow ScatterND inputs type to be dynamic (#16236)
* Allow ScatterND inputs type to be dynamic

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update src/core/src/op/util/scatter_nd_base.cpp

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>

* Update src/core/src/op/util/scatter_nd_base.cpp

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>

* Update src/core/src/op/util/scatter_nd_base.cpp

* Apply code-style

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
2023-03-13 10:36:00 +00:00
Irina Efode
df6cd3303a [CONFORMANCE] Add handling crashes (#16214)
* [CONFORMANCE] Add handling crashes

* Fix win paths
2023-03-13 14:23:26 +04:00
Ivan Tikhonov
8da01d4c2d TransposeSinking transformation: support new ops and fix performance issues (#15660)
* Resolve the performance issues in TransposeSinking transformation

* codestyle

* fix warning as error, fix tests failures

* fix ts for Concat and Reduce

* Fix TransposeReduceBackward

* fix the issue in TransposeFuse transformation

* fix TransposeReduce transformations

* Fix TransposeReduction, fix TransposeSinkingSplit, add unsqueeze support

* delete debug print

* Add additional validations

* fix node validation

* Fix validate for split, revert changes for concat, add BatchToSpace/SpaceToBatch

* Add SpaceToBatch/BatchToSpace

* fix TS for Interpolate + codestyle

* fix gna build

* Support TS for Interpolate, VariadicSplit, IsInf, IsNan, IsFinite + refactoring

* add the missed line

* add include

* TransposeSinking tests refactoring: part1

* TransposeSinking tests refactoring: part2

* Add limited support for StridedSlice op

* codestye

* TransposeReduction: skip the case when 2nd input for Squeeze is not provided

* Transpose sinking tests refactoring: part 3. + Revert changes in MOC.

* fix build

* codestyle

* Add tests for TS backward transformations, update TransposeSinkingFuse transformation, delete StridedSlice transformation prototype + tests refactoring

* fix unary tests

* Fix warning as error on Windows

* Add new tests for Unsqueeze/Squeeze; refactoring; remove debug code

* codestyle
2023-03-13 14:18:02 +04:00
Piotr Krzemiński
2eef025773 [PYTHON] Introduce Json Statistics Report aligned with C++ version (#15692)
* [PYTHON] Introduce Json Statistics Report aligned with C++ version

* [PYTHON] Update README with new json_stats flag

* [PYTHON] Fix missing StatisticsReportConfig compilation error

* [PYTHON] Fix README formatting

* [PYTHON] Fix indent, fix pcsort error thrown for timedelta/int type mismatch, fix some compilation errors

* [PYTHON] Apply Pythonization ideas & fix JSON report showing incorrect category results

* Update tools/benchmark_tool/openvino/tools/benchmark/utils/statistics_report.py

Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>

* [PYTHON] Align multiple-iterations behavior for reports

---------

Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
2023-03-13 13:58:40 +04:00
Sebastian Golebiewski
1e757de195 [DOCS] shift to rst - OpenVINO Inference Request 2023-03-13 10:04:54 +01:00
Ilya Churaev
087c3bd0af Fixed typo in core tests (#16235) 2023-03-13 07:15:54 +00:00
Paul Youngsoo Ahn
e8b108ac6b [GPU] Change lws to avoid synchronization issue in nonzero_count (#16116)
* [GPU] Change lws to avoid synchronization issue in nonzero_count (#16116)

* [GPU] Add unit test (#16116)

* [GPU] update count_nonzero_ref kernel(#16116)
- Support the case total data size exceed max work group size
- Add dynamic shape test case

* [GPU] Change input indexing calculation and add random input generator in unit test (#16116)

* [GPU] update random generation input function in nonzero_count (#16116)

* [GPU] update unit test (#16116)

* [GPU] cldnn unit test: update random generation function for other test failure (fusings_gpu/conv_fp32_multi_eltwise_quantization.basic/0) (#16116)
2023-03-12 23:32:20 -07:00
Zlobin Vladimir
0e91b07422 Prohibit FP16 FP32 values for cpp benchmark_app (#16217)
Ticket 100990

Python benchmark_app already prohibits FP16 and FP32.
2023-03-13 09:27:52 +04:00
Roman Kazantsev
32ac952e5f [TF FE] Convert a model with Framework nodes (#16053)
* [TF FE] Convert a model with Framework nodes

Now the conversion pipeline will convert all unsupported operations to Framework nodes
It is done in the hope that sub-graphs with Framework Nodes will be cut out at later
stages, such as auto-pruning.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
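The fallback described in this commit message can be sketched minimally (hypothetical names, not the actual TF FE API): an operation without a registered translator becomes a generic framework-node placeholder, so conversion can finish and later passes may prune the unsupported sub-graph.

```python
# Hypothetical translator table mapping framework op types to converted ops.
TRANSLATORS = {
    "Add": "opset::Add",
    "MatMul": "opset::MatMul",
}

def convert_op(op_type):
    # Known op: use its translator. Unknown op: wrap it as a framework-node
    # placeholder instead of aborting the whole model conversion.
    return TRANSLATORS.get(op_type, "FrameworkNode")

print(convert_op("Add"))           # opset::Add
print(convert_op("SomeCustomOp"))  # FrameworkNode
```

The key design point is that an unknown op no longer raises an error at translation time; the failure (if any) is deferred until the placeholder is actually needed.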

* Fix build issue

* Fix dynamic element type for FusedBatchNorm

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

* Fix build issue

* Continue translation in case translator limitation

* Change undefined to dynamic type

* Have one more change to dynamic type

* Change undefined to dynamic in Const translator

* Expect MO to handle dynamic type

* Exclude TransposeSinking pass if model contains Framework nodes

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-12 01:42:34 +00:00
Ilya Lavrenov
1874c072b2 switched public Azure Linux build to clang (#16198)
* switched public Azure Linux build to clang

* Fixed GNA compilation

* Suppressed warning in GNA tests

* switched public Azure Linux build to clang

* Fixed GNA compilation

* Suppressed warning in GNA tests

* More fixes

* Skip test on CPU
2023-03-11 11:01:27 +04:00
Sebastian Golebiewski
a1a35e9211 issue-15090 (#16207)
Add command for installation of prerequisites on Linux.
2023-03-11 00:04:59 +04:00
Sebastian Golebiewski
8446f38924 DOCS shift to rst - Inference Pipeline article (#16224) 2023-03-10 18:13:15 +01:00
Ilya Churaev
75314c2c53 Rename OPENVINO_UNREACHABLE to OPENVINO_THROW (#16201)
* Changed some exceptions to OPENVINO_THROW

* Changed samples throw exception

* Fixed some comments

* Remove OPENVINO_UNREACHABLE
2023-03-10 20:23:13 +04:00
Maciej Smyk
4e89150a7c DOCS shift to rst - OpenVINO™ integration with TensorFlow (#16221) 2023-03-10 16:31:52 +01:00
Sebastian Golebiewski
63041ca559 234 update (#16211)
Adding notebook 234-encodec-audio-compression
2023-03-10 16:29:26 +01:00
Sebastian Golebiewski
4d8a4d3957 DOCS shift to rst - Dynamic Shapes article (#16215) 2023-03-10 16:20:06 +01:00
Maciej Smyk
5e406a80d3 [DOCS] OpenVINO Wiki links update - master (#16219)
* wiki links
2023-03-10 16:16:14 +01:00
Irina Efode
da8d5ba056 Fix generation of report in CI (#16209) 2023-03-10 14:02:43 +01:00
Karol Blaszczak
fa2ffc3bb4 Update prerelease_information.md (#16206) 2023-03-10 12:51:45 +01:00
Roman Lyamin
b8e1dea345 [GPU] Fix binary_convolution non-constant weights (#15898)
* [GPU] Fix binary_convolution non-constant weights

* [GPU] Remove unused checks related to allowInputReordering
2023-03-10 14:36:12 +04:00
Xiping Yan
198e90944f JIRA 93714 change bfloat16 to trivial for ov::core (#15922)
* Update bfloat16 to trivial type.
Remove "pragma GCC diagnostic ignored "-Wclass-memaccess""

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* float16 also needs to be made trivial.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

---------

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-03-10 09:53:28 +00:00
Ilya Churaev
7a0beb5f1d Template plugin class doc (#16152)
* Update template plugin main documentation pages

* Update plugin documentation

* Add more documentation for method

* Register new doxygen groups

* Updated group

* Added ie group

* Fixed comments

* Reuse new implementation inside the old one

* Try to fix titles

* Fix class fields level
2023-03-10 13:42:26 +04:00
Sebastian Golebiewski
3c8bb1492e [DOCS] Align tabs in 'Install from PyPI' article - for master (#16085)
* align tabs

* Update installing-openvino-pip.md

* Update installing-openvino-pip.md

* Update installing-openvino-pip.md
2023-03-10 13:03:42 +04:00
Ilya Lavrenov
fa9677a6ee Removed visibility settings from samples (#16192) 2023-03-10 12:58:15 +04:00
Daria Mityagina
34bab897d6 [VPUX] - benchmark app issue fix - PERFORMANCE_HINT: UNDEFINED (#16065)
* [VPUX] - benchmark_app issue

* [VPUX] - benchmark_app issue - review
2023-03-10 12:46:51 +04:00
Karol Blaszczak
670668e593 DOCS shift to rst - devices and ARM (#16193) 2023-03-10 09:25:33 +01:00
Karol Blaszczak
0f19e9c0d2 shift to rst - GPU articles (#16175)
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
2023-03-10 08:34:27 +01:00
Ilya Churaev
45bdbf7486 Changed throw ov::Exception to macro (#16150)
* Changed throw ov::Exception to macro

* Fixed code style

* Revert myriad headers

* CPPlint fixes

* Fixed typo
2023-03-10 11:14:50 +04:00
Maxim Vafin
ec8a4abf6d Support more complicated cases of list concatenation (#16139)
* Support more complicated cases of list concatenation

* Fix codestyle

* Add tests
2023-03-10 07:51:10 +01:00
Chenhu Wang
a1510a5e5f fix warnings of overloaded virtual function (#16195) 2023-03-10 10:35:51 +04:00
Sofya Balandina
8e24483f5c Add missing ov_executable_network tests for template plugin (#16190) 2023-03-10 10:28:37 +04:00
Maciej Smyk
da9b014c83 [DOCS] Moving How to build documentation from wiki to md docs - master (#16063)
* conditional_compilation

* how-to-build-2

* Update local-distribution.md

* Update build.md

* Update build.md

* Update docs/dev/build.md

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update docs/dev/build.md

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update docs/dev/static_libaries.md

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update docs/dev/building_documentation.md

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Changes after review

* Update docs/dev/building_documentation.md

* Update docs/dev/static_libaries.md

* building articles update

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2023-03-10 10:24:56 +04:00
Ilya Lavrenov
6586742204 Disable sentencepiece extension in static build (#16194)
* Disable sentencepiece extension in static build to prevent double linking of protobuf

* Update .ci/azure/windows.yml
2023-03-10 09:35:36 +04:00
Mykhailo Hnap
d5e98cbdce [GPU] IsFinite, IsInf, IsNaN operations (#15979)
* [GPU] Enabled ComparisonLayerTest in single layer tests.

It seems that before, these tests were disabled because of some failures. Now I cannot see any errors, so I just enabled all of them.

* [GPU] Run clang format for comparison single layer tests.

* [GPU] Added handling of f16 type to IsInfLayerTest.

* [GPU] Added single-layer tests for IsFinite and IsNaN operations.

* [GPU] Added single-layer test for IsInf operation.

* [GPU] Implemented IsFinite, IsInf, and IsNaN operations as activation functions.

But notice that currently, the activation kernel supports only the same output data type as the input data type, so an additional reorder would be needed to convert to the correct output data type for these ops. Also worth noting is that activation functions are fused in the reorder kernel. But for now, this is not working for these ops, because the reorder activation call performs a hard conversion of input data to the output data type before activation. I don't know why it was added there, but it breaks fusion. So we need to either fix this activation fusion or disable it for these ops.

* Revert "[GPU] Implemented IsFinite, IsInf, and IsNaN operations as activation functions."

This reverts commit 3f9ffe617ecddce6dbbcdeab9584a7ddeb6d1845.

* [GPU] Implemented IsFinite, IsInf, and IsNaN operations as eltwise op.

* [GPU] Changed CLDNN_ERROR_MESSAGE to OPENVINO_ASSERT in check_inputs_count method.
2023-03-09 16:10:48 -08:00
Andrew Kwangwoong Park
3ec386a741 [GPU] Minor fixes for dynamic BERT models (#16158)
* [GPU] Minor fix for dynamic bert-base-uncased-qqp

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix to check full tensor only for static shape during creating onednn gemm

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-09 14:48:08 -08:00
Taylor Yeonbok Lee
dff7f2451b Revert PR15386's change (#16172)
- Previously, PR15386 changed the memory allocation of primitives used as shape-infer dependencies to host memory, for better shape inference performance.
- However, this causes a cache coherence issue on dGPU.
- Reverting this change so that the memory will be allocated on the device
2023-03-09 22:44:32 +00:00
Mateusz Mikolajczyk
31489931cf [PT FE] Fix failing translation of aten::index_put_ (#16140)
* Initial commit

* Fix for reading processed list

* Format code

* Cleanup

* cleanup

* Cleanup

* cleanup test

* Add comment

* Add rt_info

* fix type

* Update src/frontends/pytorch/src/transforms/aten_index_put_replacer.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-03-09 20:14:58 +00:00
Alina Kladieva
654f3d988f Revert "Update open_model_zoo submodule (#16181)" (#16189)
This reverts commit a9cc52b462.
2023-03-09 20:10:47 +04:00
Roman Kazantsev
326aedb5f8 [TF FE] Support EmptyTensorList and TensorListPushBack operations (#16183)
* [TF FE] Support EmptyTensorList and TensorListPushBack operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Rename a script to generate the test model

* Correct test model generating script

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-09 19:06:16 +04:00
Ryszard Jezierski
1051226fc9 Updated GNA lib version to 1906 (#16122) 2023-03-09 14:58:46 +00:00
Zlobin Vladimir
a9cc52b462 Update open_model_zoo submodule (#16181) 2023-03-09 17:06:19 +04:00
Pavel Esir
e43f606750 [LoadTime][MO] flush fp32 subnormals to zero at offline phase (#15929)
* flush fp32 subnormals to zero in IR

* style fix in test_offline_api.py

* simplified call of FlushFP32SubnormalsToZero: is called form offline_transformations.cpp

* reverted offline_transformations.py

* use fpclassify

* style-fix

* Update src/common/transformations/tests/common_optimizations/flush_fp32_subnormals_to_zero_test.cpp

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-03-09 16:21:28 +04:00
Maksim Proshin
f04507f56c Added "accuracy" value for "preset" parameter (#16016)
Added "accuracy" value for "preset" parameter
2023-03-09 14:59:29 +04:00
Karol Blaszczak
94b533b284 Update OV_2023_models_supported.pdf (#16179) 2023-03-09 11:57:41 +01:00
River Li
474ea8a8e2 Fix minor build error in ubuntu 22.04 (#16171)
Change-Id: I15180787f3196001d00137664e22d71aff4d0b32
2023-03-09 10:02:36 +00:00
Ilya Churaev
5040f59c96 Use new CPUStreamsExecutor inside the old one (#16164) 2023-03-09 13:41:38 +04:00
Sungeun Kim
0365ebf5ad disable test case: fusings_gpu/lrn_fp16_eltwise_activation.basic/7 (#16149) 2023-03-09 08:38:33 +00:00
Ilya Lavrenov
cc8295f27e Fixed PT FE compilation with clang on macOS (#16173) 2023-03-09 12:32:02 +04:00
Mateusz Bencer
c7e479ff78 [ONNX FE] Improved a method of operators registration (#15990)
* initial version of implementation

* styles applied

* fixed and registration

* add more unit tests

* fixed and in legacy opset

* review remarks

* refactor of version name range
2023-03-09 07:22:06 +01:00
Jade Cho
aaeace9740 [GPU] Fix stable diffusion failure (#16052)
* [dGPU] Enable stable diffusion

+ Prevent to fuse swish into oneDNN reorder.
+ Make concat explicit if batch size is greater than 1 and the siblings are oneDNN impls.
2023-03-09 14:35:31 +09:00
Andrew Kwangwoong Park
b7ff3a1d64 [GPU] Added shape agnostic Pad kernel implementation (#16160)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-08 15:36:43 -08:00
Ilya Lavrenov
3d52fc843a Fixed RelWithDebInfo build (#16159) 2023-03-08 20:00:30 +04:00
Edward Shogulin
75c73a38f8 [nGraph] VisualizeTree cclang quick fix (#16155) 2023-03-08 19:36:25 +04:00
Ilya Lavrenov
c3b22af3f7 CoreImpl small refactoring (#16145)
* Small CoreImpl refactoring

* Removed cache_dir handling from CPU plugin

* clang-format

* Fixed python tests

* Fix

* Fixed bugs in HETERO case

* Fixed clang-format and warnings in auto plugin

* Added import_export as capability for TEMPLATE plugin

* Commented throw exception from loaded_from_cache

* Fixed clang-format for template plugin
2023-03-08 19:19:52 +04:00
Roman Kazantsev
f3e7e55968 [TF FE] Support multioutput body graph nodes (#16142)
This is a corner case because body graph nodes have named output ports.
This allows supporting the custom RetinaNet model.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-03-08 17:29:42 +04:00
Karol Blaszczak
3dbea43ef1 [DOCS] remove mentions of myriad throughout docs (#15690)
* remove ov::device::thermal

ov::device::thermal was only supported on myriad

* additional cleanup

* remove myriad from AUTO and MULTI

auto n multi n hetero

+ remove mentions of listing myriad devices

* two final fixes

* Update ov_auto.py

---------

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-03-08 17:29:08 +04:00
Vladimir Paramuzov
75b48f2153 [GPU] Changed impls cache key type to avoid hash collisions (#16130) 2023-03-08 16:09:55 +04:00
Tomasz Dołbniak
e5ef0fee8e TopK base class cleanup (#16154) 2023-03-08 11:33:38 +00:00
Mateusz Bencer
50b76873e2 [ONNX] Fix external weights loading for the current dir path case (#16124) 2023-03-08 12:12:23 +01:00
Chenhu Wang
0786a963ab [CPU] Remove unused emitter parameters and API (#16117) 2023-03-08 14:15:44 +04:00
Ekaterina Aidova
6514b7600a [PT FE]: add translation for aten::cumsum (#16092)
* [PT FE]: add translation for aten::cumsum

* handle out and prim::dtype
2023-03-08 11:07:29 +01:00
Pawel Raasz
cd8999d43b Fix tile shape inference when repeats got dynamic shape (#15792)
* fix shape infer when repeats got dynamic shape

* Dynamic output shape when repeats dim is dynamic
2023-03-08 11:21:58 +04:00
Ilya Churaev
7c8dc76223 Rename template config and move transformations to code snippets (#16133)
* Rename template config and move transformations to code snippets

* Fixed documentation

* Rename template config
2023-03-08 07:12:05 +00:00
Xiping Yan
68b8d41c43 [CPU]JIRA 93714 fix CPU plugin warning after remove wd4309 wd4018 (#15961)
* Remove warning suppression: wd4018, wd4309

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Remove linux warning suppression no-sign-compare

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* ov::intel_cpu::VectorDims base value type is size_t;
dnnl::memory::dims base value type is int64_t;

All compare data up to int64_t can fix warning and there is potential issue.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* channelAxis may be == -1, meaning it no longer exists.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix recursive macro: "one_of", "everyone_is" sign-compare warning.
Must pass same value type.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix Windows sign unsign compare warning

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* There are 2 instances:

using ov::Dimension::value_type = int64_t
using ov::intel_cpu::StaticDimension::value_type = size_t

All up to int64.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Linux has too many sign-compare issues.
Complete the Windows sign-compare fixes first.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix clang issues.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning.
Because instantiate T1=unsigned int, T2=int

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning for tests unit reorder_node_test.cpp

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning : ASSERT_GE(step, 1u);

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix tests: warning C4018

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Remove auto, using int64_t is more reasonable.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

---------

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
2023-03-08 10:41:00 +04:00
hyunback kim
a9cbccd829 Broadcast for post ops, enable onednn gemm (#16074)
* [GPU] Add data broadcasting for OneDNN binary ops for Gemm primitive
* Based on https://github.com/openvinotoolkit/openvino/pull/15790; enable onednn gemm to support multiple users and non-constant input.

--------

Signed-off-by: hyunback <hyunback.kim@intel.com>
Co-authored-by: Sergey Shlyapnikov <sergey.shlyapnikov@intel.com>
2023-03-08 13:55:51 +09:00
Roman Lyamin
681faadce3 [GPU] Added shape agnostic kernels for GatherElements and Tile (#15798)
* [GPU] Added shape agnostic kernel for GatherElements

* [GPU] Added shape agnostic kernel for Tile
2023-03-08 08:34:24 +04:00
Mateusz Mikolajczyk
b907bfab3b [PT FE] Fix for prim::Constant tensor with int/float values (#16029)
* Fix constant

* Change

* Remove duplicated code

* Add tests
2023-03-07 20:54:18 +00:00
Sebastian Golebiewski
1bd1eca8d9 [DOCS] Integrate OpenVINO with your application article - add TF Lite (#16020)
* fix snippets

* add tflite

* update tab directives

* Update integrate_with_your_application.md
2023-03-07 23:01:56 +04:00
Maxim Vafin
feb448cc89 Fix MO IR Reader for Eye op (#15996)
* Fix MO IR Reader for Eye op

* Fix eye value infer

* Remove debug output

* Add test for eye value infer

* Fix bom tests

* Fix alphabetical order
2023-03-07 20:30:14 +04:00
Mateusz Mikolajczyk
6358974c1a [PT FE] Add prim::PythonOp (#15714)
* Add PythonOp

* Fix deprecation & cleanup

* Apply suggestions from code review

* Fix dtype

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update to new tensor names handling

* Fix negation

* Apply changes from code review

* Remove unnecesary imports

* Update src/frontends/pytorch/src/op/pythonop.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-03-07 17:20:13 +01:00
Szymon Irzabek
e79636bfbb [GNA] Add 3.6 and 4.0 targets (#15735) 2023-03-07 17:14:59 +01:00
Tatiana Savina
cf7dfff35f delete dlwb imgs (#16066) 2023-03-07 16:05:32 +01:00
Maxim Vafin
82584543ba [PT FE] Support set/get element type, shape and value in PyTorch FE InputModel (#16100)
* Support setting and getting element type, shape and value in PyTorch FE InputModel

* Fix code style

* Fix code style

* Fix rsub layer test

* Fix py style

* Apply review feedback

* Fix code style

* Fix initial values of input and output flags in Place
2023-03-07 15:45:29 +01:00
Liubov Talamanova
e6ad0a5154 [POT] Removed Multiply from ignored ops for transformers (#15600)
* [POT] Removed Multiply from ignored ops for transformers

* Add VPU to change_configurations_by_model_type
2023-03-07 14:37:29 +00:00
Ilya Lavrenov
15e43e0cc2 Removed explicit handling of cache_dir from GNA (#16134) 2023-03-07 13:42:56 +00:00
Vladimir Paramuzov
a1eb76ad06 [GPU] Move is_local_block_io_supported WA to kernel selector (#15235) 2023-03-07 15:12:08 +04:00
Zhang Yi
6fbaf4745a [CPU ]Fix bf16 marking logic & remove useless convert (#15404) 2023-03-07 10:02:14 +00:00
Ilya Churaev
4cea80915d Added CI documentation (#16129) 2023-03-07 12:55:36 +04:00
Nadezhda Ageeva
0dad7749b5 Fix RTInfo for ReduceL2Decomposition (#16107)
* Fix RTInfo for ReduceL2Decomposition

* Review comments
2023-03-07 10:24:55 +04:00
Min, Byungil
87b18a21c1 [GPU] Optimize eltwise kernel for blocked format (#15717)
* [GPU] Optimize eltwise kernel for blocked format

+ Optimize eltwise_blocked_opt
+ Replace deprecated kernels with eltwise_blocked_opt
+ Remove eltwise_b_fs_yx_fsv16, b_fs_yx_fsv4 kernels
+ Add test-cases in eltwise_gpu_test

Signed-off-by: byungilm <byungil.min@intel.com>
2023-03-07 14:21:09 +09:00
Vladimir Paramuzov
eff0bce7e3 [GPU] Move some op parameters from node to primitive class (#16070)
* [GPU] Move parameters of conv and quantize primitive from node to primitive

---------

Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2023-03-07 08:56:00 +04:00
Mateusz Bencer
e77adca01f Fix sign-compare warnings in ONNX FE (#16125)
* editor.cpp

* fix scan.cpp

* fix place.cpp

* fix tensor_external_data.cpp

* fix editor.cpp

* remove no-sign

* add sign-comare in os_flags.cmake

* fix place.cpp

* fix tensor_external_data.cpp

* remove sign-compare

* fix onnx_transformations.cpp

* fixed get_input_port + refactor

---------

Co-authored-by: haiqi <haiqi.pan@intel.com>
2023-03-07 08:15:56 +04:00
Wang, Yang
4d3dcfc5d4 Enable AUTO to support execution mode hint. (#15595)
* Enable AUTO to support execution mode hint.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add test case.

* Set default value "PERFORMANCE" for ov::hint::execution_mode.

* Update.

* Update.

* Correct default ov::hint::execution_mode value for the default value checking test case.

* Update.

* Delete obsolete config.hpp file.

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2023-03-07 11:58:14 +08:00
River Li
4d7bffa593 [UT][AUTO_BATCH]auto batch plugin unit test (#15211)
* Init auto_batch plugin unit test

* Add more mock test

* Add to ci yml file

* Fix clang issue

* Resolve compilation issue

* Fix symbol multiple definition in static build

* Add test cases for AutoBatchInferRequest

* Add test cases for AutoBatchAsyncInferRequest

* Fixed build error after PR-15229

* Resolve blocked issue when call StartAsync test cases

* add more test for auto batch async inference

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-03-07 11:55:26 +08:00
Wang Wangwang
83b57e2a64 AUTO cumulative throughput mode ignore candidate device that fail to … (#15420)
* AUTO cumulative throughput mode ignores candidate devices that fail to load

* Simplify the judgement logic of whether Auto set to Multi

* Add description about _AutoSetToMulti variable

* Update variable name to _AutoCallMulti

* Refine logic of AUTO execution_devices

* Add loading error message

* Add test case

* Add filter to execution_devices of MULTI

* Add execution_devices test in load-fail situation

* Simplify the logic of execution_devices

* Update auto_executable_network.cpp

* Update src/plugins/auto/multi_executable_network.cpp

Co-authored-by: yanlan song <bell.song@intel.com>

* Update src/plugins/auto/auto_executable_network.cpp

Co-authored-by: yanlan song <bell.song@intel.com>

* Update test case

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: yanlan song <bell.song@intel.com>
2023-03-07 11:52:25 +08:00
Chenhu Wang
6e7bef529f [CPU] Single place to fill tails of load emitter (#14846) 2023-03-06 21:31:39 +04:00
Alexandra Sidorova
5269cb37d8 [Snippets] Fixed potential constants getter for FQ (#15892) 2023-03-06 21:19:26 +04:00
Andrew Kwangwoong Park
7123e8879e [GPU] Added shape agnostic optimized SoftMax kernel (#15834)
* [GPU] Added shape agnostic optimized SoftMax kernel

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Update SoftmaxKernelBaseBF::Validate policy for shape agnostic kernel

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add softmax_gpu_bf shape agnostic TC for ov_gpu_unit_tests

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix failed TCs for ie-tests-linux-ubuntu20-gpu

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Update to use stack array instead of global buffer

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Remove global buffer usage completely

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add #undef directive

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-06 09:10:29 -08:00
Tatiana Savina
41fd836196 [DOCS] Fix Frontend Extensions snippets (#16120)
* move fe to rst

* fix code snippets

* add more line breaks

* fix tabsets

* fix link

* fix anchor

* test

* fixing link

* change tab directive

* fix tabs

* align code tabs

* fix link

* fix snippets
2023-03-06 17:43:49 +01:00
Tatiana Savina
3faf4fcb3e [DOCS] Add OTX page to Ecosystem (#16118)
* add otx page

* change ecosystem page

* add ote img

* move ote page to rst

* fix path

* add path

* img test

* otx page

* add docs to ecosystem page
2023-03-06 16:58:26 +01:00
Edward Shogulin
cf8dccaedb [nGraph] VisualizeTree displays tensors (#16093) 2023-03-06 14:06:54 +00:00
Tomasz Jankowski
b8348cda2e Add AbsSubMul PReLu fusion (#16086) 2023-03-06 12:56:22 +00:00
Andrew Kwangwoong Park
4ce35fd851 [GPU] Minor fixes for dynamic model (#16075)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-03-06 15:50:38 +04:00
Irina Efode
54d3641baa Fix report generation in CI (#16113) 2023-03-06 12:05:47 +01:00
Xiping Yan
e6a65f406d [CPU] Fix warning: sequence point strict aliasing (#15989)
* Remove sequence-point and strict-aliasing

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Replace reinterpret_cast with memcpy to fix warning of -Wstrict-aliasing

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning: -Wsequence-point

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix functional test warning: -Wstrict-aliasing

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Lost ft1 declare.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning: strict-aliasing.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Fix warning: strict-aliasing in ie_test_utils

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* fix float name error.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Update src/plugins/intel_cpu/src/nodes/unique.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* Update src/plugins/intel_cpu/src/nodes/unique.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* 1: Maybe uint16_t is more reasonable;
2: Replace int16_t with bfloat16_t;

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* fix build error.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

---------

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
2023-03-06 10:57:12 +01:00
Sebastian Golebiewski
20c0927ff9 [DOCS] Better statement about MO extensions as internal API [Recreating #14062] (#15679)
Recreating #14062
2023-03-06 10:27:42 +01:00
Maksim Kutakov
b9a48f12c8 [CPU] Prevent out of bounds read inside Graph::InferDynamic (#16067) 2023-03-06 12:52:37 +04:00
Xiping Yan
8b66b35bf7 [CPU]Remove C4250 warning suppress, and fix the corresponding warning. (#15966) 2023-03-06 12:43:53 +04:00
Tomasz Dołbniak
4486470e02 TopK v11 core operator (#15910) 2023-03-06 08:31:18 +01:00
Xuejun Zhai
9b97235902 Xuejun/remove api in ov any (#15667)
* [Remove APIs] remove ov::any api  &

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] remove ov::any api

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] remove interfaces in ov::any  Base* operator->() & const Base* operator->()

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] remove ov::any interfaces Base* get() & const Base* get()

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] remove ov::any interfaces call(const Any& any) & dynamic_pointer_cast(const ::ov::Any& any) & static_pointer_cast(const ::ov::Any& any)

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] fix code format issues in ov::any

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] fix review issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] clear code

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] fix review issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] fix compiler issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] fix compiler issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] fix compiler issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* Fix variant error

Signed-off-by: xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
Signed-off-by: xuejun <xuejun.zhai@intel.com>
2023-03-06 10:24:08 +04:00
Ilya Lavrenov
79d3ff352e Migration to latest versions of submodules (#16103) 2023-03-06 10:22:34 +04:00
Shen, Wanglei
e605a4c344 Create streams info table based on processor type table (#15571)
* enable streams info table based on CPU mapping

* add detail processor info for mix stream

* fix code style issue

* fix typo

* fix code style issue for Android build

* update description of streams info table

* move streams info related function to new file

* remove duplicated definition

* add description for parameters of get_streams_info_table()

* update test case file

* fix windows build issue

* fix windows build issue

* fix windows build issue

* fix typo

* update latency mode for hybrid platform

* update limit threads for latency

* update latency mode for 2 sockets platform
2023-03-06 14:06:41 +08:00
Piotr Krzemiński
0860db0dc3 [PT FE] Add aten::ArgSort & aten::Sort (#15769)
* [PT FE] Add aten::argsort implementation & tests

* [PT FE] Fix formatting

* [PT FE] Fix incorrect node type for Gather

* [PT FE] Fix Reshape missing argument

* [PT FE] Simplify syntax, fix int/int64 conversion error

* [PT FE] Fix argsort incorrectly sorting negative dimension, fix tests

* [PT FE] Revert modify test class

* [PT FE] Fix formatting of argsort

* [PT FE] Fix define macro style

* [PT FE] Add missing EOF

* [PT FE] Add stable==false check, add support for different constructor calls

* [PT FE] Add aten::sort implementation & tests

* [PT FE] Apply style changes, add XFail test for stable sorting

* Update sort.cpp

* Update sort.cpp

* [PT FE] Apply style changes from aten::sort t PR

* Update test_argsort.py

* [PT FE] Apply suggested modifications

* Update test_argsort.py

* [PT FE] Apply review suggestions, add tests and extract sort method to utils

* [PT FE] Use utils sort function to implement argsort

* [PT FE] Fix input size check 4->5

* [PT FE] Implement improved tests

* [PT FE] Implement improved tests

* [PT FE] Add xfail to not yet supported tests

* [PT FE] Merge 2 implementations of sort and argsort into a single file

* [PT FE] Remove redundant sort_elements from utils

* [PT FE] Add num_inputs_check

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-03-05 20:12:32 +01:00
Ilya Lavrenov
e1fbb7d768 Fixes for multi-config generators (#16097) 2023-03-05 10:46:53 +04:00
Ilya Lavrenov
9c4c559909 Fixed compilation on Debian 11 with gcc 12.2 (#16096) 2023-03-04 20:45:04 +04:00
mei, yang
32b177e9eb remove python dependency gast in paddle test (#16017) 2023-03-04 20:41:39 +04:00
Ekaterina Aidova
a1cde2e790 [PT FE]: fix padding value dtype (#16087) 2023-03-04 11:08:56 +00:00
Ekaterina Aidova
0edbc5ca60 [PT FE]: fix arange dtype if it is not provided (#16084) 2023-03-04 10:17:09 +01:00
Andrei Kochin
6a39d466a4 [MO] Update reminder message for 2023.0 (#16094) 2023-03-04 13:13:11 +04:00
Sebastian Golebiewski
de50251ceb notebooks update (#16090)
20230302220806
2023-03-03 18:23:22 +01:00
Anastasia Kuporosova
b130b73f80 [Docs][PyOV] Add docstrings for transformations + python examples for stateful model (#15978)
* [Docs][PyOV] Add docstrings for transformation + python examples for stateful

* add snippets + small improvements
2023-03-03 17:07:34 +01:00
Irina Efode
190b64a0af [CONFORMANCE] Add check the crash or non zero status in python runner (#16081)
* [CONFORMANCE] Add check the crash or non zero status in python runner

* Update run_parallel.py
2023-03-03 15:23:23 +04:00
Irina Efode
3b6bb06f1d Update merge script to work with outdated reports (#16078)
* Update merge script to work with outdated xml

* change report name
2023-03-03 11:02:04 +00:00
Chen Xu
35dacb370a [CPU] Fix issue about overwriting register names (#15815) 2023-03-03 10:50:33 +00:00
Irina Efode
4e8590bf9b Add correct handling of conformance processes (#16031)
* Update run_parallel.py

* Add correct handling of conformance processes

* remove extra

* Update run_parallel.py
2023-03-03 12:18:53 +04:00
Fang Xu
7deb9090bf [CPU] Enable conditional compilation for graph optimizer (#15478) 2023-03-03 11:55:39 +04:00
Mateusz Tabaka
84a42cde61 [CPU] Add support for dynamic shapes in RDFT (#14458) 2023-03-03 10:43:29 +04:00
Anastasiia Pnevskaia
9efdb38b96 convert_model() legacy extensions fix (#15742)
* Fixed legacy extensions passing to MO tool.

* Added tests.

* Corrected test.

* Add debug print.

* Moved tests to layer tests.

* Added comment.

* Moved legacy ext tests to separate file. Fixed tmp .pb file cleaning.

* Small correction.

* Run MO Python API tests directory in CI.

* Small fix.

* Fix for case of splitted output.

* Corrected imports.

* Corrected imports.

* Added run of legacy extensions tests from subprocess.
2023-03-03 09:59:30 +04:00
Maksim Kutakov
fc98454174 [CPU] Internal dynamism support for extensions (#15795) 2023-03-03 09:00:56 +04:00
yanlan song
33e0b8caeb rework the config and better follow new api (#15920)
* rework the config to follow new api

Signed-off-by: fishbell <bell.song@intel.com>

* fix tests

Signed-off-by: fishbell <bell.song@intel.com>

* clean up code

Signed-off-by: fishbell <bell.song@intel.com>

* fix missing property

Signed-off-by: fishbell <bell.song@intel.com>

* fix case failure caused by name compliance

Signed-off-by: fishbell <bell.song@intel.com>

* clean up code

Signed-off-by: fishbell <bell.song@intel.com>

* workaround negative values with unsigned type

Signed-off-by: fishbell <bell.song@intel.com>

* fix wrong exception type

Signed-off-by: fishbell <bell.song@intel.com>

* refactor config/metric

Signed-off-by: fishbell <bell.song@intel.com>

* apply review comments

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-03-03 10:04:28 +08:00
Steve Yoo
a16f1923d7 Added recalculating processing order if it is not correct (#15987) 2023-03-02 14:40:15 -08:00
Kelvin Choi
6979c06ca1 [GPU] Support non constant input for Pad (#15697)
* [GPU] Support non constant input for Pad

* Refactor by comments
2023-03-02 10:38:43 -08:00
Ilya Lavrenov
392e0fda34 Added missed licenses to openvino-dev (#16057) 2023-03-02 21:18:39 +04:00
Karol Blaszczak
c98591f8a8 Update prerelease_information.md (#16056) 2023-03-02 18:13:30 +01:00
Ilya Lavrenov
4d925e0a3d Test GPU plugin arm64 build via Android precommit (#16055) 2023-03-02 21:06:36 +04:00
Ilya Lavrenov
0a31bdc112 Fixed OpenMP + debian package code-path (#16058) 2023-03-02 21:06:15 +04:00
Ilya Lavrenov
df03f8bfce Removed fetch depth from Azure Pipelines (#16059) 2023-03-02 21:05:54 +04:00
Przemyslaw Wysocki
8cbd8b8e03 Remove setuptools upperbound (#16054) 2023-03-02 20:58:39 +04:00
Zhang Yi
0e9b133de5 [CPU] StridedSlice fix execution for empty output tensor (#16045)
* stridedslice skip execution with 0 dims

* use isExecutable & add subgraph tests

* remove useless code
2023-03-02 15:56:25 +00:00
hyunback kim
cb7eeadd62 [GPU] Integration oneDNN3.1 (#15804)
* [GPU] Integration oneDNN3.1
* [GPU] Add os_iyx_osv8 format

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-03-03 00:18:42 +09:00
bstankix
226bc301dc Graph builder modal (#16061)
* Bugfix and restyle Graph Builder

* Add disclaimer box to modal footer

* Change color gradient for precision graphs

* Add default preselections to graph settings
2023-03-02 15:34:05 +01:00
Vladislav Golubev
57cf23857a [CPU] Split: dynamic split_lengths values support (#15914) 2023-03-02 14:19:45 +00:00
Ilya Lavrenov
7be7f25566 Added TF / TF Lite in API reference docs (#16023) 2023-03-02 16:45:18 +04:00
Anastasiia Pnevskaia
6185114bc4 Clearing of CustomReplacementRegistry.registry in convert_model() (#15893)
* Clearing of CustomReplacementRegistry.registry.

* Added test.
2023-03-02 14:49:45 +04:00
Vitaliy Urusovskij
18763f66ac Detect loops in topological_sort (#15688) 2023-03-02 13:21:52 +04:00
Ilya Churaev
65a45a6232 Fix Coverity uninitialized variable (#16039) 2023-03-02 13:20:14 +04:00
Ilya Lavrenov
0d798b7431 Building GPU plugin for Linux ARM64 (#16008)
* Building GPU plugin for ARM64

* changed order of headers

* Fixed clang-format
2023-03-02 12:43:33 +04:00
Roman Lyamin
24b0baa0d1 [GPU] Added support mixed input formats for Select (#16009) 2023-03-02 09:19:02 +04:00
Vladimir Paramuzov
27ac7d9092 [GPU] backend independent code for fuse params in program_node (#16028) 2023-03-02 09:18:29 +04:00
Wang, Yang
99a1800901 Fix the AUTO loadnetwork failure triggered by AUTO:-xxx (#15747)
* Check if the device is supported when AUTO retrieves the device list based on the ov::device::priorities.

* Update the logic to handle the situation like -d AUTO:-CPU to benchmark APP.

* Remove MYRIAD and add NVIDIA for AUTO supported devices.

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2023-03-02 13:16:12 +08:00
Liubov Talamanova
c09b2ff8b1 [POT] Fix POT version in setup.py (#16021) 2023-03-01 22:30:26 +00:00
Ilya Lavrenov
5422242e86 Fixed compilation with gcc-7 (#15876)
* Fixed compilation with gcc-7

* Update src/core/reference/include/ngraph/runtime/reference/eye.hpp

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* returned f16 and bf16

---------

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
2023-03-01 23:08:10 +04:00
Ilya Lavrenov
3c67509fc8 Removed custom cmake message (#16030) 2023-03-01 19:25:37 +04:00
Ilya Lavrenov
2e12af27e4 Improved GA precommit to track docs regressions (#16027) 2023-03-01 18:08:29 +04:00
Haiqi Pan
8307019380 fix benchmark_app python to support YES and NO values for -pin parameter (#15963)
* support YES and NO for -pin

* add if property_name == 'AFFINITY'
2023-03-01 17:31:04 +04:00
Maksim Kutakov
6ae024f86e [CPU] Fix OMP build (#16006)
* Fix OMP build

* Apply comments
2023-03-01 17:07:48 +04:00
Anastasia Kuporosova
4c0d28f26d Try to update minimum setuptools req (#15971)
* Try to update setuptools req

* suppress output from setup
2023-03-01 17:05:51 +04:00
Ilya Lavrenov
ba19d945ac Fixed clang-format for C API (#16024) 2023-03-01 16:36:20 +04:00
Vladimir Paramuzov
c5c7e4ff65 [GPU] Cleanup tuning cache methods (#16000) 2023-03-01 16:30:47 +04:00
Ilya Lavrenov
bde65c25c4 Fixed typo in docs (#16022) 2023-03-01 15:36:37 +04:00
Vladislav Golubev
84285ac317 [CPU] ConvertMatMulToFC fix (#15933) 2023-03-01 14:15:19 +04:00
Vladimir Paramuzov
3de00347f3 [GPU] Code cleanup (#16014)
* [GPU] Improve exception message for program build

* [GPU] Code cleanup
2023-03-01 14:05:59 +04:00
Vladislav Golubev
f0e12cf38b [CPU] Select via Eltwise implementation (#15740) 2023-03-01 14:03:47 +04:00
Ilya Churaev
113aefa3ff Move internal api to ov (#15964)
* Move cpu streams executor to new API

* Remove legacy headers from new dev API

* Fixed build issues

* Fixed build

* Fixed typo

* Fixed typo

* Fixed build

* Fixed code style

* Add exception for template constructor of SoPtr
2023-03-01 14:00:55 +04:00
Sebastian Golebiewski
21ac61fef5 [DOCS] Tensorflow models support in 23.0 update (#15974)
* tensorflow support update
adding tensorflow to main snippet

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-03-01 10:16:09 +01:00
Maxim Vafin
b1d0e152e3 Use i32 across all PyTorch frontend (#15896)
* Use i32 across all PyTorch frontend

* Fix corner cases

* Fix tests
2023-03-01 09:50:34 +01:00
Mateusz Mikolajczyk
112c763256 [PT FE] Add prim::device and prim::type, improvements to aten::to, aten::eq, aten::ne (#15881)
* Add device, improve to and equal

* Rename and remove unused import

* Apply fixes from code review

* Fix decoder.py

* Load prim::device using prim::Constant

* Remove throwing exceptions

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-03-01 09:46:55 +01:00
Egor Duplenskii
051a1c661e [CPU][TESTS] Add options to manage test targets and scope (#14576)
- by using ENABLE_CPU_SUBSET_TESTS_PATH cmake variable
  one can specify list of relative paths to functional test
  which will be included into target ov_cpu_func_tests_subset

  Target was renamed from cpuDebugFuncTests to
  ov_cpu_func_tests_subset

- by using ENABLE_CPU_SPECIFIC_TARGET_PER_TEST=ON one can
  trigger generating specific target for each test file, i.e.
  - ov_cpu_func_slt_convolution
  - ov_cpu_func_subgraph_mha
2023-03-01 11:39:53 +04:00
Alexandra Sidorova
7f83c2c72d [Snippets] Added broadcast support for FQ weights calculation (#15837) 2023-03-01 11:14:58 +04:00
Katarzyna Mitrus
a7bb54da2d [ShapeInference] DeformablePSROIPooling shape infer (#15766)
* Add shape infer function

* Update shape_infer and usage

* Add setters

* Register shape_infer for CPU

* Tests

* Style

* Add cast for dim type

* Add precision

* Update input size check

* Move setters to cpp
2023-03-01 08:13:24 +01:00
Alexandra Sidorova
63d282fd73 [CPU] Added support of negative paddings for Pad (#15935) 2023-03-01 10:49:03 +04:00
Xuejun Zhai
51a3a02115 Xuejun/remove api result related (#15806)
* [Remove APIs] remove api Result(const Output<Node>& arg, bool), set_needs_default_layout(bool) & needs_default_layout()

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove const AutoBroadcastSpec NUMPY & NONE

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] clear code

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-03-01 07:43:48 +04:00
Ilya Churaev
e534efd4a8 Moved template backend to new API (#15878)
* Moved template backend to new API

* Fixed compilation

* Fixed some comments

* Fixed ov_core_unit_tests

* Fixed some tests

* Fixed ONNX Frontend tests

* Fixed transformation tests

* Fixed dynamic tests

* Fixed sporadic in CPU tests

* Added WA for plugin

* Fixed copy_to for scalar tensors
2023-03-01 07:12:33 +04:00
Maxim Vafin
87e714eb5c Add support for concatenation in Loop (#15899)
* Add support for concatenation in Loop

* Apply suggestions from code review

* Fix win build

* Fix issues with propagation shapes and types in Loop

* Fix einsum

* Set type and shape of count in frontend
2023-02-28 21:31:33 +01:00
Mateusz Tabaka
62ff31df8a ReverseInputChannelsFusion - no reverse input channels -> return (#15784)
* ReverseInputChannelsFusion - return early if there is no reverse input channels

Ticket: 98067

* run_passes

* fix unnecessary validate calls
2023-02-28 17:56:21 +00:00
Ilya Lavrenov
0988c2b813 Fixed compilation of docs snippets (#16004) 2023-02-28 20:53:31 +04:00
dependabot[bot]
07f287e362 Bump awalsh128/cache-apt-pkgs-action from 1.1.3 to 1.2.2 (#14999)
* Bump awalsh128/cache-apt-pkgs-action from 1.1.3 to 1.2.2

Bumps [awalsh128/cache-apt-pkgs-action](https://github.com/awalsh128/cache-apt-pkgs-action) from 1.1.3 to 1.2.2.
- [Release notes](https://github.com/awalsh128/cache-apt-pkgs-action/releases)
- [Commits](https://github.com/awalsh128/cache-apt-pkgs-action/compare/v1.1.3...v1.2.2)

---
updated-dependencies:
- dependency-name: awalsh128/cache-apt-pkgs-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .github/workflows/build_doc.yml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-02-28 19:54:33 +04:00
Karol Blaszczak
f9a8d9132d update NV12 (#15370)
* update NV12 docs and snippets

add single-plane input information
create single-plane cpp snippet
menu fix
update formatting for sphinx directives

Co-Authored-By: Ilya Churaev <ilyachur@gmail.com>
Co-Authored-By: Vladimir Paramuzov <vladimir.paramuzov@intel.com>

* additional snippet fixes

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
2023-02-28 17:58:08 +04:00
Zhang Yi
4dff2d1c60 [CPU] Enable dnnl cache (#15665)
* enable dnnl cache

* revise cmake comments
2023-02-28 13:23:37 +01:00
Vitaliy Urusovskij
5e48941f53 Apply Apivalidator to extra TBB libs (#15938) 2023-02-28 15:34:14 +04:00
Ilya Lavrenov
f7ccfd9b6e Install libtbb2 instead of libtbb12 on U22.04 (#15992) 2023-02-28 15:32:53 +04:00
Irina Efode
7aaf966039 [CONFORMANCE] Add relative weights for conformance (#15799)
* Add Weights by ops

* Upgrade conformance tools

* api_conformance

* Change prefix

* Reorg meta info

* Change base algo

* fix all other

* return summary

* Update the report

* wa

* review
2023-02-28 10:11:48 +01:00
dependabot[bot]
b748395f7d Bump paddlepaddle from 2.4.1 to 2.4.2 in /src/frontends/paddle/tests (#15809)
Bumps [paddlepaddle](https://github.com/paddlepaddle/paddle) from 2.4.1 to 2.4.2.
- [Release notes](https://github.com/paddlepaddle/paddle/releases)
- [Changelog](https://github.com/PaddlePaddle/Paddle/blob/develop/RELEASE.md)
- [Commits](https://github.com/paddlepaddle/paddle/compare/v2.4.1...v2.4.2)

---
updated-dependencies:
- dependency-name: paddlepaddle
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-02-28 12:14:31 +04:00
Roman Lyamin
1070a3b6c1 [GPU] Added fp16 support for GatherTree (#15983) 2023-02-28 09:54:56 +04:00
guozhong wang
913f616964 Guozhong/improve auto infer request line coverage (#15511)
* find test case for MultiDeviceInferRequest::SetBlob

* improve line coverage of infer_request

* add test cases for queryState and exception test case for perf count

* fix querystate running fail

* add test case to memory_states.cpp

* rename name of test case

* add memory_states.cpp to CMakeLists.txt

* Use _LogTag to judge whether MULTI

* clang-format intel_gna/memory_states.cpp

* Modify the position of the macro ENABLE_INTEL_CPU in the test case

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-02-28 12:51:37 +08:00
Anastasia Kuporosova
45dff75356 Check Node in Model creation (#15943)
* Check Node in Model creation

* apply fixes
2023-02-28 08:27:32 +04:00
Ilya Churaev
e5f2903c83 Changed template plugin namespace (#15962)
* Changed template plugin namespace

* Fixed documentation
2023-02-28 02:27:12 +04:00
Roman Kazantsev
68b7b8e69b [TF FE] Mark-up xfailed layer tests on GPU in nightly (#15981)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-27 21:14:21 +00:00
Wilson Seok
93a1be3607 Skip set_selected_impl() of post_optimize_weight when target generic layer is already created (#15852) 2023-02-27 11:24:53 -08:00
Eddy Kim
d2a5be0ab8 enabled exec_graph and pc in deserialized model (#15975) 2023-02-27 10:14:04 -08:00
Maxim Vafin
2ced2ad929 Add removing dangling results in MultiSubGraphOp transformation (#15862)
* Add removing dangling results in MultiSubGraphOp transformation

* Add recursive call for nested subgraphs

* Fix frontend build

* Add tests

* Add more tests and fix special port

* Add more tests, fix LSTM tests

* Preserve merged inputs

* Fix code style

* Fix paddle tests

* Fix special output
2023-02-27 14:31:53 +01:00
Xiping Yan
b30b283f0d [CPU] Fix all warnings or errors after removing "-Wno-class-memaccess" in cpu plugin CMakeLists.txt (#15780)
* Remove -Wno-class-memaccess

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* fix warnings for memset.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Change bfloat16_t implementation to trivial.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* memset warning can be fixed via changing bfloat16_t to TRIVIAL.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Revert "memset warning can be fixed via changing bfloat16_t to TRIVIAL."

This reverts commit 28a37af5c8.

---------

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
2023-02-27 10:11:55 +01:00
dependabot[bot]
a7443e13fa Bump actions/cache from 1 to 3 (#15965)
Bumps [actions/cache](https://github.com/actions/cache) from 1 to 3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v1...v3)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 10:46:37 +04:00
Ilya Lavrenov
713b37cb25 Detection of WoA ARM64 in all places (#15960) 2023-02-27 10:29:14 +04:00
Ilya Churaev
957ff6edd8 Fixed some leftovers after plugin API merge (#15932) 2023-02-27 05:35:38 +01:00
Xiuchuan Zhai
2d91c36d32 Support paddle slim (#14834)
* support paddle slim

* fix scale shape issue in dequantize_linear

* fix node implicit construct failed in yolov5 and yolov7

* correct the round mode

* improve the accuracy of slim

* support paddle slim

* fix scale shape issue in dequantize_linear

* correct the round mode

* refactor some tests

* fix according to comments

* support zero_point and fallback round_mode
2023-02-27 08:23:44 +08:00
Ilya Lavrenov
5526969eba Turn off apiValidator for ARM64 WoA hosts (#15958) 2023-02-26 22:51:37 +04:00
Roman Kazantsev
5317b909f7 [TF FE] Test Nested While in the pre-commit (#15955)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-26 11:28:51 +00:00
Maxim Vafin
4fac7eabc0 Use shared constant in pytorch decoder (#15917)
* Use shared constant in pytorch decoder

* Fix contiguous array

* Support scalars

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>

---------

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>
2023-02-26 11:39:27 +01:00
Andrew Kwangwoong Park
39e63ace67 [GPU] Minor fix for dynamic mobilebert (#15909)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-02-25 20:22:44 -08:00
Taylor Yeonbok Lee
fabf67ee5e [GPU] Enable crop for shape agnostic kernel (#15866)
* Enable crop shape agnostic kernel

* Added unit test

* Added new scalar argument for crop (eltwise) for being used as runtime input offset in shape agnostic kernel

* Fix eltwise to have runtime offset only for crop

* Fix unittest error

* Applied review comment
2023-02-25 15:49:46 -08:00
Ilya Lavrenov
15990afea2 Prevent infinite recursion (#15953) 2023-02-25 23:32:45 +04:00
Ilya Lavrenov
c0ef9a862e Fix for apiValidator when more than 1 target needs to be checked (#15950) 2023-02-25 16:33:08 +04:00
Artyom Anokhov
bd8c7506f0 requirements-dev: top-limited setuptools with 65.7.0 (#15946) 2023-02-25 14:54:50 +04:00
Taylor Yeonbok Lee
9822568194 Fix build error in clang++ (#15948) 2023-02-25 06:48:12 +04:00
Andrew Kwangwoong Park
46e8aad4bb [GPU] Fix output format not changing at runtime (#15887)
* [GPU] Fix output format not changing at runtime

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Add remove_redundant_reorders pass TC for ov_gpu_unit_tests

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-02-24 14:26:54 -08:00
Eddy Kim
30939f5021 updated to share constant data memories across multiple streams (#15915) 2023-02-24 14:26:10 -08:00
Pawel Raasz
f13b1e9681 Review space to and shuffle channels operators for shape inference aspects (#15711)
* Review label and interval shape propagation for:
- space to batch
- space to depth
- shuffle channels
- depth to space
- batch to space

* Review template implementation of shape_infer for:
- space to batch
- space to depth
- shuffle channels
- depth to space
- batch to space

* Apply clang-format

* Update src/core/shape_inference/include/batch_to_space_shape_inference.hpp

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>

* Update src/core/shape_inference/include/space_to_batch_shape_inference.hpp

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>

* Shuffle channels remove label from channel dim

---------

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
2023-02-24 22:17:47 +00:00
Ilya Lavrenov
6d7b94b8cd Improved API validator logic (#15942) 2023-02-25 01:11:50 +04:00
Jan Iwaszkiewicz
6c0e2686ad [PyOV] Fix passing of the key in data dispatcher (#15941) 2023-02-24 23:07:59 +04:00
Ilya Lavrenov
57cb7015f0 Fixed samples build on Debian 10 with cmake 3.13 (#15940) 2023-02-24 22:28:45 +04:00
Alexandra Sidorova
8dd9ade211 [Snippets] Added matcher_name in ConvertConstantsToScalars pass (#15883) 2023-02-24 22:20:23 +04:00
Karol Blaszczak
5e6398a2d8 [DOCS] prereleasenotes update (#15944) 2023-02-24 17:07:46 +01:00
Irina Efode
e21c71dd48 Fix error with incorrect symbol in ParallelRunner (#15934)
* uncomment run

* Fix error with incorrect symbol in ParallelRunner

* Remove extra
2023-02-24 17:53:24 +04:00
Ekaterina Aidova
5c6ef54127 [PT FE]: support aten::index (#15544)
* [PT FE]: support aten::index

* bool indexing testing

* more tests, fix nonzero case

* apply code review
2023-02-24 14:33:00 +01:00
Pawel Raasz
ba45c993ac Review scatter elements update class for shape inference aspects (#15891)
* Review interval shape and label propagation

* Review template implementation of shape_infer
- add tests for default ctor
- expand test for static shape

* Add upper, lower and label evaluate
2023-02-24 13:38:51 +01:00
Roman Kazantsev
ad4bd6f752 [TF FE] Simplify FakeQuantWithMinMaxVars translator and add layer test (#15927)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-24 12:13:46 +00:00
Sebastian Golebiewski
d60e3812ca [DOCS] Structure change for 'AUTO Device Selection' article - post merge fix for master (#15890)
Fixing the direcives for code snippets
2023-02-24 12:21:17 +01:00
Ilya Churaev
a730ef18eb Moved Task, Streams, CPUStreams Executors to new API (#15913)
* Moved Task, Streams, CPUStreams Executors to new API

* Fixed some build issues

* Fixed new build issues

* Try to fix tests

* Fixed inference unit tests

* Small build fix

* Added more system headers

* Try to fix naming style

* Fixed namespace

* Fixed android build
2023-02-24 15:20:32 +04:00
Mateusz Bencer
15935069ff Skip PullThroughReduce optimization for multi-output inputs (#15829)
* Skip PullThroughReduce optimization for multi-output inputs

* review remarks
2023-02-24 12:11:32 +01:00
Ruslan Nugmanov
091ba1f5ee Adds layer tests for binary and reduce tflite operations (#15791)
* adds binary and reduce layer tests

* adds binary with activations layer tests for tfl ops

* 1.moves helper functions and lists to utils file
2.adds axis as test parameter for reduce test
3.adds reluNto1 activation

* skips tanh and signbit activations

* Update tests/layer_tests/common/utils/tflite_utils.py

* Fused activations supported: RELU_N1_TO_1, SIGN_BIT

---------

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2023-02-24 14:46:29 +04:00
Nadezhda Ageeva
91d1600646 [TESTS] make axis of VariadicSplit int to support negative values (#15925) 2023-02-24 14:45:22 +04:00
Pawel Raasz
36992c9c46 Move shape_inference function to test utils (#15757) 2023-02-24 09:20:12 +01:00
Karol Blaszczak
310dd1d4c4 sphinx directives and less html (#15816)
changes to benchmarks page to align with theme
2023-02-24 09:03:19 +01:00
Sebastian Golebiewski
bd3a392d84 [DOCS] 'Quantization-aware Training' article update (#15617) 2023-02-24 08:57:04 +01:00
Sebastian Golebiewski
28cfb988e7 [DOCS] 'Quantizing with accuracy control' article update (#15642) 2023-02-24 08:56:23 +01:00
Sebastian Golebiewski
7a465e2422 [DOCS] 'Basic Quantization Flow' article update (#15641) 2023-02-24 08:55:51 +01:00
Sebastian Golebiewski
5ae859b4f8 [DOCS] 'Filter Pruning of Convolutional Models' article update (#15616)
Prep for conversion to rst
2023-02-24 08:55:28 +01:00
Paul Youngsoo Ahn
c1c8d6320e [GPU] Apply multi-threads for async compilation context (#15683)
* [GPU] Apply multi-threads for async compilation context (#15683)
- Use CPUStreamExecutor in compilation context
- Use single compilation context, impl_cache and kernels_cache for multiple streams
- Move compilation context to cldnn::program
- Move impl_cache to cldnn::program
- Create thread-safe impl_cache
- Create thread independent compilation function in kernels_cache
- Use kernels_cache in program and remove it from network

* [GPU] Fix segfault issue: ocl_engine and ocl_device are released during remained compilation context task are running (#15683)
- compilation context has own CPUStreamExecutor

* [GPU] Follow-up codereview (#15683)
- LruCacheThreadSafe inherit LruCache
- FuncRemoveItem has std::pair<Key,Value> as input
- Change prepare_tools to init_program

* [GPU] Create primitive_impl::build_kernels (#15683)

* [GPU] Fix unit test build error (#15683)

* [GPU] Remove redundant code (#15683)
- Remove try catch for debug
- Call compilation_context.cancel() in destructor of network

* [GPU] combine two atomic counter in kernels_cache (#15683)

* [GPU] Follow-up code review (#15683)

* [GPU] Fix nullptr exception in unit test (#15683)

* [GPU] Follow-up code review (#15683)
- Modify mutex lock in compilation context

* [GPU] Fix windows build issue (#15683)
2023-02-23 23:08:50 -08:00
River Li
2d960fc6c5 Solve test case failure issue for 32bits (#15857)
* Solve test case failure issue for 32bits
  1. ov_core_unit_test
  2. ov_cpu_unit_test

Change-Id: I5e6afda0865fedc1de7fe84dd5f132e642263303

* Solve windows build issue

Change-Id: I1e6ea4d930c41322a73a701d566f0cdee2a4e098

* Disable several 64bit test cases in case of 32bit system

Change-Id: Ib8ef784953bf15cb42048dd905f17a85e52482b1

* Update a simple solution

Change-Id: Ie2e2cd369fe98bfcd26f3416bf36d4dfb0f24c25

* update for 64bits failure

Change-Id: I6571b7842a0fecc01fff169a21fa7aae9eb9da14

* Use OPENVINO_ARCH_64_BIT replace custom macro

Change-Id: I7e72b74aed8f0226513bc0e06ce2381322b42f71
2023-02-24 10:20:31 +04:00
hyunback kim
be5f90199d [GPU] Add oneDNN FC preferred_format to bfyx (#15704)
Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-02-24 15:19:54 +09:00
Eddy Kim
f562e96305 [GPU] Fallback to kernel caching in the case of dynamic models (#15842)
* use kernel caching for dynamic models

* replaced cl_cache with blob

* updated to serialize dims info of input and output

* updated to skip unicode tests in Windows
2023-02-23 22:05:16 -08:00
Dohyun Kim (Felix)
a4f0b340d0 [GPU] Resolve unit test not run as onednn (#15217) 2023-02-24 10:07:56 +09:00
Dohyun Kim (Felix)
f00fb325a6 [GPU][DG2] Disable remained failing tests (#15873) 2023-02-24 10:07:01 +09:00
Roman Kazantsev
8a56234445 [TF FE] Refactor identity operations and layer test (#15904)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-23 21:22:31 +01:00
Ilya Lavrenov
bd61d317f8 Revert "Install patchelf for aarch64 linux (#15865)" (#15919)
This reverts commit e34d4e4664.
2023-02-23 23:02:00 +04:00
Evgeny Kotov
b8de9beeac [GNA] fix Coverity warning (#15823)
* fix

* fix

* fix FindFirstConsumer
2023-02-23 16:00:44 +00:00
Tomasz Jankowski
e8d1be6e0f [Transformations] Enable missing runtime info check (#15796)
* Add rt info propagation to StridesOptimization

* Enable rt info check for pruning tests
2023-02-23 19:14:13 +04:00
Karol Blaszczak
6359926815 [DOCS]-new-prerelease-info-page (#15768) 2023-02-23 14:52:25 +01:00
Tomasz Dołbniak
c72d148ec7 TopK v11 - specification (#15583) 2023-02-23 14:51:48 +01:00
Haiqi Pan
3cc888555a fix ov::shutdown align definition (#15901) 2023-02-23 17:31:34 +04:00
Sebastian Golebiewski
92181b0a4b port 15720 (#15911)
Porting: https://github.com/openvinotoolkit/openvino/pull/15720/
Fixing path.
Ticket: 100955
2023-02-23 17:29:47 +04:00
Jan Iwaszkiewicz
3373c6743f [PyOV] Shared memory improvements for Tensor and Constant classes (#15814) 2023-02-23 13:46:41 +01:00
Oleg Pipikin
d9fc5bac80 Add GCC version to linux installation manual (#15858)
* Add GCC version to linux installation manual

* Fix comment
2023-02-23 13:40:01 +04:00
Anastasiia Pnevskaia
1e24c51abb Turn on onnx fallthrough in convert_model() (#15651)
* Turn on ONNX_FALLTHROUGH in torch.onnx.export().

* Removed wrong change.

* Added test.
2023-02-23 13:22:30 +04:00
Leonard Sikorski
bc663878eb [PT FE] Add torchvision::roi_align operator with layer test (#15821) 2023-02-23 09:26:17 +01:00
Ekaterina Aidova
288a750bc6 [PT FE]: support aten::einsum (#15844) 2023-02-23 11:39:28 +04:00
Maxim Vafin
a9efe5bd8d [PT FE] Extend upsample support (#15826)
* [PT FE] Extend upsample support

* Update tests/layer_tests/pytorch_tests/test_upsample.py

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>

---------

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>
2023-02-23 11:34:29 +04:00
Roman Kazantsev
900332c46e [TF FE] Support conversion of models with non-standard extensions in the path (#15875)
* [TF FE] Support conversion of models with non-standard extensions in the path

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update tools/mo/unit_tests/moc_tf_fe/conversion_basic_models.py

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-23 11:29:14 +04:00
Ilya Lavrenov
87bcbc1747 Supported OpenSUSE 15.3 (#15897) 2023-02-23 11:25:33 +04:00
Dohyun Kim (Felix)
1028c7b5d5 [GPU] Fix weight reorder bug (#15672) 2023-02-23 14:48:46 +09:00
Jade Cho
c749163f72 [GPU] Update unit tests for swap XY (#15833) 2023-02-23 14:38:10 +09:00
Dohyun Kim (Felix)
1f196bacd3 [GPU][DG2] Fix some testcases (#15774)
* C++ exception with description write lock_type thrown in the test body. 
   Use get_output_values_to_float()
   * fusings_gpu/gemm_2in_act_scale_quantize_eltwise_i8.basic/2
   * fusings_gpu/gemm_2in_act_scale_eltwise.basic/2
* Remove WA test code of [GPU][DG2] Fix fusings_gpu/gemm_2in_scale.basic/7 #15353
   * Now non full-tensor post-ops are broadcasted
2023-02-23 14:23:40 +09:00
Dohyun Kim (Felix)
ed65583957 [GPU] Fix OV_GPU_DumpGraphs option (#15800) 2023-02-23 14:10:21 +09:00
Haiqi Pan
6bd7def8c4 fix sign-compare warnings in template backend (#15880)
* fix detection_output.hpp

* fix roi_align.hpp

* fix detection_output.hpp

* convolution_backprop_data.hpp

* convert_color_nv12.hpp

* fix detection_output.hpp

* fix detection_output.hpp

* remove no-sign

* fix code style

* fix roi_align.hpp

* fix roi_align.hpp

* fix detection_output.hpp

* fix C4267
2023-02-23 08:35:28 +04:00
Haiqi Pan
6f85ee7968 fix shutdown (#15882) 2023-02-22 22:04:19 +04:00
Anastasia Kuporosova
592ad68455 [PyOV] Fix deprecation warning in tests (#15830)
* [PyOV] Fix deprecation warning in tests

* Update src/bindings/python/tests/test_graph/test_manager.py
2023-02-22 18:08:49 +01:00
Ilya Churaev
893f96a7da Extend tensor API (#15811)
* Added some new tensor API

* Added tests on constructors

* Small changes

* Fixed tensor tests

* Fixed tests

* Added parametrized tests

* Extend tests and delete copy_to from remote tensor
2023-02-22 17:43:15 +01:00
Ilya Churaev
27ea9eab32 Moved executor manager to new API (#15871)
* Moved executor manager to new API

* Fixed template plugin build

* Remove setTBBFlag API

* Fixed export

* Added new files

* Revert "Added new files"

This reverts commit 981c3c863f.

* Fixed template plugin tests

* Remove redundant wrapper

* Remove wrappers for executor manager

* Fixed build
2023-02-22 17:19:35 +01:00
Ilya Lavrenov
98392a043b Fixed issues in setupvars.sh (#15884)
* Fixed issues with setupvar.sh

* Fixes setupvars realpath error

---------

Co-authored-by: Otoka, Tomasz <tomasz.otoka@intel.com>
2023-02-22 19:36:23 +04:00
Mateusz Tabaka
eaf368a5f5 PushConstantToSubgraph - use vector<bool> for remove_inputs_mask (#15827)
When MultiSubGraphOp has more than 32 inputs, int is not enough
to track inputs to be removed.

Ticket: 103248
2023-02-22 15:31:34 +01:00
Sofya Balandina
5251133202 [apiConformance] Refactor io_tensor.cpp (#15680) 2023-02-22 17:42:07 +04:00
Ilya Churaev
877018bab6 Moved Cache manager to new API (#15872)
* Moved Cache manager to new API

* Moved cache guard to ov namespace

* Added new files
2023-02-22 10:51:33 +00:00
Ilya Churaev
548f972e19 Added ov::IVariableState (#15843)
* Added ov::IVariableState

* Added variable state

* Try to fix Windows

* Fixed export
2023-02-22 14:30:46 +04:00
Marcin Kacprzak
c8643a9a30 [GNA] incorrect diag insertion (#14858)
* [GNA] Create ngraph implementation for relu_torch_pot model for further tests. Create legacy pass fusing FC-Eltwise-Const layers pattern into single FC layer with biases

* [GNA] Fix review comments, applied proper code style to changed code
2023-02-22 10:22:55 +00:00
Artur Kulikowski
f41c75b965 [NormalizeL2] normalization of reduction axes (#15841)
* Add test for negative axes, preliminary solution to solve uncorrect
results

* Normalize axes in operation NormalizeL2

* Add test for negative axes

* Add EOF
2023-02-22 13:10:06 +04:00
Wang, Yang
7f3ea9a59c Conversion fail for ov::hint::performance_mode with UNDEFINED value (#15629)
* Update ov::hint::performance_hint UNDEFINED value from empty string to "UNDEFINED".

* Update benchmark Python version.

* Update.

* Update.

* Update.

* Update the description about hint setting within benchmark APP README and help message.
2023-02-22 13:01:18 +04:00
Ilya Lavrenov
e34d4e4664 Install patchelf for aarch64 linux (#15865) 2023-02-22 12:55:27 +04:00
Taylor Yeonbok Lee
4fd38844a2 [GPU] Fix remote blob creation to use original shape (#15864)
* Fix remote blob creation to use original shape

* Revert "Fix remote blob creation to use original shape"

This reverts commit 35c674aa97.

* Fix cldnn tensor adjusted blob to be reinterpreted with actual input layout
2023-02-21 22:22:51 -08:00
Eddy Kim
a6ff809ad7 [GPU] Model caching unit tests (#15413)
* gpu model caching unit tests

* added serialization unit tests

* added save and load for quantize primitive_inst

* reduced the range of inputs for Gemm tests

* updated the copyright year
2023-02-22 05:53:43 +00:00
Luwei Zhou
d464f38788 Enhance dump_check.py tool for bf16 blob support. (#15002)
@luweizhou2016 please make CI green to merge it
2023-02-22 05:05:08 +00:00
Ilya Lavrenov
95c7c39b91 Supported rpmlint versions less 2.0 (#15856) 2023-02-22 01:35:28 +04:00
Oleg Pipikin
3644c26402 Remove WAs for myriad (#15854) 2023-02-21 18:05:33 +01:00
Roman Kazantsev
4746c04840 [Common][FE] Implement reverse infer for Transpose (#15824)
* [Common][FE] Implement reverse infer for Transpose

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp

* Update src/common/transformations/tests/common_optimizations/reverse_shape_and_type_infer.cpp

* Update src/common/transformations/src/transformations/common_optimizations/reverse_shape_and_type_infer.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Add one more tests with constant order and known output

* Fix reverse infer for a case of know order and output shape

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-02-21 13:42:39 +00:00
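Reverse shape inference for Transpose, as implemented above, propagates a known output shape back through a constant permutation: since output[i] == input[order[i]], the input shape is the output shape scattered through the order. A hedged sketch of the idea (not the actual transformation code):

```python
def transpose_reverse_infer(output_shape, order):
    """Recover the input shape of a Transpose from its output shape
    and a constant permutation `order`."""
    input_shape = [None] * len(order)
    for i, axis in enumerate(order):
        input_shape[axis] = output_shape[i]  # output dim i came from input dim order[i]
    return input_shape

# NCHW -> NHWC transpose (order 0,2,3,1); known output, recovered input:
print(transpose_reverse_infer([1, 224, 224, 3], [0, 2, 3, 1]))
```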
Tatiana Savina
f730edb084 [DOCS] Remove DL Workbench (#15733)
* remove dl wb docs

* text correction

* change ecosystem description

* replace link
2023-02-21 13:16:02 +00:00
Roman Kazantsev
0ddca519d6 [TF FE] Fix auto-pruning for QueueDequeue operation (#15838)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-21 13:27:29 +01:00
Pavel Esir
15973fd2da enable --compress_to_fp16=True by default in MO (#15488)
* enable --compress_to_fp16 by default in MO

* corrected docs, added warning if user didn't specify --compress_to_fp16 explicitly

* fix failing MO unit-tests

* do not wipe out data_type if user defined it explicitly by cli argument

* updated warning message and docs

* corrected phrasing

* corrected phrasing in FP16_Compression.md

* set compress_to_fp16=False for convert tests

* leftover: set compress_to_fp16=False for convert tests

* minor correction

* print info message in main.py, some minor changes

* typos fix

* fix losing information about whether arguments were set by the user or taken from defaults

* returned back default values instead of None

* more selective correcting of test_mo_convert_pytorch.py; added test for cases when compression is enabled/disabled or left by default

* fix test_mo_convert_pytorch.py
2023-02-21 13:07:43 +01:00
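With --compress_to_fp16 now on by default, one practical consequence is that constants must fit the half-precision range (max finite value 65504). A small stdlib check illustrating that boundary (an illustration of the fp16 format, not MO's compression logic):

```python
import struct

def fits_fp16(value: float) -> bool:
    """True if a float can be stored as IEEE-754 half precision
    without overflowing to infinity."""
    try:
        packed = struct.pack('<e', value)  # 'e' = binary16
    except OverflowError:
        return False
    return abs(struct.unpack('<e', packed)[0]) != float('inf')

assert fits_fp16(65504.0)   # largest finite fp16 value
assert not fits_fp16(1e6)   # overflows half precision
```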
Katarzyna Mitrus
94b64fed79 [ShapeInference] DetectionOutput shape inference review (#15645)
* Add setter for attributes

* Add more type_prop tests

* Extend shape infer tests for default ctor

* Remove redundant set_output_size and update shapes init

---------

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
2023-02-21 14:42:47 +04:00
cecilia peng
86b50044cd [CPU] optimize TensorIterator DynamicBuffer (#15163)
* optimize TensorIterator DynamicBuffer by preallocating a large chunk of intermediate buffer.

code clean.

review update: always copy in transfer as it is not worthy.

review update: update mem_holder_buffer as dnnl::memory instead of shared_ptr of it.

review update: reuse mem_buffer_holder even if the shape changes.

review update: growth factor.

review update: bug fix.

* fix code style

* review update: rewrite the dynamic buffer using the cpu Memory class, instead of dnnl::memory

* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* Update src/plugins/intel_cpu/src/nodes/tensoriterator.cpp

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>

* review update: minor fix

---------

Co-authored-by: Maksim Kutakov <maxim.kutakov@gmail.com>
2023-02-21 09:40:00 +01:00
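The DynamicBuffer optimization above preallocates one intermediate chunk and grows it by a factor, so repeated appends across TensorIterator iterations reallocate O(log n) times instead of once per iteration. A minimal sketch of that amortized-growth pattern (illustrative Python, not the CPU plugin's actual DynamicBuffer):

```python
class GrowableBuffer:
    """One backing chunk, grown geometrically on demand."""
    GROWTH_FACTOR = 2.0

    def __init__(self):
        self._data = bytearray()
        self._size = 0  # bytes currently in use

    def append(self, chunk: bytes) -> None:
        needed = self._size + len(chunk)
        if needed > len(self._data):
            # grow capacity geometrically; copy only the used prefix
            new_cap = max(needed, int(len(self._data) * self.GROWTH_FACTOR) or 64)
            grown = bytearray(new_cap)
            grown[: self._size] = self._data[: self._size]
            self._data = grown
        self._data[self._size:needed] = chunk
        self._size = needed

    def bytes(self) -> bytes:
        return bytes(self._data[: self._size])
```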
Xiuchuan Zhai
8153664b52 [CPU] optimize the shape infer for transpose (#15695)
* optimize the shape infer for transpose

* optimize coding style as comment

* optimize coding style as comment

* fix a bug introduced by PR-15614
2023-02-21 09:19:37 +01:00
Maxim Kurin
f347968a1d Don't use gold linker if mold linker is provided (#15746) 2023-02-21 10:57:41 +04:00
Konstantin Beluchenko
7f3f576151 [GPU] Permute 5d optimization (#14170) 2023-02-21 14:39:53 +09:00
Wang Wangwang
9e3b3e0566 Docs: Update the doc on execution devices property and enable_startup_fallback property (#14750)
* Docs: Update the doc on default hint and execution devices property (#14836)

* Docs: Update to LATENCY as default hint
* Docs: Update the doc on execution devices property
* Update auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

---------

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2023-02-21 11:24:19 +08:00
Ilya Churaev
a5ec5f5476 Move template infer request (#15696)
* Move Template Infer Requests to new API

* Removed const_pointer_cast from plugin

* Fixed tests

* Fixed async tests

* Fixed some comments

* Added print ov::Tensor

* Fixed ONNX Frontend tests with multiple outputs to the same tensor

* Revert "Added print ov::Tensor"

This reverts commit b752f506bb.

* Fixed ov_core tests

* Fixed some tests

* Fixed batched tensors tests

* Fixed some tests

* Fixed more tests

* Fixed template plugin tests

* Fixed LP tests

* Fixed some comments

* Fixed some documentation issues

* Fixed comments

* Increase timeout because build terminated in case of common changes
2023-02-21 07:03:07 +04:00
Maxim Vafin
ce3ac296ae [PT FE] Fix aten::len for empty lists (#15820)
* [PT FE] Fix aten::len for empty lists

* Fix code style
2023-02-20 21:34:04 +00:00
Haiqi Pan
1b147c3560 remove no-sign (#15817) 2023-02-20 20:34:23 +04:00
Maxim Vafin
cbd56c3ed9 [PT FE] Enable reverse infer (#15802)
* Enable reverse infer in PT FE

* Inherit channels from weight of convolution

* Except 1

* Add tests

* Add shape propagation for concat
2023-02-20 16:07:28 +01:00
Leonard Sikorski
5d3cd81fd1 Add aten::narrow operator with layer test (#15788) 2023-02-20 15:47:25 +01:00
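aten::narrow, added above, slices `length` elements starting at `start` along one dimension. Its semantics on nested lists can be sketched as (hypothetical helper, not the frontend code):

```python
def narrow(tensor, dim, start, length):
    """Slice `length` elements from `start` along dimension `dim`."""
    if dim == 0:
        return tensor[start:start + length]
    return [narrow(row, dim - 1, start, length) for row in tensor]

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert narrow(m, 0, 1, 2) == [[4, 5, 6], [7, 8, 9]]   # rows 1..2
assert narrow(m, 1, 0, 2) == [[1, 2], [4, 5], [7, 8]]  # first 2 columns
```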
Anastasiia Pnevskaia
c8c4503672 Error messages correcting in MO extractor (#15783)
* Error messages correcting.

* Error messages correcting.

* Small corrections.
2023-02-20 15:20:54 +01:00
Roman Kazantsev
bc8d0ec71e [TF FE] Refactor ReverseSequence and add layer test (#15807)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-20 12:26:19 +00:00
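The ReverseSequence op being refactored above reverses, per batch item, only the first seq_lengths[i] elements along the sequence axis, leaving the tail untouched. A plain-Python sketch of those semantics (batch axis 0, sequence axis 1; illustrative, not the translator code):

```python
def reverse_sequence(batch, seq_lengths):
    """Reverse the first seq_lengths[i] elements of each batch row."""
    out = []
    for row, n in zip(batch, seq_lengths):
        out.append(list(reversed(row[:n])) + list(row[n:]))
    return out

data = [[1, 2, 3, 4], [5, 6, 7, 8]]
assert reverse_sequence(data, [2, 4]) == [[2, 1, 3, 4], [8, 7, 6, 5]]
```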
Wang, Yang
1a070b225e [AUTO] Failed to get the ov::model_name when loading network to target device is not ready (#15810)
* Return model name after loading network is ready.

* Update.
2023-02-20 12:16:18 +00:00
Roman Kazantsev
b75a3b3465 [TF FE] Implement layer test for GatherNd translator (#15813)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-20 12:58:52 +01:00
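GatherND, whose layer test is added above, treats each innermost index vector as coordinates selecting a slice (or element) of the data tensor. Sketched on nested lists (hedged illustration, not the translator itself):

```python
def gather_nd(data, indices):
    """Each index vector in `indices` drills into `data` coordinate
    by coordinate; a partial index returns a whole slice."""
    def take(slices, idx):
        for i in idx:
            slices = slices[i]
        return slices
    return [take(data, idx) for idx in indices]

data = [[1, 2], [3, 4]]
assert gather_nd(data, [[0, 0], [1, 1]]) == [1, 4]  # two scalar picks
assert gather_nd(data, [[1]]) == [[3, 4]]           # partial index -> row slice
```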
Tomasz Dołbniak
310c4deab9 Allow dynamic data type in NMS (#15684) 2023-02-20 09:01:43 +01:00
Ilya Lavrenov
758ebe5242 Removed cross-check-tool (#15778) 2023-02-20 11:09:08 +04:00
Roman Kazantsev
699a1d1708 [TF FE] Refactor Gather operations and add layer tests (#15808)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-20 11:02:42 +04:00
Xuejun Zhai
d9f0890a84 Xuejun/remove apis in ov node (#15803)
* [Remove APIs] remove get_default_value() in ov::Node

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* [Remove APIs] remove set_op_annotations & get_op_annotations() in ov::Node

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-02-20 10:56:31 +04:00
Pawel Raasz
69728cb4ef Use new evaluate in template plugin (#15753)
* Use new evaluate method in template plugin

* Add tensor at the end of each iteration

* Remove class TemporaryOverrideOutputs

* Set shape of tensor after evaluate

* Revert "Remove class TemporaryOverrideOutputs"

This reverts commit e345ba9188.

* Update tensors when evaluate passed

* Copy data Tensor when HostTensor was initialized

* Set shape to output tensor in TemporaryOverrideOutputs

* Fix code style

* Add test

* Remove unused code

* Create reshape with scalar when shape is empty

* Reshape, special_zero = true

* Revert "Create reshape with scalar when shape is empty"

This reverts commit 0f901f419a.

* Use Shape with size zero and value max_int for dynamic tensors

* Restore Shape{0} for dynamic tensors

* Revert "Restore Shape{0} for dynamic tensors"

This reverts commit cb2d0e58eb.

* Temporary remove the test

* Use shape{0} for dynamic tensors

* Revert "Use shape{0} for dynamic tensors"

This reverts commit 08460a486b.

* Use Shape{0} for dynamic tensors

* Use new evaluate in template plugin
- Add tensor conversion between ov::Tensor <-> HostTensor
- Add shape utils to create special case shape to be dynamic shape
- Utils are in dev API to remove duplicates

* Move WA for set shape into the ov::tensor.

* Remove dynamic shape from or_tensor helper

* Mark tensor conversion utils as deprecated
- move shape util as core internal only
- update transpose test to not use deprecated functions

* Add missing deprecate suppression macro

---------

Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
2023-02-20 10:50:42 +04:00
Yury Gaydaychuk
7cffe848d6 [Debug tool] Commit slider (#14571) 2023-02-20 10:46:43 +04:00
Xuejun Zhai
749ff8c93f [Remove APIs] remove api set_data_shape (#15805)
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-02-20 00:20:30 +04:00
Dohyun Kim (Felix)
b7bcef6864 [GPU] Improve OV_GPU_DumpLayers debug configuration (#15719)
Co-authored-by: Kim,SungEun <sungeun.kim@intel.com>
2023-02-19 14:57:19 +00:00
Ilya Lavrenov
1d5839fb92 Fixed compilation with clang (#15801) 2023-02-19 16:22:18 +04:00
River Li
9cc8bc882b [CC]Add CC support for ir reader (#15659)
* Add CC support for ir reader

Change-Id: I3e1c02222800be090a4307bff8c231ad28b23ff7

* Fix clang issue

Change-Id: Idaf7bc5632bd558cfb7b0ecd8891435e5ba5c6ca
2023-02-18 15:43:19 +04:00
Oleg Pipikin
e9f060cce4 Remove WA for myriad plugin (#15786) 2023-02-18 12:42:44 +04:00
Evgenya Stepyreva
5f8f5b5eee Fix TFL warnings (#15793) 2023-02-18 10:23:57 +04:00
Ilya Lavrenov
ed5fa69b41 Fixed compilation on CI (#15787) 2023-02-17 22:28:48 +04:00
Evgeny Kotov
04f300e187 Gather Sinking for Unary operations (#15289)
* initial

* fix year

* CanGatherPropagateForward review fix

* HasSameOutputGatherNodes fix code review

* fix namespaces code review

* fix ReverseGatherIndexes code review

* clang fixes

* clang fixes

* remove unneeded function

* move utils to utils dir + change namespace

* clang fixes

* windows build fixes

* restore attr file

* restore attr file

* code review fix
2023-02-17 18:36:22 +01:00
Roman Lyamin
efb51b058c [GPU] Added operator== for cldnn primitives (#15736) 2023-02-17 19:09:12 +04:00
Ilya Churaev
59542d5cd3 Added header with itt macro (#15775) 2023-02-17 18:11:55 +04:00
Tomasz Jankowski
ed49d51ee1 [Transformations] Add Broadcast v3 nop-elimination (#15765)
* Nop-eliminate Broadcast v3

* Combine multi ops wrapper
2023-02-17 14:12:04 +01:00
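The Broadcast v3 nop-elimination above removes a Broadcast node when it provably changes nothing, i.e. the static target shape equals the input shape. The elimination condition can be sketched as (hypothetical check, not the transformation code):

```python
def is_nop_broadcast(input_shape, target_shape):
    """A Broadcast is a no-op and can be dropped from the graph
    when the target shape is statically equal to the input shape."""
    return (len(input_shape) == len(target_shape)
            and all(a == b for a, b in zip(input_shape, target_shape)))

assert is_nop_broadcast([2, 3, 4], [2, 3, 4])
assert not is_nop_broadcast([2, 1, 4], [2, 3, 4])  # real broadcast, keep it
```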
Xuejun Zhai
91df0a8aa9 [API remove] remove variantImpl & variantwrapper related class/interfaces (#15580)
* [API remove] remove variantImpl & variantwrapper related class/interfaces

Signed-off-by: xuejun <xuejun.zhai@intel.com>

* [Remove APIs] fix code format issue

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove api] fix python compiler issue caused by deprecated variant

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Remove APIs] fix code format issue

Signed-off-by: xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: xuejun <xuejun.zhai@intel.com>
Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
2023-02-17 16:31:26 +04:00
Pavel Esir
0f459c0455 [Core] fix compress_float_constants.cpp for denormal float16 (#15575)
* fix compress_float_constants.cpp for denormal float16

* keep denormals also in fp16; added unit-test

* fixed comment

* removed debug code; added some comments in unit-tests

* fix warning as error

* fix sign
2023-02-17 16:23:31 +04:00
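The denormal-fp16 fix above is about values below the smallest normal half-precision number (2^-14): they must survive compression as fp16 denormals rather than being flushed to zero. A stdlib demonstration of that range (an illustration of the fp16 format, not the compress_float_constants code):

```python
import struct

def to_fp16(value: float) -> float:
    """Round-trip a float through IEEE-754 half precision."""
    return struct.unpack('<e', struct.pack('<e', value))[0]

FP16_MIN_NORMAL = 2.0 ** -14    # ~6.1e-5: smallest normal fp16
FP16_MIN_DENORMAL = 2.0 ** -24  # ~6.0e-8: smallest denormal fp16

x = to_fp16(1e-7)
assert 0.0 < x < FP16_MIN_NORMAL  # kept as a denormal, not flushed to zero
```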
Roman Kazantsev
7d13bc6861 [TF FE] Remove NormalizeL2 translator and refactor layer test (#15760)
It turned out that NormalizeL2 is absent from the tf.raw_ops API
and is always represented in decomposed form.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-17 14:37:42 +04:00
Evgenya Stepyreva
eeed1f252a TFLite vol.2 (#15698)
* Adds base class and first test for tflite_layer tests

* adds layer tests for unary ops

* adds functionality to get tensors from ops

* 1. adds functionality to use custom funcs for input generation
2. removed UNIQUE op from testing ops

* adds functionality to use custom dtypes

* Cast operation support

* Enhanced tfl layer tests

* Cast operation support

* Transpose Sinking: fix dummy case

* Supported 3 more ops: L2_NORMALIZATION, ARG_MAX, ARG_MIN

* Support scalar shapes

* Supported 1 more op: TRANSPOSE_CONV

* Supported 2 more ops: COMPLEX_ABS, RFFT2D (in combination)

* (DE)QUANTIZE as Identity. Questionable

* Trigger tfl layer tests in .ci

* Apply suggestions from code review

* empty constant support

* Commit as-is. Debug prints inside

* Not ready yet

* Style

* Comments resolved

* Style

* Dynamic shape support

* Style

---------

Co-authored-by: rnugmano <ruslan.nugmanov@intel.com>
Co-authored-by: missjane <estepyreva@gmail.com>
2023-02-17 10:30:16 +00:00
Roman Kazantsev
bd0dfbcd7a [TF FE] Refactor OneHot translator and add layer test (#15763)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-17 13:54:40 +04:00
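The OneHot op refactored above expands each index into a vector of length `depth` with `on_value` at the index position and `off_value` elsewhere. For a 1-D index tensor the semantics can be sketched as (hedged illustration, not the translator code):

```python
def one_hot(indices, depth, on_value=1, off_value=0):
    """Expand each index into a one-hot row of length `depth`."""
    return [[on_value if j == i else off_value for j in range(depth)]
            for i in indices]

assert one_hot([0, 2], 3) == [[1, 0, 0], [0, 0, 1]]
```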
Edward Shogulin
50090ed03a [Snippets] Linker issue fix in tests (#15346) 2023-02-17 13:13:04 +04:00
Ekaterina Aidova
225f9b3801 [PT FE]: fix aten::embedding realization for integer-like indices an… (#15721)
* [PT FE]: fix aten::embedding realization for integer-like indices and add tests

* more comments

* Update src/frontends/pytorch/src/op/embedding.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-02-17 08:09:46 +00:00
Vladislav Golubev
f03a3321fc GraphComparator: stricter requirements for sinks mapping (#15633) 2023-02-17 11:33:20 +04:00
Chen Xu
ff75fdadf3 [CPU] Register printing utilities (#15423) 2023-02-17 11:23:09 +04:00
Shen, Wanglei
c0fa45a1b9 remove invalid KEY_EXCLUSIVE_ASYNC_REQUESTS (#15776) 2023-02-17 11:02:20 +04:00
Wang, Yang
a088d1ab7d Disable core::set_property() to support ov::device::properties setting (#15175)
* Disable set_property() to support ov::device::properties setting.

* Update benchmark APP to set device properties through compile_model() instead of through set_property().

* Update.

* Update.

* Update some test cases including ov::device::properties setting via core.set_property().

* Since core.set_property() does not support ov::device::properties setting, remove the test case that checks compile_model() works when ov::device::properties is set via core.set_property() first.

* Update CompileModel in test name to CompiledModel

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

* Add corresponding test case.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

* Update.

* Remove the changes of this commit as this modification has nothing to do
with this PR.

This reverts commit 4f04b9f085.

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-02-17 10:15:38 +04:00
Ilya Lavrenov
672522492e Changed sample's format_reader libraries type from dynamic to static (#15756) 2023-02-17 09:30:19 +04:00
guozhong wang
f48a67acc9 Improve test coverage for auto_executable_network.cpp (#14693)
* add test case for device_bind_buffer

* Correct path to header file properties.hpp

* rename remote blob testcase with multi

* add test case for remote blob and device bind buffer

* add logs for debug

* disable test case RemoteBlobInitializedWithoutGPU

* add property for remote blob test case

* remove debug logs for bind_multi_schedule.cpp

* fix MultiDeviceMultipleGPU_Test fail

* add test case for oversubscription of infer requests

* get optimal number to create inferRequests

* using macro ENABLE_INTEL_CPU to make sure tests need CPU

* fix the issue that canCreateRemoteTensorThenInferWithAffinity test case fails to run

* remove ov::hint::PerformanceMode::UNDEFINED from MultiDeviceMultipleGPU_Test
2023-02-17 11:45:28 +08:00
Roman Kazantsev
bce8b7a04c [TF FE] Support auto-pruning for Iterator, IteratorGetNext, Queue and Lookup (#15731)
* [TF FE] Support auto-pruning for Iterator, IteratorGetNext, Queue and Lookup operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build

* Fix build issue

* Fix build issue

* Fix build issue: use TF NodeContext

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add hash_table translator

* Simplify code in translate session a bit and remove isolated Parameter nodes

* Update src/frontends/tensorflow/src/translate_session.cpp

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-02-17 07:34:09 +04:00
guozhong wang
12a706452c improve auto_executable_network line coverage (#14871)
* add test case for get perf_hint from GetMetric

* Increase Mock GetMetric test sleep time

* add mock test case for getMetric

* add new test case OVAutoExecutableNetworkTest

* convert ov::Any to ov::hint::Priority

* resolve conflict of get_metric.hpp

* add macro ENABLE_INTEL_CPU for gpu test case and fix cases not getting instantiated for cpu test

* fix the issue of running Mock GetMetric test cases fail

* add perf_hint test cases to properties_tests.cpp

* Modify the logic of judging whether it is a single device in ctput mode
2023-02-17 11:30:17 +08:00
Wang, Yang
d89615ff9e [AUTO] Enable AUTO compiledModel::get_property supporting its properties only. (#15003)
* Enable AUTO compiledModel::get_property supporting its properties only.

* Update.

* Update.

* Update some releated test cases.

* Update.

* Update related test case.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-02-17 11:27:51 +08:00
5444 changed files with 329722 additions and 120148 deletions

View File

@@ -31,14 +31,6 @@ pr:
- 'tools/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
variables:
- group: github
@@ -56,7 +48,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
ANDROID_TOOLS: $(WORK_DIR)/android_tools
@@ -66,7 +57,7 @@ jobs:
SHARE_DIR: /mount/cinfsshare/onnxtestdata
CCACHE_DIR: $(SHARE_DIR)/ccache/master/android_arm64
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -76,7 +67,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -118,16 +109,9 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
@@ -149,18 +133,14 @@ jobs:
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DCMAKE_TOOLCHAIN_FILE=$(ANDROID_TOOLS)/ndk-bundle/build/cmake/android.toolchain.cmake
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_TESTS=ON
-DBUILD_java_api=ON
-DBUILD_nvidia_plugin=OFF
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache


@@ -1 +1 @@
rel-1.8.1
rel-1.14.0


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -71,7 +71,7 @@ jobs:
maxParallel: '2'
# About 150% of total time
timeoutInMinutes: '120'
timeoutInMinutes: '180'
pool:
name: LIN_VMSS_VENV_F16S_U20_WU2
@@ -100,17 +100,17 @@ jobs:
BUILD_PYTHON: $(WORK_DIR)/build_python
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
addToPath: true
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -151,13 +151,11 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
@@ -165,7 +163,7 @@ jobs:
set -e
sudo -E $(REPO_DIR)/install_build_dependencies.sh
# Move jdk into contrib
# 'clang' compiler is to check that samples can be built using it
# 'clang' compiler is used as a default compiler
sudo apt --assume-yes install openjdk-11-jdk libbz2-dev clang
# For Python API
python3 -m pip install --upgrade pip
@@ -174,7 +172,8 @@ jobs:
# For running Python API tests
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running Paddle frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# For running ONNX frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
@@ -219,7 +218,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
clean: 'true'
fetchDepth: '1'
lfs: 'true'
path: testdata
@@ -239,10 +237,15 @@ jobs:
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose"
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER=clang++
-DCMAKE_C_COMPILER=clang
-DENABLE_SYSTEM_SNAPPY=ON
-DENABLE_SYSTEM_TBB=ON
-DCPACK_GENERATOR=$(CMAKE_CPACK_GENERATOR)
-DBUILD_nvidia_plugin=OFF
-S $(REPO_DIR)
@@ -290,7 +293,10 @@ jobs:
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Layer Tests'
- script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
- script: |
set -e
python3 -m pip install $(INSTALL_DIR)/tools/openvino-*
python3 -m pip install $(INSTALL_DIR)/tools/openvino_dev-*
displayName: 'Install python wheels'
- script: |
@@ -303,7 +309,7 @@ jobs:
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
@@ -313,7 +319,7 @@ jobs:
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
# For python imports to import pybind_mock_frontend
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
export PYTHONPATH=$(INSTALL_TEST_DIR):$(INSTALL_DIR)/python/python3.8:$PYTHONPATH
python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
@@ -323,7 +329,7 @@ jobs:
displayName: 'Python API 2.0 Tests'
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
@@ -361,10 +367,10 @@ jobs:
displayName: 'List install files'
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR) -b $(BUILD_DIR)/cpp_samples
displayName: 'Build cpp samples'
displayName: 'Build cpp samples - gcc'
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -b $(BUILD_DIR)/cpp_samples_clang
env:
env:
CC: clang
CXX: clang++
displayName: 'Build cpp samples - clang'
@@ -389,17 +395,16 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_conditional_compilation_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ConditionalCompilation.xml
displayName: 'Conditional Compilation Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-PaddleTests.xml
displayName: 'Paddle Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_ir_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-IRFrontend.xml
displayName: 'IR Frontend Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ONNXFrontend.xml
displayName: 'ONNX Frontend Tests'
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
displayName: 'Paddle Frontend UT'
enabled: 'false'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Tensorflow.xml
displayName: 'TensorFlow Frontend Unit Tests'
@@ -430,10 +435,14 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_gna_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_gna_unit_tests.xml
displayName: 'GNA UT'
enabled: 'false' # TODO: fix
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_batch_unit_tests.xml
displayName: 'AutoBatch UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_template_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
@@ -443,16 +452,10 @@ jobs:
- script: |
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'IE CAPITests'
- script: |
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'OV CAPITests'
- task: CMake@1
@@ -527,22 +530,9 @@ jobs:
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
export TEST_DEVICE=CPU
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_complex_params.py --ir_version=11 --junitxml=./TEST-test_mo_convert_complex_params.xmlTEST
displayName: 'MO Python API Tests - Complex Python params'
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/ --junitxml=./TEST-test_mo_convert.xmlTEST
displayName: 'MO Python API Tests'
- script: |
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
export TEST_DEVICE=CPU
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_tf.py --ir_version=11 --junitxml=./TEST-test_mo_convert_tf.xmlTEST
displayName: 'MO Python API Tests - Import TF model from memory'
- script: |
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
export TEST_DEVICE=CPU
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_pytorch.py --ir_version=11 --junitxml=./TEST-test_mo_convert_pytorch.xmlTEST
displayName: 'MO Python API Tests - Import PyTorch model from memory'
- script: |
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt


@@ -31,14 +31,6 @@ pr:
- 'tools/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
variables:
- group: github
@@ -54,34 +46,18 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
PYTHON_ARM_VERSION: "3.10.6"
PYTHON_EXEC: "python3.10"
OPENVINO_ARCH: 'aarch64'
NUM_PROC: 1
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
ONETBB_REPO_DIR: $(OPENVINO_CONTRIB_REPO_DIR)/../oneTBB
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_OPENCV: $(WORK_DIR)/build_opencv
BUILD_ONETBB: $(WORK_DIR)/build_onetbb
BUILD_OPENVINO: $(WORK_DIR)/build
BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
CROSSENV_DIR: $(WORK_DIR)/cross_env
INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_ONETBB: $(WORK_DIR)/build/extras/oneTBB
INSTALL_ONETBB_PACKAGE: $(INSTALL_OPENVINO)/extras/oneTBB
INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
WORK_DIR: $(Pipeline.Workspace)/_w
SHARE_DIR: /mount/cinfsshare/onnxtestdata
TMP_DIR: /mnt/tmp
OPENVINO_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64
OPENCV_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_opencv
ONETBB_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_onetbb
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -91,7 +67,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -121,95 +97,89 @@ jobs:
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
mkdir -p $(BUILD_ONETBB) $(BUILD_OPENCV) $(BUILD_OPENVINO) $(BUILD_OPENVINO_PYTHON) $(BUILD_PYTHON)
mkdir -p $(INSTALL_ONETBB) $(INSTALL_ONETBB_PACKAGE) $(INSTALL_OPENVINO) $(INSTALL_PYTHON) $(INSTALL_OPENCV)
mkdir -p $(BUILD_OPENVINO)
mkdir -p $(INSTALL_OPENVINO)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(SHARE_DIR)
sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(SHARE_DIR) -o vers=4,minorversion=1,sec=sys
mkdir -p $(OPENVINO_CCACHE_DIR)
mkdir -p $(OPENCV_CCACHE_DIR)
mkdir -p $(ONETBB_CCACHE_DIR)
displayName: 'Make directories'
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
$(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin/scripts/install_build_dependencies.sh
python3 -m pip install --upgrade pip
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
env:
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
USE_CCACHE: 1
OPENCV_CCACHE_DIR: $(OPENCV_CCACHE_DIR)
ONETBB_CCACHE_DIR: $(ONETBB_CCACHE_DIR)
PYTHON_ARM_VERSION: $(PYTHON_ARM_VERSION)
NUM_PROC: $(NUM_PROC)
BUILD_PYTHON: $(BUILD_PYTHON)
WORK_DIR: $(WORK_DIR)
INSTALL_PYTHON: $(INSTALL_PYTHON)
BUILD_TYPE: $(BUILD_TYPE)
OPENVINO_REPO_DIR: $(OPENVINO_REPO_DIR)
BUILD_ONETBB: $(BUILD_ONETBB)
INSTALL_ONETBB: $(INSTALL_ONETBB)
INSTALL_OPENCV: $(INSTALL_OPENCV)
PYTHON_EXEC: $(PYTHON_EXEC)
ONETBB_REPO_DIR: $(ONETBB_REPO_DIR)
OPENCV_REPO_DIR: $(OPENCV_REPO_DIR)
BUILD_OPENCV: $(BUILD_OPENCV)
INSTALL_OPENVINO: $(INSTALL_OPENVINO)
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons crossbuild-essential-arm64
# generic dependencies
sudo -E apt --assume-yes install cmake ccache
# Speed up build
sudo -E apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
displayName: 'Install dependencies'
- script: |
set -e
/usr/local/bin/$(PYTHON_EXEC) -m pip install -U pip
/usr/local/bin/$(PYTHON_EXEC) -m pip install crossenv
/usr/local/bin/$(PYTHON_EXEC) -m crossenv $(INSTALL_PYTHON)/bin/$(PYTHON_EXEC) $(CROSSENV_DIR)
source $(CROSSENV_DIR)/bin/activate
build-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
cross-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
displayName: 'Create crossenv'
git submodule update --init -- $(OPENVINO_REPO_DIR)/src/plugins
git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/gtest
displayName: 'Init submodules for non Conan dependencies'
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_PYTHON=OFF
-DENABLE_TESTS=ON
-DENABLE_DATA=OFF
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DTHREADING=TBB
-DTBB_DIR=$(INSTALL_ONETBB)/lib/cmake/TBB
-DCMAKE_VERBOSE_MAKEFILE=ON
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC)
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
-S $(OPENVINO_REPO_DIR)
- script: |
python3 -m pip install conan
# generate build profile
conan profile detect
# generate host profile for linux_arm64
echo "include(default)" > $(BUILD_OPENVINO)/linux_arm64
echo "[buildenv]" >> $(BUILD_OPENVINO)/linux_arm64
echo "CC=aarch64-linux-gnu-gcc" >> $(BUILD_OPENVINO)/linux_arm64
echo "CXX=aarch64-linux-gnu-g++" >> $(BUILD_OPENVINO)/linux_arm64
# install OpenVINO dependencies
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_C_COMPILER_LAUNCHER=ccache
conan install $(OPENVINO_REPO_DIR)/conanfile.txt \
-pr:h $(BUILD_OPENVINO)/linux_arm64 \
-s:h arch=armv8 \
-of $(BUILD_OPENVINO) \
-b missing
env:
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Install conan and dependencies'
- script: |
source $(BUILD_OPENVINO)/conanbuild.sh
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
-DENABLE_CPPLINT=OFF \
-DENABLE_PYTHON=OFF \
-DENABLE_TESTS=ON \
-DENABLE_DATA=OFF \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_PUGIXML=ON \
-DCMAKE_TOOLCHAIN_FILE=$(BUILD_OPENVINO)/conan_toolchain.cmake \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-S $(OPENVINO_REPO_DIR) \
-B $(BUILD_OPENVINO)
displayName: 'CMake OpenVINO ARM plugin'
source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
displayName: 'CMake configure'
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE)
env:
@@ -217,38 +187,13 @@ jobs:
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Build OpenVINO ARM plugin'
displayName: 'Build OpenVINO Runtime'
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
displayName: 'Install OpenVINO ARM plugin'
- script: |
source $(CROSSENV_DIR)/bin/activate
cmake \
-GNinja \
-DENABLE_PYTHON=ON \
-DENABLE_WHEEL=ON \
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake \
-DOpenVINODeveloperPackage_DIR=$(BUILD_OPENVINO) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-S $(OPENVINO_REPO_DIR)/src/bindings/python \
-B $(BUILD_OPENVINO_PYTHON)
deactivate
displayName: 'CMake OpenVINO python binding'
- script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --config $(BUILD_TYPE)
env:
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Build OpenVINO python binding'
- script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --target install
displayName: 'Install OpenVINO python binding'
displayName: 'Install OpenVINO Runtime'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: $(Build.ArtifactStagingDirectory)
ArtifactName: 'openvino_aarch64_linux'
displayName: 'Publish OpenVINO AArch64 linux package'
displayName: 'Publish OpenVINO Runtime for ARM'
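The conan-based cross-compilation flow above boils down to: detect a build profile, write a small host profile that points at the aarch64 cross toolchain, then resolve dependencies against that host profile. A minimal sketch (profile contents taken from the pipeline; the final `conan install` is shown commented out so the sketch runs without conan or network access):

```shell
#!/bin/sh
set -e

# Sketch: generate a conan host profile for linux_arm64, as the pipeline does.
BUILD_DIR="$(mktemp -d)"

{
  echo "include(default)"
  echo "[buildenv]"
  echo "CC=aarch64-linux-gnu-gcc"
  echo "CXX=aarch64-linux-gnu-g++"
} > "${BUILD_DIR}/linux_arm64"

cat "${BUILD_DIR}/linux_arm64"

# With conan 2.x installed, dependencies would then be resolved against this
# host profile (the build profile comes from `conan profile detect`):
#   conan profile detect
#   conan install conanfile.txt \
#     -pr:h "${BUILD_DIR}/linux_arm64" \
#     -s:h arch=armv8 \
#     -of "${BUILD_DIR}" \
#     -b missing
```

The `-pr:h`/`-s:h` pair keeps build-machine settings separate from the target (host) settings, which is what makes the same recipe set work for cross builds.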


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github
@@ -59,7 +60,7 @@ jobs:
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/setupvars.sh
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -69,7 +70,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -102,7 +103,6 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
@@ -118,19 +118,17 @@ jobs:
- checkout: testdata
clean: 'true'
fetchDepth: '1'
lfs: 'true'
path: testdata
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DENABLE_CPPLINT=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=ON
-DENABLE_PROFILING_ITT=ON
-DSELECTIVE_BUILD=COLLECT
@@ -154,11 +152,10 @@ jobs:
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-DSELECTIVE_BUILD=ON
-DSELECTIVE_BUILD_STAT=$(BUILD_DIR)/*.csv
-S $(REPO_DIR)
-B $(BUILD_DIR)
-S $(REPO_DIR)
displayName: 'CMake CC ON'
- script: cmake --build $(BUILD_DIR) --parallel --config $(BUILD_TYPE) --target openvino_intel_cpu_plugin openvino_ir_frontend
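The two CMake@1 steps above are the two stages of OpenVINO's conditional-compilation flow: stage one configures with `SELECTIVE_BUILD=COLLECT` and ITT profiling to gather usage statistics, stage two reconfigures with `SELECTIVE_BUILD=ON` pointing at the collected `*.csv`. A dry-run sketch (paths are hypothetical; the commands are echoed, not executed, since they need the OpenVINO tree):

```shell
#!/bin/sh
set -e
# Hypothetical paths for illustration only.
REPO_DIR=/path/to/openvino
BUILD_DIR=/path/to/build

# Stage 1: instrumented configure that collects ITT statistics.
STAGE1="cmake -GNinja -DENABLE_PROFILING_ITT=ON -DSELECTIVE_BUILD=COLLECT -S $REPO_DIR -B $BUILD_DIR"
# ...build, then run the target workload to produce $BUILD_DIR/*.csv...

# Stage 2: rebuild compiling only the code paths recorded in the csv files.
STAGE2="cmake -GNinja -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=$BUILD_DIR/*.csv -S $REPO_DIR -B $BUILD_DIR"

echo "$STAGE1"
echo "$STAGE2"
```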


@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -33,7 +33,7 @@ jobs:
SHARE_DIR: /mount/cinfsshare/onnxtestdata
CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_coverity
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -43,7 +43,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -82,13 +82,11 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
@@ -108,10 +106,9 @@ jobs:
inputs:
# Coverity has too many PARSE_ERROR errors with ENABLE_FASTER_BUILD=ON. Disabling FASTER_BUILD.
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DENABLE_CPPLINT=OFF
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=OFF
-DENABLE_STRICT_DEPENDENCIES=OFF
-DBUILD_nvidia_plugin=OFF


@@ -42,11 +42,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
jobs:
- job: CUDAPlugin_Lin
@@ -100,13 +102,11 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
@@ -129,7 +129,7 @@ jobs:
python3 -m pip install -r /root/repos/openvino/src/bindings/python/requirements.txt &&
cmake -GNinja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DDENABLE_CPPLINT=OFF \
-DENABLE_CPPLINT=OFF \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-DOPENVINO_EXTRA_MODULES=/root/repos/openvino_contrib/modules/nvidia_plugin \
-DENABLE_INTEL_CPU=OFF \


@@ -34,7 +34,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Lin_Debian
@@ -102,14 +102,13 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- script: |
set -e
sudo -E $(REPO_DIR)/install_build_dependencies.sh
# 'clang' compiler is to check that samples can be built using it
# 'clang' is used as a default compiler
sudo apt --assume-yes install clang
sudo apt --assume-yes install --no-install-recommends libopencv-imgproc-dev libopencv-imgcodecs-dev
# For opencv-python: python3-setuptools and pip upgrade
@@ -143,7 +142,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
clean: 'true'
fetchDepth: '1'
lfs: 'true'
path: testdata
@@ -161,6 +159,7 @@ jobs:
-DENABLE_TESTS=ON
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_SYSTEM_SNAPPY=ON
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
@@ -263,9 +262,9 @@ jobs:
sudo apt-get install --no-install-recommends gnupg wget -y
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2022 focal main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2022.list
sudo apt-get install openvino -y || exit 1
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get install openvino-2023.0.1 -y || exit 1
# install our local one and make sure the conflicts are resolved
sudo apt-get install --no-install-recommends dpkg-dev -y
rm -r _CPack_Packages
@@ -283,13 +282,13 @@ jobs:
displayName: 'Clean build dir'
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR)
displayName: 'Build cpp samples'
displayName: 'Build cpp samples - gcc'
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR)
displayName: 'Build cpp samples - clang'
env:
CC: clang
CXX: clang++
displayName: 'Build cpp samples - clang'
- script: $(SAMPLES_INSTALL_DIR)/c/build_samples.sh -i $(INSTALL_DIR)
displayName: 'Build c samples'
@@ -306,11 +305,12 @@ jobs:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend Tests'
- script: |
$(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
- script: $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'Paddle Frontend UT'
enabled: 'false'
- script: $(INSTALL_TEST_DIR)/ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Tensorflow.xml
env:
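The apt step earlier in this file switches the repository line from the 2022 `focal` feed to the 2023 `ubuntu20` feed and pins `openvino-2023.0.1`. A sketch of how that source entry is composed (`DISTRO` is an assumption matching the pipeline's Ubuntu 20.04 agents; the privileged commands are shown commented out):

```shell
#!/bin/sh
set -e
# Compose the apt source entry the step writes for the OpenVINO 2023 repo.
DISTRO=ubuntu20
ENTRY="deb https://apt.repos.intel.com/openvino/2023 ${DISTRO} main"
echo "$ENTRY"

# On a real agent this is followed by (requires root and network access):
#   echo "$ENTRY" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
#   sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
#   sudo apt-get install openvino-2023.0.1 -y
```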


@@ -4,7 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
# ref: releases/2023/0
jobs:
- job: Lin_lohika
@@ -30,7 +30,6 @@ jobs:
# - checkout: self
# clean: 'true'
# fetchDepth: '1'
# submodules: 'true'
# path: openvino
@@ -42,7 +41,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
# - checkout: testdata
# clean: 'true'
# fetchDepth: '1'
# submodules: 'true'
# lfs: 'true'
# path: testdata


@@ -91,7 +91,6 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino


@@ -56,7 +56,7 @@ jobs:
ONNXRUNTIME_UTILS: $(REPO_DIR)/.ci/azure/ci_utils/onnxruntime
ONNXRUNTIME_BUILD_DIR: $(ONNXRUNTIME_REPO_DIR)/build
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -66,7 +66,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -101,7 +101,6 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
@@ -171,7 +170,7 @@ jobs:
- script: |
source $(INSTALL_DIR)/setupvars.sh
./onnxruntime_shared_lib_test
./onnxruntime_shared_lib_test --gtest_filter=-CApiTest.test_custom_op_openvino_wrapper_library
workingDirectory: $(ONNXRUNTIME_BUILD_DIR)/RelWithDebInfo
displayName: 'Run onnxruntime_shared_lib_test'


@@ -35,13 +35,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -73,11 +73,11 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
addToPath: true
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- script: |
@@ -100,26 +100,19 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
- checkout: testdata
clean: 'true'
fetchDepth: '1'
lfs: 'true'
path: testdata
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
- script: |
brew install cython
brew install automake
@@ -130,7 +123,8 @@ jobs:
- script: |
export PATH="/usr/local/opt/cython/bin:$PATH"
cmake -GNinja \
cmake \
-G Ninja \
-DENABLE_CPPLINT=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
@@ -143,9 +137,6 @@ jobs:
-DBUILD_nvidia_plugin=OFF \
-S $(REPO_DIR) \
-B $(BUILD_DIR)
env:
CC: gcc
CXX: g++
displayName: 'CMake OpenVINO'
- script: ls -alR $(REPO_DIR)/temp/


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Win
@@ -73,7 +73,7 @@ jobs:
INSTALL_DIR: $(WORK_DIR)\install_pkg
INSTALL_TEST_DIR: $(INSTALL_DIR)\tests
SETUPVARS: $(INSTALL_DIR)\setupvars.bat
PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.10.7\x64
PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.11.2\x64
CMAKE_VERSION: 3.24.0
CMAKE_CMD: $(WORK_DIR)\cmake-$(CMAKE_VERSION)-windows-x86_64\cmake-$(CMAKE_VERSION)-windows-x86_64\bin\cmake.exe
OV_CMAKE_TOOLCHAIN_FILE: $(REPO_DIR)\cmake\toolchains\mt.runtime.win32.toolchain.cmake
@@ -84,26 +84,26 @@ jobs:
- script: |
rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7
rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7\x64
rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2
rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2\x64
rd /Q /S $(BUILD_SAMPLES_DIR) & mkdir $(BUILD_SAMPLES_DIR)
rd /Q /S $(BUILD_SAMPLES_TESTS_DIR) & mkdir $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Make dir'
- script: curl -O https://www.python.org/ftp/python/3.10.7/python-3.10.7-amd64.exe
- script: curl -O https://www.python.org/ftp/python/3.11.2/python-3.11.2-amd64.exe
displayName: 'Download Python'
workingDirectory: $(WORK_DIR)
- script: |
python-3.10.7-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.10.7\x64
cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.10.7\x64.complete
python-3.11.2-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.11.2\x64
cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.11.2\x64.complete
displayName: 'Install Python'
workingDirectory: $(WORK_DIR)
- task: UsePythonVersion@0
displayName: 'Use Python'
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
disableDownloadFromRegistry: true
- script: |
@@ -122,19 +122,16 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino_contrib
- checkout: testdata
clean: 'true'
fetchDepth: '1'
lfs: 'true'
path: testdata
@@ -145,7 +142,8 @@ jobs:
python -m pip install -r $(REPO_DIR)\src\bindings\python\wheel\requirements-dev.txt
python -m pip install -r $(REPO_DIR)\src\bindings\python\requirements.txt
rem For running Paddle frontend unit tests
python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
rem For running ONNX frontend unit tests
python -m pip install -r $(REPO_DIR)\src\frontends\onnx\tests\requirements.txt
rem For running TensorFlow frontend unit tests
@@ -168,20 +166,21 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" ^
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) ^
-G "Ninja Multi-Config" ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) ^
-DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) ^
-DENABLE_FASTER_BUILD=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_TESTS=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
-DENABLE_STRICT_DEPENDENCIES=OFF ^
-DENABLE_PYTHON=ON ^
-DBUILD_nvidia_plugin=OFF ^
-DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.10.7\x64\python.exe" ^
-DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.10.7\x64\include" ^
-DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.10.7\x64\libs\python310.lib" ^
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" ^
-DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.11.2\x64\python.exe" ^
-DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.11.2\x64\include" ^
-DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.11.2\x64\libs\python311.lib" ^
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules ^
-DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
-DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
@@ -269,8 +268,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ONNXFrontend.xml
displayName: 'ONNX Frontend Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Paddle.xml
displayName: 'Paddle Frontend UT'
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Paddle.xml
# displayName: 'Paddle Frontend UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Tensorflow.xml
displayName: 'TensorFlow Frontend Unit Tests'
@@ -305,6 +305,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_auto_batch_unit_tests.xml
displayName: 'AutoBatch UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_template_func_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
@@ -314,16 +317,10 @@ jobs:
- script: |
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceEngineCAPITests.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'IE CAPITests'
- script: |
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_capi_test.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'OV CAPITests'
- task: PublishTestResults@2


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github
@@ -65,11 +66,11 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
addToPath: true
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- script: |
@@ -78,6 +79,8 @@ jobs:
python --version
where java
java -version
where cmake
cmake --version
wmic computersystem get TotalPhysicalMemory
wmic cpu list
wmic logicaldisk get description,name
@@ -93,7 +96,6 @@ jobs:
- checkout: self
clean: 'true'
fetchDepth: '1'
submodules: 'true'
path: openvino
@@ -107,15 +109,15 @@ jobs:
- checkout: testdata
clean: 'true'
lfs: 'true'
fetchDepth: '1'
path: testdata
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja ^
call "$(MSVS_VARS_PATH)" && cmake ^
-G Ninja ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
-DENABLE_PLUGINS_XML=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_PROFILING_ITT=ON ^
@@ -147,12 +149,11 @@ jobs:
displayName: 'List csv files'
- script: |
call "$(MSVS_VARS_PATH)" && cmake -G"Visual Studio 16 2019" ^
call "$(MSVS_VARS_PATH)" && cmake ^
-G "Visual Studio 16 2019" ^
-DVERBOSE_BUILD=ON ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_PROFILING_ITT=OFF ^
-DSELECTIVE_BUILD=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^


@@ -1,4 +1,4 @@
FROM ubuntu:22.04
FROM ubuntu:23.04
LABEL version=2021.03.30.1
@@ -38,6 +38,7 @@ RUN apt-get update && apt-get -y --no-install-recommends install \
python3 \
python3-pip \
python3-dev \
pybind11-dev \
python3-virtualenv \
cython3 \
tox && \
@@ -71,5 +72,5 @@ RUN ninja install
WORKDIR /openvino/src/bindings/python
ENV OpenVINO_DIR=/openvino/dist/runtime/cmake
ENV LD_LIBRARY_PATH=/openvino/dist/runtime/lib/intel64:/openvino/dist/runtime/3rdparty/tbb/lib
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.10:${PYTHONPATH}
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.11:${PYTHONPATH}
CMD tox
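The `PYTHONPATH` change above hard-codes the Python minor version (`python3.10` → `python3.11`), which is what this diff has to keep bumping. A sketch of deriving that subdirectory from the interpreter instead (paths are the Dockerfile's own; `BUILD_TYPE` defaults to `Release` here as an assumption):

```shell
#!/bin/sh
set -e
# Derive the python_api subdirectory from the running interpreter version,
# instead of hard-coding python3.10 / python3.11.
BUILD_TYPE="${BUILD_TYPE:-Release}"
PYVER="python3.$(python3 -c 'import sys; print(sys.version_info.minor)')"
export PYTHONPATH="/openvino/bin/intel64/${BUILD_TYPE}/python_api/${PYVER}:${PYTHONPATH}"
echo "$PYTHONPATH"
```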


@@ -1,5 +1,5 @@
---
name: Bug
name: Bug
about: Create a report to help us improve
title: "[Bug]"
labels: bug, support_request
@@ -8,19 +8,28 @@ assignees: ''
---
##### System information (version)
<!-- Example
- OpenVINO => 2020.4
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
- Problem classification: Model Conversion
<!-- Please use this template to submit a new issue and provide all the necessary information to expedite the response.
Example
- OpenVINO Source => Runtime /pip install / GitHub
- OpenVINO Version => Version 2022.3 / Github Master Branch / tag 2023.0
- Operating System / Platform => Windows 64 Bit / Ubuntu 20
- Compiler => Visual Studio 2017 / Cmake
- Problem classification: Model Conversion /Accuracy/TensorFlow FE
- Device use: CPU / GPU / HDDL
- Framework: TensorFlow (if applicable)
- Model name: ResNet50 (if applicable)
- Model name: ResNet50 and the link to pre-train modal (if applicable)
Please provide us with the link to your model or attach .zip file.
-->
- OpenVINO=> :grey_question:
- OpenVINO Source=> :grey_question:
- OpenVINO Version=> :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:
- Problem classification => :grey_question:
- Device use: => :grey_question:
- Framework => :grey_question:
- Model name => :grey_question:
##### Detailed description
<!-- your description -->

.github/dependabot.yml

@@ -6,7 +6,7 @@ updates:
# Python product dependencies
#
# Python API requirements
# Python API, Frontends
- package-ecosystem: pip
directory: "/src/bindings/python/"
schedule:
@@ -17,12 +17,33 @@ updates:
assignees:
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
- "ceciliapeng2011"
- "meiyang-intel"
- "mbencer"
- "tomdol"
- "jane-intel"
versioning-strategy: increase-if-necessary
# Tests
- package-ecosystem: pip
directory: "/tests"
schedule:
interval: "daily"
time: "09:00"
timezone: "Poland"
open-pull-requests-limit: 3
assignees:
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary
# Model Optimizer requirements
# Model Optimizer, openvino_dev and Benchmark tool
- package-ecosystem: pip
directory: "/tools/mo"
directory: "/tools"
schedule:
interval: "daily"
time: "09:00"
@@ -33,6 +54,8 @@ updates:
- "andrei-kochin"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "Wovchena"
allow:
- dependency-name: "*"
dependency-type: "production"
@@ -51,89 +74,10 @@ updates:
- "KodiaqQ"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary
# benchmark_tool requirements
- package-ecosystem: pip
directory: "/tools/benchmark_tool"
schedule:
interval: "daily"
time: "09:00"
timezone: "Asia/Dubai"
open-pull-requests-limit: 3
assignees:
- "Wovchena"
- "jiwaszki"
- "p-wysocki"
- "rkazants"
versioning-strategy: increase-if-necessary
#
# Tests requirements for frontends
#
# PaddlePaddle FE tests requirements
- package-ecosystem: pip
directory: "/src/frontends/paddle/tests/"
schedule:
interval: "daily"
time: "09:00"
timezone: "Asia/Shanghai"
open-pull-requests-limit: 3
assignees:
- "ceciliapeng2011"
- "meiyang-intel"
- "jiwaszki"
- "p-wysocki"
- "rkazants"
versioning-strategy: increase-if-necessary
# ONNX FE tests requirements
- package-ecosystem: pip
directory: "/src/frontends/onnx/tests/"
schedule:
interval: "daily"
time: "09:00"
timezone: "Poland"
open-pull-requests-limit: 3
assignees:
- "mbencer"
- "tomdol"
- "jiwaszki"
- "p-wysocki"
- "rkazants"
versioning-strategy: increase-if-necessary
# TensorFlow FE tests requirements
- package-ecosystem: pip
directory: "/src/frontends/tensorflow/tests/"
schedule:
interval: "daily"
time: "09:00"
timezone: "Asia/Dubai"
open-pull-requests-limit: 3
assignees:
- "rkazants"
- "jiwaszki"
- "p-wysocki"
versioning-strategy: increase-if-necessary
# TensorFlow Lite FE tests requirements
- package-ecosystem: pip
directory: "/src/frontends/tensorflow_lite/tests/"
schedule:
interval: "daily"
time: "09:00"
timezone: "Asia/Dubai"
open-pull-requests-limit: 3
assignees:
- "jane-intel"
- "rkazants"
- "jiwaszki"
- "p-wysocki"
versioning-strategy: increase-if-necessary
#
# Python Samples
#
@@ -149,6 +93,7 @@ updates:
- "Wovchena"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary
@@ -163,6 +108,7 @@ updates:
- "Wovchena"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary
@@ -177,6 +123,7 @@ updates:
- "Wovchena"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary
@@ -191,6 +138,7 @@ updates:
- "Wovchena"
- "jiwaszki"
- "p-wysocki"
- "akuporos"
- "rkazants"
versioning-strategy: increase-if-necessary


@@ -11,7 +11,7 @@ env:
DOXYREST_VER: '2.1.3'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
group: ${{ github.workflow }}-${{ github.head_ref && github.ref || github.run_id }}
cancel-in-progress: true
jobs:
@@ -25,7 +25,7 @@ jobs:
lfs: true
- name: Install apt-get dependencies
uses: awalsh128/cache-apt-pkgs-action@v1.1.3
uses: awalsh128/cache-apt-pkgs-action@v1.3.0
with:
packages: graphviz texlive liblua5.2-0 libclang1-9 libclang-cpp9
version: 3.0
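The concurrency-group change above uses the expression `github.head_ref && github.ref || github.run_id`: on pull requests `head_ref` is non-empty, so all runs for the same ref share a group and cancel each other; on pushes `head_ref` is empty, so the unique `run_id` keeps every run in its own group. A shell emulation of that ternary (inputs are made-up sample values):

```shell
#!/bin/sh
set -e
# Emulates: ${{ github.head_ref && github.ref || github.run_id }}
concurrency_suffix() {
  head_ref="$1"; ref="$2"; run_id="$3"
  if [ -n "$head_ref" ]; then
    echo "$ref"      # PR event: group by ref, cancel superseded runs
  else
    echo "$run_id"   # push event: unique group, never cancelled
  fi
}

concurrency_suffix "feature-x" "refs/pull/42/merge" "1001"   # -> refs/pull/42/merge
concurrency_suffix "" "refs/heads/master" "1002"             # -> 1002
```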


@@ -30,6 +30,13 @@ jobs:
submodules: recursive
lfs: true
- name: Install OpenCL
uses: awalsh128/cache-apt-pkgs-action@v1.3.0
if: runner.os == 'Linux'
with:
packages: ocl-icd-opencl-dev opencl-headers
version: 3.0
- name: CMake configure
run: cmake -DCMAKE_BUILD_TYPE=Release -B build


@@ -85,8 +85,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
sudo apt --assume-yes install clang-14 libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt


@@ -30,7 +30,7 @@ jobs:
python-version: '3.10'
- name: Cache pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('tools/mo/requirements*.txt') }}

.gitignore

@@ -26,6 +26,7 @@ temp/
.repo/
CMakeLists.txt.user
docs/IE_PLUGIN_DG/html/
CMakeUserPresets.json
*.project
*.cproject
@@ -57,3 +58,5 @@ __pycache__
/tools/mo/*.mapping
/tools/mo/*.dat
/tools/mo/*.svg
/src/plugins/intel_cpu/tools/commit_slider/*.json
/src/plugins/intel_cpu/tools/commit_slider/slider_cache/*

.gitmodules

@@ -66,3 +66,9 @@
[submodule "thirdparty/flatbuffers/flatbuffers"]
path = thirdparty/flatbuffers/flatbuffers
url = https://github.com/google/flatbuffers.git
[submodule "thirdparty/snappy"]
path = thirdparty/snappy
url = https://github.com/google/snappy.git
[submodule "ARMComputeLibrary"]
path = src/plugins/intel_cpu/thirdparty/ComputeLibrary
url = https://github.com/ARM-software/ComputeLibrary.git


@@ -17,12 +17,12 @@ else()
endif()
endif()
project(OpenVINO DESCRIPTION "OpenVINO toolkit")
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
if(POLICY CMP0091)
cmake_policy(SET CMP0091 NEW) # Enables use of MSVC_RUNTIME_LIBRARY
endif()
project(OpenVINO DESCRIPTION "OpenVINO toolkit")
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"
NO_CMAKE_FIND_ROOT_PATH
@@ -39,19 +39,34 @@ if(ENABLE_COVERAGE)
endif()
# resolving dependencies for the project
message (STATUS "PROJECT ............................... " ${PROJECT_NAME})
message (STATUS "CMAKE_VERSION ......................... " ${CMAKE_VERSION})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "CMAKE_SOURCE_DIR ...................... " ${CMAKE_SOURCE_DIR})
message (STATUS "OpenVINO_SOURCE_DIR ................... " ${OpenVINO_SOURCE_DIR})
message (STATUS "OpenVINO_BINARY_DIR ................... " ${OpenVINO_BINARY_DIR})
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_CXX_COMPILER_ID ................. " ${CMAKE_CXX_COMPILER_ID})
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
message (STATUS "GLIBC_VERSION.......................... " ${OV_GLIBC_VERSION})
if(OV_GENERATOR_MULTI_CONFIG)
string(REPLACE ";" " " config_types "${CMAKE_CONFIGURATION_TYPES}")
message (STATUS "CMAKE_CONFIGURATION_TYPES ............. " ${config_types})
unset(config_types)
if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
message (STATUS "CMAKE_DEFAULT_BUILD_TYPE .............. " ${CMAKE_DEFAULT_BUILD_TYPE})
endif()
else()
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
endif()
if(CMAKE_GENERATOR_PLATFORM)
message (STATUS "CMAKE_GENERATOR_PLATFORM .............. " ${CMAKE_GENERATOR_PLATFORM})
endif()
if(CMAKE_GENERATOR_TOOLSET)
message (STATUS "CMAKE_GENERATOR_TOOLSET ............... " ${CMAKE_GENERATOR_TOOLSET})
endif()
if(CMAKE_TOOLCHAIN_FILE)
message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
endif()
if(NOT OV_GLIBC_VERSION VERSION_EQUAL 0.0)
message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
endif()
# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/ngraphTargets.cmake")


@@ -1,55 +1,88 @@
# How to contribute to the OpenVINO repository
# Contributing to OpenVINO
We welcome community contributions to OpenVINO™. Read the following guide to learn how to find ideas for contribution, follow good pull request practices, check your changes with our tests, and more.
## How to contribute to the OpenVINO project
OpenVINO™ is always looking for opportunities to improve, and your contributions
play a big role in this process. There are several ways you can make the
product better:
## Before you start contributing, you should:
### Provide Feedback
- Make sure you agree to contribute your code under [OpenVINO™ (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- Figure out what you're going to contribute. If you don't know what to work on, browse the [Github "Issues" tab](https://github.com/openvinotoolkit/openvino/issues). Make sure no one is already working on the issue; if someone is, you can offer support or suggestions in the issue or the linked pull request.
- If you are going to fix a bug, check that it still exists in the latest release. You can do this by building the latest master branch and making sure the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, such as 2020.2 (more details about the [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
* **Report bugs / issues**
If you experience faulty behavior in OpenVINO or its components, you can
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
in the GitHub issue tracker.
* **Propose new features / improvements**
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
If your idea is already well defined, you can also create a
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
If your points are especially well aligned with the product vision, they will be included in the
[development roadmap](./ROADMAP.md).
User feedback is crucial for OpenVINO development and even if your input is not immediately prioritized,
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
### Contribute Code Changes
* **Fix Bugs or Develop New Features**
If you want to help improve OpenVINO, choose one of the issues reported in
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
If the feature you want to develop is more complex or not well defined by the reporter,
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
with OpenVINO developers first. Before creating a new PR, check if nobody is already
working on it. In such a case, you may still help, having aligned with the other developer.
Importantly, always check that the change has not already been implemented before you start working on it!
You can build OpenVINO using the latest master branch and make sure that it still needs your
changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
* **Develop a New Device Plugin**
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
### Improve documentation
* **OpenVINO developer documentation** is contained entirely in this repository, under the
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
* **User documentation** is built from several sources and published at
[docs.openvino.ai](https://docs.openvino.ai), which is the recommended place for reading
these documents. Use the files maintained in this repository only for editing purposes.
* The easiest way to help with documentation is to review it and provide feedback on the
existing articles. Whether you notice a mistake, see the possibility of improving the text,
or think more information should be added, you can reach out to any of the documentation
contributors to discuss the potential changes.
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
## "Fork & Pull Request model" for code contribution
### Promote and Support OpenVINO
### The instruction in brief
* **Popularize OpenVINO**
Articles, tutorials, blog posts, demos, videos, and any other involvement
in the OpenVINO community is always a welcome contribution. If you discuss
or present OpenVINO on various social platforms, you are raising awareness
of the product among A.I. enthusiasts and enabling other people to discover
the toolkit. Feel free to reach out to OpenVINO developers if you need help
with making such community-based content.
- Register at GitHub. Create your fork of OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in a Git configuration according to GitHub account (see [https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It could be a bugfix or some new code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (with a meaningful name) from the base branch you chose.
- Modify / add the code following our [Coding Style Guide](./docs/dev/coding_style.md).
- If you want to add a new sample, please look at this [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
- If you want to contribute to the documentation and add a new guide, follow the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
- Run the test suite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`), and push your branch to your GitHub fork; then create a pull request from your branch to the base branch (see [https://help.github.com/articles/using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- One PR, one issue.
- Make sure your changes build cleanly on your local system.
- Choose the right base branch (see [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Update the documentation following the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation), if needed.
- Cover your changes with tests.
- Add a license at the top of new files: [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add enough information: a meaningful title, the reason for the change, and a link to the issue page if one exists.
- Remove changes unrelated to the PR.
- If the work is still in progress and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
* **Help Other Community Members**
If you are an experienced OpenVINO user and want to help, you can always
share your expertise with the community. Check GitHub Discussions and
Issues to see if you can help someone.
## Testing and merging pull requests
## License
Your pull request will be automatically tested by OpenVINO™'s precommit (testing statuses are reported as "green" or "red" circles in the precommit steps on the PR page). If any builders fail, you need to fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!
## Merging PR
When the reviewer accepts the pull request and the pre-commit shows a "green" status, the review status is set to "Approved", which signals to the OpenVINO™ maintainers that they can merge your pull request.
By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.

CONTRIBUTING_DOCS.md (new file)

@@ -0,0 +1,111 @@
# OpenVINO Documentation Guide
## Basic article structure
OpenVINO documentation is built with Sphinx from reStructuredText sources,
so its basic formatting rules apply:
### White Spaces
OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.
### Headings (including the article title)
Headings are created by "underlining" text with punctuation marks (using at least
as many marks as there are characters in the heading). We use the following convention:
```
H1
====================
H2
####################
H3
++++++++++++++++++++
H4
--------------------
H5
....................
```
### Line length
In programming, a limit of 80 characters per line is a common best practice. It also
works fairly well for reading natural language. For this reason, we aim for lines of
around 70 to 100 characters. The limit is not a strict rule but rather a guideline to
follow in most cases. The line breaks will not translate to html, and rightly so, but they
will make reading and editing documents in GitHub or an editor much easier.
### Tables
Tables can be difficult to implement well on websites. For example, longer portions
of text, such as descriptions, may make them difficult to read (e.g. due to improper
cell widths or heights). Complex tables may also be difficult to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:
* For very big and complex data sets: use a list instead of a table or remove
the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
use the reStructuredText list table format.
* For medium and small tables, use the reStructuredText grid or simple table formats.
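For example, a medium table written in the reStructuredText list-table format could look like the following sketch (the caption, column widths, and cell contents are illustrative only):

```rst
.. list-table:: Supported model formats (illustrative)
   :widths: 20 80
   :header-rows: 1

   * - Format
     - Description
   * - IR
     - OpenVINO Intermediate Representation, stored as a pair of .xml and .bin files.
   * - ONNX
     - An open interchange format that can be read directly by the runtime.
```

Long descriptions stay on their own source lines, which keeps the table easy to edit even when cells grow.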
## Cross-linking
There are several directives Sphinx uses for linking, each has its purpose and format.
Follow these guidelines for consistent results:
* Avoid absolute references to internal documents as much as possible (link to source, not html).
* Note that Sphinx uses the "back-tick" character and not the "inverted comma" => ` vs. '
* When a file path starts in the same directory, put "./" at its beginning.
* Always add a space before the opening angle bracket ("<") for target files.
Use the following formatting for different links:
* link to an external page / file
* `` `text <url>`__ ``
* use a double underscore for consistency
* link to an internal documentation page / file
* `` :doc:`a docs page <relative file path>` ``
* Link to an rst or md file within our documentation, so that it renders properly in html
* link to a header on the same page
* `` `a header in the same article <this-is-section-header-title>`__ ``
* anchors are created automatically for all existing headers
* such anchor looks like the header, with minor adjustments:
* all letters are lower case,
* remove all special glyphs, like brackets,
* replace spaces with hyphens
* Create an anchor in an article
* `` .. _anchor-in-the-target-article: ``
* put it before the header to which you want to link
* See the rules for naming anchors / labels at the bottom of this article
* link to an anchor on a different page in our documentation
* `` :ref:`the created anchor <anchor-in-the-target-article>` ``
* link to the anchor using just its name
* anchors / labels
Sphinx uses labels to create html anchors, which can be linked to from anywhere in the documentation.
Although labels may be put at the top of any article to make linking to it very easy, we do not use
this approach. Every label definition starts with an underscore; the underscore is not used in links.
Most importantly, every label needs to be globally unique, so it is always good
practice to start a label with a clear identifier of the article it resides in.
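Putting these rules together, a label definition and the links that point to it could look like the following sketch (the article, file, and label names are hypothetical):

```rst
.. _deployment-requirements:

Deployment Requirements
#######################

See the :doc:`installation article <./installation>` for setup steps,
an external reference like `the Sphinx docs <https://www.sphinx-doc.org>`__,
or jump straight to :ref:`the requirements <deployment-requirements>`.
```

Note that the label is defined with a leading underscore but referenced without it, and its name starts with an identifier of the article to keep it globally unique.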

CONTRIBUTING_PR.md (new file)

@@ -0,0 +1,63 @@
# How to Prepare a Good PR
OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.
## General Rules of a Good Pull Request
* Create your own fork of the repository and use it to create PRs.
Avoid creating change branches in the main repository.
* Choose a proper branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
It helps to track changes later. If your changes cover a particular component,
you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
OpenVINO, provide the information in proper articles. You can do it yourself,
or contact one of OpenVINO documentation contributors to work together on
developing the right content.
* For Work In Progress, or checking test results early, use a Draft PR.
## Ensure Change Quality
Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch will
automatically trigger the checks, so you don't need to recreate the PR; just wait
for the updated results.
Regardless of the automated tests, you should ensure the quality of your changes:
* Test your changes locally:
* Make sure to double-check your code.
* Run tests locally to identify and fix potential issues (execute test binaries
from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``)
* Before creating a PR, make sure that your branch is up to date with the latest
state of the branch you want to contribute to (e.g. ``git fetch upstream &&
git merge upstream/master``).
## Branching Policy
* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``.
* The final release each year is considered a Long Term Support version,
which means it remains active.
* Contributions are accepted only to active branches, which are:
* the "master" branch for future releases,
* the most recently published version for fixes,
* LTS versions (for two years from their release dates).
## Need Additional Help? Check these Articles
* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, have a look at the guide for contributing
to C++/C/Python IE samples, and add the license statement at the top of new files
(see the existing C++ and Python samples for examples).


@@ -2,13 +2,14 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino)
[![brew Status](https://img.shields.io/homebrew/v/openvino)](https://formulae.brew.sh/formula/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
[![Anaconda Downloads](https://anaconda.org/conda-forge/openvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
[![brew Downloads](https://img.shields.io/homebrew/installs/dy/openvino)](https://formulae.brew.sh/formula/openvino)
</div>
## Contents:
@@ -69,24 +70,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
@@ -104,22 +105,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -156,14 +157,14 @@ The list of OpenVINO tutorials:
## System requirements
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_raspbian.html)
- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.
See [How to build OpenVINO](./docs/dev/build.md) to get more information about the OpenVINO build process.
## How to contribute
@@ -188,7 +189,6 @@ Report questions, issues and suggestions, using:
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
@@ -196,7 +196,7 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/nightly/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples


@@ -53,7 +53,7 @@ if(THREADING STREQUAL "OMP")
update_deps_cache(OMP "${OMP}" "Path to OMP root folder")
debug_message(STATUS "intel_omp=" ${OMP})
ie_cpack_add_component(omp HIDDEN)
ov_cpack_add_component(omp HIDDEN)
file(GLOB_RECURSE source_list "${OMP}/*${CMAKE_SHARED_LIBRARY_SUFFIX}*")
install(FILES ${source_list}
DESTINATION ${OV_CPACK_RUNTIMEDIR}
@@ -96,11 +96,12 @@ function(ov_download_tbb)
if(WIN32 AND X86_64)
# TODO: add target_path to be platform specific as well, to avoid following if
# build oneTBB 2021.2.1 with Visual Studio 2019 (MSVC 14.21)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "tbb2020_617e9a71_win.zip"
ARCHIVE_WIN "oneapi-tbb-2021.2.2-win.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "01cac3cc48705bd52b83a6e1fa1ed95c708928be76160f5b9c5c37f954d56df4"
SHA256 "103b19a8af288c6a7d83ed3f0d2239c4afd0dd189fc12aad1d34b3c9e78df94b"
USE_NEW_LOCATION TRUE)
elseif(ANDROID AND X86_64)
RESOLVE_DEPENDENCY(TBB
@@ -108,12 +109,13 @@ function(ov_download_tbb)
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce")
elseif(LINUX AND X86_64)
elseif(LINUX AND X86_64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "tbb2020_617e9a71_lin_strip.tgz"
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "e7a38f68059fb36de8b59d40b283a849f26275e34a58d2acadfdb84d49e31b9b"
SHA256 "0a56f73baaa40d72e06949ea6d593ae63a19f7580ce71c08287c1f59d2e5b988"
USE_NEW_LOCATION TRUE)
elseif(YOCTO_AARCH64)
RESOLVE_DEPENDENCY(TBB
@@ -122,11 +124,36 @@ function(ov_download_tbb)
ENVIRONMENT "TBBROOT"
SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b")
elseif(APPLE AND X86_64)
# build oneTBB 2021.2.1 with OS version 11.4
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "tbb2020_617e9a71_mac.tgz"
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "67a44b695bef3348416eaf5bf2baca2b1401576c0e09c394304eba1e0eee96cd"
SHA256 "c57ce4b97116cd3093c33e6dcc147fb1bbb9678d0ee6c61a506b2bfe773232cb"
USE_NEW_LOCATION TRUE)
elseif(WIN32 AND AARCH64)
# build oneTBB 2021.2.1 with Visual Studio 2022 (MSVC 14.35)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win-arm64.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "09fe7f5e7be589aa34ccd20fdfd7cad9e0afa89d1e74ecdb008a75d0af71d6e1"
USE_NEW_LOCATION TRUE)
elseif(LINUX AND AARCH64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "6b87194a845aa9314f3785d842e250d934e545eccc4636655c7b27c98c302c0c"
USE_NEW_LOCATION TRUE)
elseif(APPLE AND AARCH64)
# build oneTBB 2021.2.1 with export MACOSX_DEPLOYMENT_TARGET=11.0
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "15d46ef19501e4315a5498af59af873dbf8180e9a3ea55253ccf7f0c0bb6f940"
USE_NEW_LOCATION TRUE)
else()
message(WARNING "Prebuilt TBB is not available on current platform")
@@ -177,16 +204,18 @@ function(ov_download_tbbbind_2_5)
if(WIN32 AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
ARCHIVE_WIN "tbbbind_2_5_static_win_v2.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
SHA256 "49ae93b13a13953842ff9ae8d01681b269b5b0bc205daf18619ea9a828c44bee"
USE_NEW_LOCATION TRUE)
elseif(LINUX AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
ARCHIVE_LIN "tbbbind_2_5_static_lin_v3.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
SHA256 "d39deb262c06981b5e2d2e3c593e9fc9be62ce4feb91dd4e648e92753659a6b3"
USE_NEW_LOCATION TRUE)
else()
# TMP: for Apple Silicon TBB does not provide TBBBind
if(NOT (APPLE AND AARCH64))
@@ -298,8 +327,8 @@ if(ENABLE_INTEL_GNA)
GNA_LIB_DIR
libGNA_INCLUDE_DIRS
libGNA_LIBRARIES_BASE_PATH)
set(GNA_VERSION "03.00.00.1910")
set(GNA_HASH "894ddbc0ae3459f04513b853b0cabc32890dd4ea37228a022b6a32101bdbb7f8")
set(GNA_VERSION "03.05.00.2116")
set(GNA_HASH "960350567702bda17276ac4c060d7524fb7ce7ced785004bd861c81ff2bfe2c5")
set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
if(WIN32)


@@ -24,7 +24,6 @@ function(set_ci_build_number)
endfunction()
include(features)
include(message)
set_ci_build_number()
@@ -112,10 +111,13 @@ else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type")
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Release;Debug;RelWithDebInfo;MinSizeRel")
if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
if(CMAKE_GENERATOR STREQUAL "Ninja Multi-Config")
# 'Ninja Multi-Config' specific, see:
# https://cmake.org/cmake/help/latest/variable/CMAKE_DEFAULT_BUILD_TYPE.html
set(CMAKE_DEFAULT_BUILD_TYPE "Release" CACHE STRING "CMake default build type")
elseif(NOT OV_GENERATOR_MULTI_CONFIG)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type")
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Release;Debug;RelWithDebInfo;MinSizeRel")
endif()
if(USE_BUILD_TYPE_SUBFOLDER)
@@ -153,10 +155,10 @@ set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
# Support CMake multi-configuration for Visual Studio / Ninja or Xcode build
if (OV_GENERATOR_MULTI_CONFIG)
if(OV_GENERATOR_MULTI_CONFIG)
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
else()
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
@@ -238,7 +240,7 @@ if(ENABLE_LTO)
LANGUAGES C CXX)
if(NOT IPO_SUPPORTED)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optmization" FORCE)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optimization" FORCE)
message(WARNING "IPO / LTO is not supported: ${OUTPUT_MESSAGE}")
endif()
endif()
@@ -248,8 +250,8 @@ endif()
macro(ov_install_static_lib target comp)
if(NOT BUILD_SHARED_LIBS)
get_target_property(target_type ${target} TYPE)
if(${target_type} STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL FALSE)
if(target_type STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})


@@ -4,61 +4,105 @@
if(WIN32)
set(PROGRAMFILES_ENV "ProgramFiles(X86)")
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
set(UWP_SDK_PATH "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64")
message(STATUS "Trying to find apivalidator in: ${UWP_SDK_PATH}")
find_host_program(UWP_API_VALIDATOR
NAMES apivalidator
PATHS "${UWP_SDK_PATH}"
DOC "ApiValidator for UWP compliance")
# check that PROGRAMFILES_ENV is defined, because in case of cross-compilation for Windows
# we don't have such variable
if(DEFINED ENV{${PROGRAMFILES_ENV}})
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
if(UWP_API_VALIDATOR)
message(STATUS "Found apivalidator: ${UWP_API_VALIDATOR}")
set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")
message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()
find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")
if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
endif()
endif()
endif()
function(_ie_add_api_validator_post_build_step_recursive)
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
get_target_property(IS_IMPORTED ${API_VALIDATOR_TARGET} IMPORTED)
if(IS_IMPORTED)
return()
endif()
get_target_property(LIBRARY_TYPE ${API_VALIDATOR_TARGET} TYPE)
if(LIBRARY_TYPE STREQUAL "EXECUTABLE" OR LIBRARY_TYPE STREQUAL "SHARED_LIBRARY")
get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
if(LINKED_LIBRARIES)
foreach(ITEM IN LISTS LINKED_LIBRARIES)
if(NOT TARGET ${ITEM})
continue()
endif()
get_target_property(LIBRARY_TYPE_DEPENDENCY ${ITEM} TYPE)
if(LIBRARY_TYPE_DEPENDENCY STREQUAL "SHARED_LIBRARY")
_ie_add_api_validator_post_build_step_recursive(TARGET ${ITEM})
endif()
endforeach()
endif()
if(LIBRARY_TYPE MATCHES "^(SHARED_LIBRARY|MODULE_LIBRARY|EXECUTABLE)$" AND
NOT ${API_VALIDATOR_TARGET} IN_LIST API_VALIDATOR_TARGETS)
list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
endif()
# keep checks target list to track cyclic dependencies, leading to infinite recursion
list(APPEND checked_targets ${API_VALIDATOR_TARGET})
if(NOT LIBRARY_TYPE STREQUAL "INTERFACE_LIBRARY")
get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
else()
set(LINKED_LIBRARIES)
endif()
get_target_property(INTERFACE_LINKED_LIBRARIES ${API_VALIDATOR_TARGET} INTERFACE_LINK_LIBRARIES)
foreach(library IN LISTS LINKED_LIBRARIES INTERFACE_LINKED_LIBRARIES)
if(TARGET "${library}")
get_target_property(orig_library ${library} ALIASED_TARGET)
if(orig_library IN_LIST checked_targets OR library IN_LIST checked_targets)
# in case of cyclic dependencies, we need to skip current target
continue()
endif()
if(TARGET "${orig_library}")
_ie_add_api_validator_post_build_step_recursive(TARGET ${orig_library})
else()
_ie_add_api_validator_post_build_step_recursive(TARGET ${library})
endif()
endif()
endforeach()
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
endfunction()
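The recursive collection above guards against cyclic link dependencies through the `checked_targets` list. The same idea in a minimal, self-contained Python sketch (the target names and the link graph are hypothetical, used only to illustrate the traversal):

```python
def collect_link_targets(target, link_graph, checked=None):
    """Depth-first walk over a link-dependency graph that records every
    visited target, so cyclic dependencies cannot cause infinite
    recursion (mirrors checked_targets in the CMake function above)."""
    if checked is None:
        checked = set()
    if target in checked:
        return []  # already visited: a cycle or a shared dependency
    checked.add(target)
    collected = [target]
    for dep in link_graph.get(target, ()):
        collected += collect_link_targets(dep, link_graph, checked)
    return collected

# 'core' and 'util' link against each other -- a cycle
graph = {"app": ["core", "util"], "core": ["util"], "util": ["core"]}
print(collect_link_targets("app", graph))  # ['app', 'core', 'util']
```

Without the `checked` set, the `core`/`util` cycle would recurse forever, which is exactly the failure mode the CMake comment warns about.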
set(VALIDATED_LIBRARIES "" CACHE INTERNAL "")
set(VALIDATED_TARGETS "" CACHE INTERNAL "")
function(_ov_add_api_validator_post_build_step)
set(UWP_API_VALIDATOR_APIS "${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/x64/UniversalDDIs.xml")
set(UWP_API_VALIDATOR_EXCLUSION "${UWP_SDK_PATH}/BinaryExclusionlist.xml")
if((NOT UWP_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
if((NOT ONECORE_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
return()
endif()
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
# see https://learn.microsoft.com/en-us/windows-hardware/drivers/develop/validating-windows-drivers#known-apivalidator-issues
# ApiValidator does not run on Arm64 because AitStatic does not work on Arm64
if(HOST_AARCH64)
return()
endif()
if(X86_64)
set(wdk_platform "x64")
elseif(X86)
set(wdk_platform "x86")
elseif(ARM)
set(wdk_platform "arm")
elseif(AARCH64)
set(wdk_platform "arm64")
else()
message(FATAL_ERROR "Unknown configuration: ${CMAKE_HOST_SYSTEM_PROCESSOR}")
endif()
find_file(ONECORE_API_VALIDATOR_APIS NAMES UniversalDDIs.xml
PATHS "${PROGRAMFILES}/Windows Kits/10/build/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/universalDDIs/${wdk_platform}"
"${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/${wdk_platform}"
DOC "Path to UniversalDDIs.xml file")
find_file(ONECORE_API_VALIDATOR_EXCLUSION NAMES BinaryExclusionlist.xml
PATHS ${WDK_PATHS}
DOC "Path to BinaryExclusionlist.xml file")
if(NOT ONECORE_API_VALIDATOR_APIS)
message(FATAL_ERROR "Internal error: apiValidator is found (${ONECORE_API_VALIDATOR}), but UniversalDDIs.xml file has not been found for ${wdk_platform} platform")
endif()
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "EXTRA" ${ARGN})
if(NOT API_VALIDATOR_TARGET)
message(FATAL_ERROR "RunApiValidator requires TARGET to validate!")
@@ -69,74 +113,81 @@ function(_ov_add_api_validator_post_build_step)
endif()
# collect targets
_ie_add_api_validator_post_build_step_recursive(TARGET ${API_VALIDATOR_TARGET})
if (API_VALIDATOR_EXTRA)
foreach(target IN LISTS API_VALIDATOR_EXTRA)
_ie_add_api_validator_post_build_step_recursive(TARGET ${target})
endforeach()
endif()
# remove targets which were tested before
foreach(target IN LISTS API_VALIDATOR_TARGETS)
list(FIND VALIDATED_LIBRARIES ${target} index)
if (NOT index EQUAL -1)
list(APPEND VALIDATED_TARGETS ${target})
endif()
if(TARGET "${target}")
get_target_property(orig_target ${target} ALIASED_TARGET)
list(FIND VALIDATED_LIBRARIES ${orig_target} index)
if (NOT index EQUAL -1)
list(APPEND VALIDATED_TARGETS ${target})
endif()
endif()
endforeach()
foreach(item IN LISTS VALIDATED_TARGETS)
list(REMOVE_ITEM API_VALIDATOR_TARGETS ${item})
endforeach()
list(REMOVE_DUPLICATES API_VALIDATOR_TARGETS)
if(NOT API_VALIDATOR_TARGETS)
return()
endif()
# apply check
macro(api_validator_get_target_name)
get_target_property(IS_IMPORTED ${target} IMPORTED)
get_target_property(is_imported ${target} IMPORTED)
get_target_property(orig_target ${target} ALIASED_TARGET)
if(IS_IMPORTED)
get_target_property(target_location ${target} LOCATION)
get_filename_component(target_name "${target_location}" NAME_WE)
if(is_imported)
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
foreach(imported_config RELEASE RELWITHDEBINFO DEBUG)
if(imported_config IN_LIST imported_configs)
get_target_property(target_location ${target} IMPORTED_LOCATION_${imported_config})
get_filename_component(target_name "${target_location}" NAME_WE)
break()
endif()
endforeach()
unset(imported_configs)
elseif(TARGET "${orig_target}")
set(target_name ${orig_target})
set(target_location $<TARGET_FILE:${orig_target}>)
else()
set(target_name ${target})
set(target_location $<TARGET_FILE:${target}>)
endif()
unset(orig_target)
unset(is_imported)
endmacro()
foreach(target IN LISTS API_VALIDATOR_TARGETS)
api_validator_get_target_name()
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.21 AND OV_GENERATOR_MULTI_CONFIG)
set(output_file "${CMAKE_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.20 AND OV_GENERATOR_MULTI_CONFIG)
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
else()
set(output_file "${CMAKE_BINARY_DIR}/api_validator/${target_name}.txt")
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/${target_name}.txt")
endif()
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
COMMAND ${CMAKE_COMMAND} --config $<CONFIG>
-D UWP_API_VALIDATOR=${UWP_API_VALIDATOR}
-D UWP_API_VALIDATOR_TARGET=$<TARGET_FILE:${target}>
-D UWP_API_VALIDATOR_APIS=${UWP_API_VALIDATOR_APIS}
-D UWP_API_VALIDATOR_EXCLUSION=${UWP_API_VALIDATOR_EXCLUSION}
-D UWP_API_VALIDATOR_OUTPUT=${output_file}
list(APPEND post_build_commands
${CMAKE_COMMAND} --config $<CONFIG>
-D ONECORE_API_VALIDATOR=${ONECORE_API_VALIDATOR}
-D ONECORE_API_VALIDATOR_TARGET=${target_location}
-D ONECORE_API_VALIDATOR_APIS=${ONECORE_API_VALIDATOR_APIS}
-D ONECORE_API_VALIDATOR_EXCLUSION=${ONECORE_API_VALIDATOR_EXCLUSION}
-D ONECORE_API_VALIDATOR_OUTPUT=${output_file}
-D CMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake"
BYPRODUCTS ${output_file}
COMMENT "[apiValidator] Check ${target_name} for OneCore compliance"
VERBATIM)
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake")
list(APPEND byproducts_files ${output_file})
unset(target_name)
unset(target_location)
endforeach()
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
COMMAND ${post_build_commands}
BYPRODUCTS ${byproducts_files}
COMMENT "[apiValidator] Check ${API_VALIDATOR_TARGET} and dependencies for OneCore compliance"
VERBATIM)
# update list of validated libraries
list(APPEND VALIDATED_LIBRARIES ${API_VALIDATOR_TARGETS})
set(VALIDATED_LIBRARIES "${VALIDATED_LIBRARIES}" CACHE INTERNAL "" FORCE)
list(APPEND VALIDATED_TARGETS ${API_VALIDATOR_TARGETS})
set(VALIDATED_TARGETS "${VALIDATED_TARGETS}" CACHE INTERNAL "" FORCE)
endfunction()
#


@@ -4,9 +4,9 @@
cmake_policy(SET CMP0012 NEW)
foreach(var UWP_API_VALIDATOR UWP_API_VALIDATOR_TARGET
UWP_API_VALIDATOR_APIS UWP_API_VALIDATOR_EXCLUSION
UWP_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
foreach(var ONECORE_API_VALIDATOR ONECORE_API_VALIDATOR_TARGET
ONECORE_API_VALIDATOR_APIS ONECORE_API_VALIDATOR_EXCLUSION
ONECORE_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
if(NOT DEFINED ${var})
message(FATAL_ERROR "Variable ${var} is not defined")
endif()
@@ -14,18 +14,18 @@ endforeach()
# create command
if(NOT EXISTS "${UWP_API_VALIDATOR_APIS}")
message(FATAL_ERROR "${UWP_API_VALIDATOR_APIS} does not exist")
if(NOT EXISTS "${ONECORE_API_VALIDATOR_APIS}")
message(FATAL_ERROR "${ONECORE_API_VALIDATOR_APIS} does not exist")
endif()
set(command "${UWP_API_VALIDATOR}"
-SupportedApiXmlFiles:${UWP_API_VALIDATOR_APIS}
-DriverPackagePath:${UWP_API_VALIDATOR_TARGET})
if(EXISTS "${UWP_API_VALIDATOR_EXCLUSION}")
set(command "${ONECORE_API_VALIDATOR}"
-SupportedApiXmlFiles:${ONECORE_API_VALIDATOR_APIS}
-DriverPackagePath:${ONECORE_API_VALIDATOR_TARGET})
if(EXISTS "${ONECORE_API_VALIDATOR_EXCLUSION}")
list(APPEND command
-BinaryExclusionListXmlFile:${UWP_API_VALIDATOR_EXCLUSION}
-BinaryExclusionListXmlFile:${ONECORE_API_VALIDATOR_EXCLUSION}
-StrictCompliance:TRUE)
set(UWP_HAS_BINARY_EXCLUSION ON)
set(ONECORE_HAS_BINARY_EXCLUSION ON)
endif()
# execute
@@ -36,13 +36,13 @@ execute_process(COMMAND ${command}
RESULT_VARIABLE exit_code
OUTPUT_STRIP_TRAILING_WHITESPACE)
file(WRITE "${UWP_API_VALIDATOR_OUTPUT}" "${output_message}\n\n\n${error_message}")
file(WRITE "${ONECORE_API_VALIDATOR_OUTPUT}" "CMAKE COMMAND: ${command}\n\n\n${output_message}\n\n\n${error_message}")
# post-process output
get_filename_component(name "${UWP_API_VALIDATOR_TARGET}" NAME)
get_filename_component(name "${ONECORE_API_VALIDATOR_TARGET}" NAME)
if(NOT UWP_HAS_BINARY_EXCLUSION)
if(NOT ONECORE_HAS_BINARY_EXCLUSION)
if(CMAKE_TOOLCHAIN_FILE MATCHES "onecoreuap.toolchain.cmake$")
# empty since we compile with static MSVC runtime
else()
@@ -66,7 +66,7 @@ endif()
# write output
if(UWP_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
if(ONECORE_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
message(FATAL_ERROR "${error_message}")
endif()


@@ -0,0 +1,52 @@
import pkg_resources
import re
import os


def check_python_requirements(requirements_path: str) -> None:
    """
    Checks if the requirements defined in `requirements_path` are installed
    in the active Python environment, while also taking constraints.txt files
    into account.
    """
    constraints = {}
    constraints_path = None
    requirements = []
    # read requirements and find constraints file
    with open(requirements_path) as f:
        raw_requirements = f.readlines()
    for line in raw_requirements:
        if line.startswith("-c"):
            constraints_path = os.path.join(os.path.dirname(requirements_path), line.split(' ')[1][:-1])
    # read constraints if they exist
    if constraints_path:
        with open(constraints_path) as f:
            raw_constraints = f.readlines()
        for line in raw_constraints:
            if line.startswith("#") or line == "\n":
                continue
            line = line.replace("\n", "")
            package, delimiter, constraint = re.split(r"(~|=|<|>|;)", line, maxsplit=1)
            if constraints.get(package) is None:
                constraints[package] = [delimiter + constraint]
            else:
                constraints[package].extend([delimiter + constraint])
        for line in raw_requirements:
            if line.startswith(("#", "-c")):
                continue
            line = line.replace("\n", "")
            if re.search(r"\W", line):
                requirements.append(line)
            else:
                constraint = constraints.get(line)
                if constraint:
                    for marker in constraint:
                        requirements.append(line + marker)
                else:
                    requirements.append(line)
    else:
        requirements = raw_requirements
    pkg_resources.require(requirements)
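The constraint parsing above splits each constraints.txt line on the first version operator it finds, keeping the operator via a capturing group. In isolation (a minimal sketch; the package name is made up):

```python
import re

line = "numpy>=1.16.6"  # hypothetical constraints.txt entry
# The capturing group keeps the matched operator in the result list,
# so one split yields package, operator, and the rest of the specifier.
package, delimiter, constraint = re.split(r"(~|=|<|>|;)", line, maxsplit=1)
print(package, delimiter + constraint)  # numpy >=1.16.6
```

The function then stores `delimiter + constraint` (here `>=1.16.6`) under the package name, so a bare `numpy` in requirements.txt can later be expanded to `numpy>=1.16.6`.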


@@ -3,23 +3,23 @@
#
if(ENABLE_CLANG_FORMAT)
set(clang_format_required_version 9)
set(CLANG_FORMAT_FILENAME clang-format-${clang_format_required_version} clang-format)
set(CLANG_FORMAT_REQUIRED_VERSION 9 CACHE STRING "Clang-format version to use")
set(CLANG_FORMAT_FILENAME clang-format-${CLANG_FORMAT_REQUIRED_VERSION} clang-format)
find_host_program(CLANG_FORMAT NAMES ${CLANG_FORMAT_FILENAME} PATHS ENV PATH)
if(CLANG_FORMAT)
execute_process(COMMAND ${CLANG_FORMAT} ${CMAKE_CURRENT_SOURCE_DIR} ARGS --version OUTPUT_VARIABLE CLANG_VERSION)
if(NOT CLANG_VERSION)
message(WARNING "Supported clang-format version is ${clang_format_required_version}!")
message(WARNING "Supported clang-format version is ${CLANG_FORMAT_REQUIRED_VERSION}!")
set(ENABLE_CLANG_FORMAT OFF)
else()
string(REGEX REPLACE "[^0-9]+([0-9]+)\\..*" "\\1" CLANG_FORMAT_MAJOR_VERSION ${CLANG_VERSION})
if(NOT CLANG_FORMAT_MAJOR_VERSION EQUAL clang_format_required_version)
if(NOT CLANG_FORMAT_MAJOR_VERSION EQUAL CLANG_FORMAT_REQUIRED_VERSION)
message(WARNING "Supported clang-format version is 9! Provided version ${CLANG_FORMAT_MAJOR_VERSION}")
set(ENABLE_CLANG_FORMAT OFF)
endif()
endif()
else()
message(WARNING "Supported clang-format-${clang_format_required_version} is not found!")
message(WARNING "Supported clang-format-${CLANG_FORMAT_REQUIRED_VERSION} is not found!")
set(ENABLE_CLANG_FORMAT OFF)
endif()
endif()
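The major-version extraction done by `string(REGEX REPLACE ...)` above can be reproduced with an equivalent Python regex (the sample version string is hypothetical, shaped like typical `clang-format --version` output):

```python
import re

clang_version = "clang-format version 9.0.1 (tags/RELEASE_901/final)"
# Same pattern as the CMake REGEX REPLACE: the whole string is matched
# and replaced by the first run of digits before a dot, i.e. the major version.
major = re.sub(r"[^0-9]+([0-9]+)\..*", r"\1", clang_version)
print(major)  # 9
```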
@@ -70,6 +70,10 @@ function(add_clang_format_target TARGET_NAME)
continue()
endif()
if(IS_DIRECTORY "${source_file}")
message(FATAL_ERROR "Directory ${source_file} cannot be passed to clang-format")
endif()
file(RELATIVE_PATH source_file_relative "${CMAKE_CURRENT_SOURCE_DIR}" "${source_file}")
set(output_file "${CMAKE_CURRENT_BINARY_DIR}/clang_format/${source_file_relative}.clang")
string(REPLACE ".." "__" output_file "${output_file}")


@@ -4,8 +4,13 @@
macro(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160#remarks
set(FUZZING_COMPILER_FLAGS "/fsanitize=fuzzer")
elseif(OV_COMPILER_IS_CLANG)
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}")
@@ -20,6 +25,10 @@ function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
if(ENABLE_FUZZING)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# no extra flags are required
elseif(OV_COMPILER_IS_CLANG)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
endif()
endif()
endfunction(add_fuzzer)


@@ -12,23 +12,17 @@ include(CheckCXXCompilerFlag)
# Defines ie_c_cxx_deprecated variable which contains C / C++ compiler flags
#
macro(ov_disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated "/Qdiag-disable:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated "-diag-disable=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -49,24 +43,18 @@ endmacro()
# Defines ie_c_cxx_deprecated_no_errors variable which contains C / C++ compiler flags
#
macro(ov_deprecated_no_errors)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated_no_errors "/Qdiag-warning:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated_no_errors "-diag-warning=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated_no_errors)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -101,23 +89,21 @@ endmacro()
# Provides SSE4.2 compilation flags depending on an OS and a compiler
#
macro(ie_sse42_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxSSE4.2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
set(${flags} -xSSE4.2)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xSSE4.2)
else()
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
endif()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
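The per-compiler dispatch in `ie_sse42_optimization_flags` boils down to a small lookup. A sketch of the same decision table (flag spellings mirror the macro above; the function itself is illustrative, not part of the build system):

```python
def sse42_flags(compiler_id, win32=False, emscripten=False):
    """Return the SSE4.2 flags the macro above would select."""
    if compiler_id == "MSVC":
        return []  # no dedicated SSE4.2 switch in MSVC 2019
    if compiler_id == "Intel":
        return ["/QxSSE4.2"] if win32 else ["-xSSE4.2"]
    if compiler_id in ("GNU", "Clang", "AppleClang"):
        flags = ["-msse4.2"]
        if emscripten:
            flags.append("-msimd128")  # also enable WebAssembly SIMD
        return flags
    raise ValueError(f"Unsupported CXX compiler {compiler_id}")

print(sse42_flags("Clang", emscripten=True))  # ['-msse4.2', '-msimd128']
```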
@@ -127,20 +113,18 @@ endmacro()
# Provides AVX2 compilation flags depending on an OS and a compiler
#
macro(ie_avx2_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCORE-AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCORE-AVX2)
else()
set(${flags} -mavx2 -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx2 -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -151,24 +135,18 @@ endmacro()
# depending on an OS and a compiler
#
macro(ie_avx512_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCOMMON-AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCOMMON-AVX512)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(${flags} -mavx512f -mfma)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Clang|AppleClang)$")
set(${flags} -mavx512f -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx512f -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -265,8 +243,10 @@ endfunction()
function(ov_force_include target scope header_file)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
target_compile_options(${target} ${scope} /FI"${header_file}")
else()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${target} ${scope} -include "${header_file}")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endfunction()
@@ -318,11 +298,11 @@ set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(CMAKE_CL_64)
# Default char Type Is unsigned
# ie_add_compiler_flags(/J)
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
endif()
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# Common options / warnings enabled
#
@@ -335,16 +315,14 @@ if(WIN32)
# This option helps ensure the fewest possible hard-to-find code defects. Similar to -Wall on GNU / Clang
ie_add_compiler_flags(/W3)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
# Handle Large Addresses
@@ -361,42 +339,62 @@ if(WIN32)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /WX")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /WX")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /WX")
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
endif()
#
# Disable noisy warnings
#
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 161: unrecognized pragma
# 177: variable was declared but never referenced
# 556: not matched type of assigned function pointer
# 1744: field of class type without a DLL interface used in a class with a DLL interface
# 1879: unimplemented pragma ignored
# 2586: decorated name length exceeded, name was truncated
# 2651: attribute does not apply to any entity
# 3180: unrecognized OpenMP pragma
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
# 15335: was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,1879,2586,2651,3180,11075,15335)
endif()
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
#
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
# In order to avoid issues with ninja we are replacing default flag instead of having two of them
# and observing warning D9025 about flag override
#
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
# Warnings as errors
#
if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
#
# Disable noisy warnings
#
# 161: unrecognized pragma
ie_add_compiler_flags(/Qdiag-disable:161)
# 177: variable was declared but never referenced
ie_add_compiler_flags(/Qdiag-disable:177)
# 556: not matched type of assigned function pointer
ie_add_compiler_flags(/Qdiag-disable:556)
# 1744: field of class type without a DLL interface used in a class with a DLL interface
ie_add_compiler_flags(/Qdiag-disable:1744)
# 1879: unimplemented pragma ignored
ie_add_compiler_flags(/Qdiag-disable:1879)
# 2586: decorated name length exceeded, name was truncated
ie_add_compiler_flags(/Qdiag-disable:2586)
# 2651: attribute does not apply to any entity
ie_add_compiler_flags(/Qdiag-disable:2651)
# 3180: unrecognized OpenMP pragma
ie_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
else()
#
# Common enabled warnings
@@ -412,11 +410,6 @@ else()
# Warn if an undefined identifier is evaluated in an #if directive. Such identifiers are replaced with zero.
ie_add_compiler_flags(-Wundef)
check_cxx_compiler_flag("-Wsuggest-override" SUGGEST_OVERRIDE_SUPPORTED)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "-Wsuggest-override ${CMAKE_CXX_FLAGS}")
endif()
#
# Warnings as errors
#
@@ -460,14 +453,13 @@ else()
endif()
endif()
# if(OV_COMPILER_IS_CLANG)
# ie_add_compiler_flags(-Wshorten-64-to-32)
# endif()
# TODO
if(OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-Wno-delete-non-abstract-non-virtual-dtor)
check_cxx_compiler_flag("-Wsuggest-override" SUGGEST_OVERRIDE_SUPPORTED)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "-Wsuggest-override ${CMAKE_CXX_FLAGS}")
endif()
check_cxx_compiler_flag("-Wunused-but-set-variable" UNUSED_BUT_SET_VARIABLE_SUPPORTED)
#
# link_system_libraries(target <PUBLIC | PRIVATE | INTERFACE> <lib1 [lib2 lib3 ...]>)
#
@@ -499,6 +491,11 @@ endfunction()
# Tries to use gold linker in current scope (directory, function)
#
function(ov_try_use_gold_linker)
# don't use the gold linker, if the mold linker is set
if(CMAKE_EXE_LINKER_FLAGS MATCHES "mold" OR CMAKE_MODULE_LINKER_FLAGS MATCHES "mold" OR CMAKE_SHARED_LINKER_FLAGS MATCHES "mold")
return()
endif()
# gold linker on ubuntu20.04 may fail to link binaries build with sanitizer
if(CMAKE_COMPILER_IS_GNUCXX AND NOT ENABLE_SANITIZER AND NOT CMAKE_CROSSCOMPILING)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold" PARENT_SCOPE)
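For reference, a call site for this helper is trivial; a minimal sketch (the target name is hypothetical, not from this repository):

```cmake
# Hypothetical usage: opt the current directory scope into the gold linker.
# ov_try_use_gold_linker() appends -fuse-ld=gold to the parent scope's C/C++
# flags only when GCC is used, no sanitizer is enabled, the build is not
# cross-compiled, and the mold linker is not already configured.
ov_try_use_gold_linker()
add_library(my_plugin SHARED src/plugin.cpp)  # hypothetical target, links with gold
```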


@@ -5,7 +5,9 @@
include(CheckCXXCompilerFlag)
if (ENABLE_SANITIZER)
if (WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# the flag is available since MSVC 2019 16.9
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160
check_cxx_compiler_flag("/fsanitize=address" SANITIZE_ADDRESS_SUPPORTED)
if (SANITIZE_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /fsanitize=address")
@@ -14,26 +16,28 @@ if (ENABLE_SANITIZER)
"Please, check requirements:\n"
"https://github.com/openvinotoolkit/openvino/wiki/AddressSanitizer-and-LeakSanitizer")
endif()
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=address")
check_cxx_compiler_flag("-fsanitize-recover=address" SANITIZE_RECOVER_ADDRESS_SUPPORTED)
if (SANITIZE_RECOVER_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=address")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
if (ENABLE_UB_SANITIZER)
if (WIN32)
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows")
if(ENABLE_UB_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
endif()
# TODO: Remove -fno-sanitize=null once thirdparty/ocl/clhpp_headers UBSAN compatibility is resolved:
# https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
# Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
# Sample cases:
# call to function GetAPIVersion through pointer to incorrect function type 'void *(*)()'
# call to function get_api_version through pointer to incorrect function type 'void *(*)()'
# Mute -fsanitize=alignment Use of a misaligned pointer or creation of a misaligned reference. Also sanitizes assume_aligned-like attributes.
# Sample cases:
# VPU_FixedMaxHeapTest.DefaultConstructor test case load of misaligned address 0x62000000187f for type 'const DataType', which requires 4 byte alignment
@@ -48,43 +52,50 @@ if (ENABLE_UB_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-sanitize=function")
endif()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 fix
if(CMAKE_COMPILER_IS_GNUCXX)
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 is fixed
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -Wno-maybe-uninitialized")
endif()
check_cxx_compiler_flag("-fsanitize-recover=undefined" SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if (SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if(SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=undefined")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=undefined")
endif()
if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
if(ENABLE_THREAD_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "Thread sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
# common sanitizer options
if (DEFINED SANITIZER_COMPILER_FLAGS)
if(DEFINED SANITIZER_COMPILER_FLAGS)
# ensure symbols are present
if (NOT WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -g -fno-omit-frame-pointer")
if(NOT OV_COMPILER_IS_CLANG)
if(CMAKE_COMPILER_IS_GNUCXX)
# GPU plugin tests compilation is slow with -fvar-tracking-assignments on GCC.
# Clang has no var-tracking-assignments.
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-var-tracking-assignments")
endif()
# prevent unloading libraries at runtime, so sanitizer can resolve their symbols
if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(NOT OV_COMPILER_IS_APPLECLANG)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
if(OV_COMPILER_IS_CLANG AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
endif()
else()
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
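The pattern used throughout this file is to accumulate options in `SANITIZER_COMPILER_FLAGS` / `SANITIZER_LINKER_FLAGS` and splice them into the global CMake flag variables at the end; condensed, the mechanism looks like this (a sketch, not the verbatim file):

```cmake
# Accumulated per-sanitizer options (example: ASan on GCC/Clang).
set(SANITIZER_COMPILER_FLAGS "-fsanitize=address -g -fno-omit-frame-pointer")
set(SANITIZER_LINKER_FLAGS   "-fsanitize=address")

# Splice into the global variables so every target configured afterwards
# is both compiled and linked with the sanitizer enabled.
set(CMAKE_C_FLAGS   "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS    "${CMAKE_EXE_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
```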


@@ -2,58 +2,68 @@
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wformat -Wformat-security")
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG OR
(UNIX AND CMAKE_CXX_COMPILER_ID STREQUAL "Intel"))
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wformat -Wformat-security")
if (NOT ENABLE_SANITIZER)
if(EMSCRIPTEN)
# emcc does not support fortification, see:
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
# ASan does not support fortification https://github.com/google/sanitizers/issues/247
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
endif()
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_COMPILER_IS_GNUCXX)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
endif()
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
# Remove all symbol table and relocation information from the executable
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -s")
endif()
if(NOT MINGW)
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(OV_COMPILER_IS_CLANG)
if(EMSCRIPTEN)
# emcc does not support fortification
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /guard:cf")
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /sdl /guard:cf")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
if(ENABLE_QSPECTRE)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /Qspectre")
endif()
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
unset(OV_C_CXX_FLAGS)
unset(OV_LINKER_FLAGS)
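Condensed, a GCC release configuration under these rules ends up with roughly the following hardening set (an illustrative summary under the conditions above, not the verbatim file; the exact flags depend on compiler version and enabled options):

```cmake
# Illustrative summary of the GCC release hardening assembled above:
#   compile: -Wformat -Wformat-security -D_FORTIFY_SOURCE=2
#            -fstack-protector-strong (>= GCC 4.9, else -fstack-protector-all)
#            -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv
#            -s (strip symbols, only without sanitizers)
#   link:    -pie (executables, non-Apple) -z noexecstack -z relro -z now
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -D_FORTIFY_SOURCE=2 -fstack-protector-strong")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
```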


@@ -641,7 +641,7 @@ _repository = None
# Files to exclude from linting. This is set by the --exclude flag.
_excludes = None
# Whether to supress PrintInfo messages
# Whether to suppress PrintInfo messages
_quiet = False
# The allowed line length of files.
@@ -752,7 +752,7 @@ def ParseNolintSuppressions(filename, raw_line, linenum, error):
'Unknown NOLINT error category: %s' % category)
def ProcessGlobalSuppresions(lines):
def ProcessGlobalSuppressions(lines):
"""Updates the list of global error suppressions.
Parses any lint directives in the file that have global effect.
@@ -780,7 +780,7 @@ def IsErrorSuppressedByNolint(category, linenum):
"""Returns true if the specified error category is suppressed on this line.
Consults the global error_suppressions map populated by
ParseNolintSuppressions/ProcessGlobalSuppresions/ResetNolintSuppressions.
ParseNolintSuppressions/ProcessGlobalSuppressions/ResetNolintSuppressions.
Args:
category: str, the category of the error.
@@ -6203,7 +6203,7 @@ def ProcessFileData(filename, file_extension, lines, error,
ResetNolintSuppressions()
CheckForCopyright(filename, lines, error)
ProcessGlobalSuppresions(lines)
ProcessGlobalSuppressions(lines)
RemoveMultiLineComments(filename, lines, error)
clean_lines = CleansedLines(lines)


@@ -8,7 +8,7 @@ include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF
"LINUX OR (APPLE AND AARCH64);EMSCRIPTEN OR NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
"LINUX;EMSCRIPTEN OR NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
@@ -26,6 +26,8 @@ endif()
ie_option (CMAKE_COMPILE_WARNING_AS_ERROR "Enable warnings as errors" ${CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT})
ie_dependent_option (ENABLE_QSPECTRE "Enable Qspectre mitigation" OFF "CMAKE_CXX_COMPILER_ID STREQUAL MSVC" OFF)
ie_dependent_option (ENABLE_INTEGRITYCHECK "build DLLs with /INTEGRITYCHECK flag" OFF "CMAKE_CXX_COMPILER_ID STREQUAL MSVC" OFF)
ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)
@@ -72,7 +74,12 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG;NOT WIN32" OFF)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC" AND MSVC_VERSION GREATER_EQUAL 1930)
# Visual Studio 2022: 1930-1939 = VS 17.0 (v143 toolset)
set(_msvc_version_2022 ON)
endif()
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG OR _msvc_version_2022" OFF)
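The `ie_dependent_option` helper used here behaves, as far as this file shows, like CMake's `cmake_dependent_option`: the cache option takes its default while every `;`-separated dependency condition holds, and is forced to the fallback value otherwise. A hypothetical example (option name is illustrative):

```cmake
# ie_dependent_option(<name> "<doc>" <default> "<depends>" <force>)
# ENABLE_MY_FEATURE defaults to ON, but only when building for Linux and
# not cross-compiling; in any other configuration it is forced to OFF.
ie_dependent_option(ENABLE_MY_FEATURE "Build my feature" ON
    "LINUX;NOT CMAKE_CROSSCOMPILING" OFF)
```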
#
# Check features


@@ -15,8 +15,8 @@ set(OV_FRONTEND_MAP_DEFINITION " FrontendsStaticRegistry registry = {")
foreach(frontend IN LISTS FRONTEND_NAMES)
# common
set(_OV_FRONTEND_DATA_FUNC "GetFrontEndData${frontend}")
set(_OV_VERSION_FUNC "GetAPIVersion${frontend}")
set(_OV_FRONTEND_DATA_FUNC "get_front_end_data_${frontend}")
set(_OV_VERSION_FUNC "get_api_version_${frontend}")
# declarations
set(OV_FRONTEND_DECLARATIONS "${OV_FRONTEND_DECLARATIONS}


@@ -171,7 +171,7 @@ macro(ov_add_frontend)
endforeach()
# Disable all warnings for generated code
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED TRUE)
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED ON)
# Create library
add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}
@@ -182,7 +182,7 @@ macro(ov_add_frontend)
add_library(openvino::frontend::${OV_FRONTEND_NAME} ALIAS ${TARGET_NAME})
endif()
# Shutdown protobuf when unloading the front dynamic library
# Shutdown protobuf when unloading the frontend dynamic library
if(proto_files AND BUILD_SHARED_LIBS)
target_link_libraries(${TARGET_NAME} PRIVATE ov_protobuf_shutdown)
endif()
@@ -190,21 +190,8 @@ macro(ov_add_frontend)
if(NOT BUILD_SHARED_LIBS)
# override default function names
target_compile_definitions(${TARGET_NAME} PRIVATE
"-DGetFrontEndData=GetFrontEndData${OV_FRONTEND_NAME}"
"-DGetAPIVersion=GetAPIVersion${OV_FRONTEND_NAME}")
endif()
# enable LTO
set_target_properties(${TARGET_NAME} PROPERTIES
INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})
if(OV_FRONTEND_SKIP_NCC_STYLE)
# frontend's CMakeLists.txt must define its own custom 'ov_ncc_naming_style' step
else()
ov_ncc_naming_style(FOR_TARGET ${TARGET_NAME}
SOURCE_DIRECTORY "${frontend_root_dir}/include"
ADDITIONAL_INCLUDE_DIRECTORIES
$<TARGET_PROPERTY:frontend_common::static,INTERFACE_INCLUDE_DIRECTORIES>)
"-Dget_front_end_data=get_front_end_data_${OV_FRONTEND_NAME}"
"-Dget_api_version=get_api_version_${OV_FRONTEND_NAME}")
endif()
target_include_directories(${TARGET_NAME}
@@ -214,13 +201,10 @@ macro(ov_add_frontend)
${frontend_root_dir}/src
${CMAKE_CURRENT_BINARY_DIR})
ie_add_vs_version_file(NAME ${TARGET_NAME}
ov_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES} PUBLIC openvino::runtime)
ov_add_library_version(${TARGET_NAME})
# WA for TF frontends which always require protobuf (not protobuf-lite)
@@ -231,23 +215,34 @@ macro(ov_add_frontend)
if(proto_files)
if(OV_FRONTEND_PROTOBUF_LITE)
if(NOT protobuf_lite_installed)
ov_install_static_lib(${Protobuf_LITE_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_lite_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LITE_LIBRARIES})
set(protobuf_target_name libprotobuf-lite)
set(protobuf_install_name "protobuf_lite_installed")
else()
if(NOT protobuf_installed)
ov_install_static_lib(${Protobuf_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LIBRARIES})
set(protobuf_target_name libprotobuf)
set(protobuf_install_name "protobuf_installed")
endif()
if(ENABLE_SYSTEM_PROTOBUF)
# use imported target name with namespace
set(protobuf_target_name "protobuf::${protobuf_target_name}")
endif()
# prptobuf generated code emits -Wsuggest-override error
link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
# protobuf generated code emits -Wsuggest-override error
if(SUGGEST_OVERRIDE_SUPPORTED)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-suggest-override)
endif()
# install protobuf if it is not installed yet
if(NOT ${protobuf_install_name})
if(ENABLE_SYSTEM_PROTOBUF)
# we have to add find_package(Protobuf) to the OpenVINOConfig.cmake for static build
# no need to install protobuf
else()
ov_install_static_lib(${protobuf_target_name} ${OV_CPACK_COMP_CORE})
set("${protobuf_install_name}" ON CACHE INTERNAL "" FORCE)
endif()
endif()
endif()
if(flatbuffers_schema_files)
@@ -255,10 +250,30 @@ macro(ov_add_frontend)
endif()
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files})
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${proto_files} ${flatbuffers_schema_files})
# enable LTO
set_target_properties(${TARGET_NAME} PROPERTIES
INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})
if(OV_FRONTEND_SKIP_NCC_STYLE)
# frontend's CMakeLists.txt must define its own custom 'ov_ncc_naming_style' step
else()
ov_ncc_naming_style(FOR_TARGET ${TARGET_NAME}
SOURCE_DIRECTORIES "${frontend_root_dir}/include"
"${frontend_root_dir}/src"
ADDITIONAL_INCLUDE_DIRECTORIES
$<TARGET_PROPERTY:${TARGET_NAME},INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:${TARGET_NAME},INCLUDE_DIRECTORIES>)
endif()
add_dependencies(ov_frontends ${TARGET_NAME})
# must be called after all target_link_libraries
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
# installation
if(NOT OV_FRONTEND_SKIP_INSTALL)
if(BUILD_SHARED_LIBS)
# Note:
@@ -268,7 +283,7 @@ macro(ov_add_frontend)
set(dev_component "${OV_CPACK_COMP_CORE_DEV}")
# TODO: whether we need to make it configurable in the Windows installer?
ie_cpack_add_component(${lib_component} HIDDEN)
ov_cpack_add_component(${lib_component} HIDDEN)
if(OV_FRONTEND_LINKABLE_FRONTEND)
set(export_set EXPORT OpenVINOTargets)
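Based on the argument names parsed by this macro (`NAME`, `FILEDESCRIPTION`, `LINK_LIBRARIES`, `PROTOBUF_LITE`, `LINKABLE_FRONTEND`, `SKIP_NCC_STYLE`, `SKIP_INSTALL`), a frontend's own CMakeLists.txt would invoke it roughly like this (a hypothetical sketch; the frontend name and library are illustrative):

```cmake
# Hypothetical frontend registration; builds the target, wires protobuf-lite,
# version files, naming-style checks, and installation as shown above.
ov_add_frontend(NAME my_format
                LINKABLE_FRONTEND
                PROTOBUF_LITE
                FILEDESCRIPTION "FrontEnd to load and convert MyFormat models"
                LINK_LIBRARIES openvino::util)
```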


@@ -10,12 +10,12 @@
namespace {
using GetFrontEndDataFunc = void*();
using GetAPIVersionFunc = ov::frontend::FrontEndVersion();
using get_front_end_data_func = void*();
using get_api_version_func = ov::frontend::FrontEndVersion();
struct Value {
GetFrontEndDataFunc* m_dataFunc;
GetAPIVersionFunc* m_versionFunc;
get_front_end_data_func* m_dataFunc;
get_api_version_func* m_versionFunc;
};
using FrontendsStaticRegistry = std::vector<Value>;


@@ -2,41 +2,6 @@
# SPDX-License-Identifier: Apache-2.0
#
include(target_flags)
# TODO: remove this function: we must not have conditions for particular OS names or versions
# cmake needs to look at /etc files only when we build for Linux on Linux
if(CMAKE_HOST_LINUX AND LINUX)
function(get_linux_name res_var)
if(EXISTS "/etc/lsb-release")
# linux version detection using cat /etc/lsb-release
file(READ "/etc/lsb-release" release_data)
set(name_regex "DISTRIB_ID=([^ \n]*)\n")
set(version_regex "DISTRIB_RELEASE=([0-9]+(\\.[0-9]+)?)")
else()
execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;
OUTPUT_VARIABLE release_data
RESULT_VARIABLE result)
string(REPLACE "Red Hat" "CentOS" release_data "${release_data}")
set(name_regex "NAME=\"([^ \"\n]*).*\"\n")
set(version_regex "VERSION=\"([0-9]+(\\.[0-9]+)?)[^\n]*\"")
endif()
string(REGEX MATCH ${name_regex} name ${release_data})
set(os_name ${CMAKE_MATCH_1})
string(REGEX MATCH ${version_regex} version ${release_data})
set(os_name "${os_name} ${CMAKE_MATCH_1}")
if(os_name)
set(${res_var} ${os_name} PARENT_SCOPE)
else ()
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()


@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX AND ENABLE_ERROR_HIGHLIGHT)
function(message)
string(ASCII 27 ESC)
set(RESET "${ESC}[m")
set(RED "${ESC}[31;1m")
set(YELLOW "${ESC}[33;1m")
list(GET ARGV 0 MessageType)
list(REMOVE_AT ARGV 0)
foreach(arg IN LISTS ARGV)
set(_msg "${_msg}${arg}")
endforeach()
if(MessageType STREQUAL FATAL_ERROR OR MessageType STREQUAL SEND_ERROR)
_message(${MessageType} "${RED}${_msg}${RESET}")
elseif(MessageType STREQUAL WARNING)
_message(${MessageType} "${YELLOW}${_msg}${RESET}")
else()
_message(${MessageType} "${_msg}")
endif()
endfunction()
endif()


@@ -18,7 +18,7 @@ function(ov_native_compile_external_project)
set(multiValueArgs CMAKE_ARGS NATIVE_TARGETS)
cmake_parse_arguments(ARG "" "${oneValueRequiredArgs};${oneValueOptionalArgs}" "${multiValueArgs}" ${ARGN})
if(YOCTO_AARCH64)
if(YOCTO_AARCH64 OR EMSCRIPTEN)
# need to unset several variables which can set env to cross-environment
foreach(var SDKTARGETSYSROOT CONFIG_SITE OECORE_NATIVE_SYSROOT OECORE_TARGET_SYSROOT
OECORE_ACLOCAL_OPTS OECORE_BASELIB OECORE_TARGET_ARCH OECORE_TARGET_OS CC CXX
@@ -31,10 +31,17 @@ function(ov_native_compile_external_project)
endif()
endforeach()
# set root path
if(YOCTO_AARCH64)
set(root_path "$ENV{OECORE_NATIVE_SYSROOT}")
elseif(EMSCRIPTEN)
set(root_path "$ENV{EMSDK}")
endif()
# filter out PATH from yocto locations
string(REPLACE ":" ";" custom_path "$ENV{PATH}")
foreach(path IN LISTS custom_path)
if(NOT path MATCHES "^$ENV{OECORE_NATIVE_SYSROOT}")
if(DEFINED root_path AND NOT path MATCHES "^${root_path}")
list(APPEND clean_path "${path}")
endif()
endforeach()
@@ -63,6 +70,39 @@ function(ov_native_compile_external_project)
set(ARG_NATIVE_SOURCE_SUBDIR SOURCE_SUBDIR ${ARG_NATIVE_SOURCE_SUBDIR})
endif()
if(OV_GENERATOR_MULTI_CONFIG)
if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CONFIGURATION_TYPES=${CMAKE_DEFAULT_BUILD_TYPE}")
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_DEFAULT_BUILD_TYPE=${CMAKE_DEFAULT_BUILD_TYPE}")
endif()
else()
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}")
endif()
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.21)
if(DEFINED CMAKE_CXX_LINKER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_LINKER_LAUNCHER=${CMAKE_CXX_LINKER_LAUNCHER}")
endif()
if(DEFINED CMAKE_C_LINKER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_LINKER_LAUNCHER=${CMAKE_C_LINKER_LAUNCHER}")
endif()
endif()
if(compile_flags)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_FLAGS=${compile_flags}" "-DCMAKE_C_FLAGS=${compile_flags}")
endif()
if(DEFINED CMAKE_CXX_COMPILER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}")
endif()
if(DEFINED CMAKE_C_COMPILER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}")
endif()
if(DEFINED CMAKE_MAKE_PROGRAM)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}")
endif()
ExternalProject_Add(${ARG_TARGET_NAME}
# Directory Options
SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"
@@ -71,23 +111,17 @@ function(ov_native_compile_external_project)
INSTALL_DIR "${ARG_NATIVE_INSTALL_DIR}"
# Configure Step Options:
CMAKE_COMMAND
${NATIVE_CMAKE_COMMAND}
"${NATIVE_CMAKE_COMMAND}" -E env ${cmake_env}
"${NATIVE_CMAKE_COMMAND}"
CMAKE_ARGS
"-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}"
"-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}"
"-DCMAKE_CXX_LINKER_LAUNCHER=${CMAKE_CXX_LINKER_LAUNCHER}"
"-DCMAKE_C_LINKER_LAUNCHER=${CMAKE_C_LINKER_LAUNCHER}"
"-DCMAKE_CXX_FLAGS=${compile_flags}"
"-DCMAKE_C_FLAGS=${compile_flags}"
"-DCMAKE_POLICY_DEFAULT_CMP0069=NEW"
"-DCMAKE_INSTALL_PREFIX=${ARG_NATIVE_INSTALL_DIR}"
"-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
${ARG_CMAKE_ARGS}
CMAKE_GENERATOR "${CMAKE_GENERATOR}"
${ARG_NATIVE_SOURCE_SUBDIR}
# Build Step Options:
BUILD_COMMAND
${NATIVE_CMAKE_COMMAND}
"${NATIVE_CMAKE_COMMAND}"
--build "${CMAKE_CURRENT_BINARY_DIR}/build"
--config Release
--parallel


@@ -27,6 +27,8 @@ elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
elseif(PYTHON_VERSION_MINOR EQUAL 11)
set(clang_version 14)
else()
message(WARNING "Cannot suggest clang package for python ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}")
endif()
@@ -112,13 +114,13 @@ endif()
#
# ov_ncc_naming_style(FOR_TARGET target_name
# SOURCE_DIRECTORY dir
# [SOURCE_DIRECTORIES dir1 dir2 ...]
# [STYLE_FILE style_file.style]
# [ADDITIONAL_INCLUDE_DIRECTORIES dir1 dir2 ..]
# [DEFINITIONS def1 def2 ..])
#
# FOR_TARGET - name of the target
# SOURCE_DIRECTORY - directory to check sources from
# SOURCE_DIRECTORIES - directory to check sources from
# STYLE_FILE - path to the specific style file
# ADDITIONAL_INCLUDE_DIRECTORIES - additional include directories used in checked headers
# DEFINITIONS - additional definitions passed to preprocessor stage
@@ -129,9 +131,9 @@ function(ov_ncc_naming_style)
endif()
cmake_parse_arguments(NCC_STYLE "FAIL"
"FOR_TARGET;SOURCE_DIRECTORY;STYLE_FILE" "ADDITIONAL_INCLUDE_DIRECTORIES;DEFINITIONS" ${ARGN})
"FOR_TARGET;STYLE_FILE" "SOURCE_DIRECTORIES;ADDITIONAL_INCLUDE_DIRECTORIES;DEFINITIONS" ${ARGN})
foreach(var FOR_TARGET SOURCE_DIRECTORY)
foreach(var FOR_TARGET SOURCE_DIRECTORIES)
if(NOT DEFINED NCC_STYLE_${var})
message(FATAL_ERROR "${var} is not defined in ov_ncc_naming_style function")
endif()
@@ -141,18 +143,18 @@ function(ov_ncc_naming_style)
set(NCC_STYLE_STYLE_FILE ${ncc_style_dir}/openvino.style)
endif()
file(GLOB_RECURSE sources
RELATIVE "${NCC_STYLE_SOURCE_DIRECTORY}"
"${NCC_STYLE_SOURCE_DIRECTORY}/*.hpp"
"${NCC_STYLE_SOURCE_DIRECTORY}/*.cpp")
foreach(source_dir IN LISTS NCC_STYLE_SOURCE_DIRECTORIES)
file(GLOB_RECURSE local_sources "${source_dir}/*.hpp" "${source_dir}/*.cpp")
list(APPEND sources ${local_sources})
endforeach()
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it sources with same name from different directories will map to same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES ${NCC_STYLE_SOURCE_DIRECTORIES})
foreach(source IN LISTS sources)
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
foreach(source_file IN LISTS sources)
get_filename_component(source_dir "${source_file}" DIRECTORY)
file(RELATIVE_PATH source_dir_rel "${CMAKE_SOURCE_DIR}" "${source_dir}")
get_filename_component(source_name "${source_file}" NAME)
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source_name}.ncc_style")
add_custom_command(
OUTPUT
@@ -161,7 +163,7 @@ function(ov_ncc_naming_style)
"${CMAKE_COMMAND}"
-D "PYTHON_EXECUTABLE=${PYTHON_EXECUTABLE}"
-D "NCC_PY_SCRIPT=${ncc_script_py}"
-D "INPUT_FILE=${full_source_path}"
-D "INPUT_FILE=${source_file}"
-D "OUTPUT_FILE=${output_file}"
-D "DEFINITIONS=${NCC_STYLE_DEFINITIONS}"
-D "CLANG_LIB_PATH=${libclang_location}"
@@ -170,12 +172,12 @@ function(ov_ncc_naming_style)
-D "EXPECTED_FAIL=${NCC_STYLE_FAIL}"
-P "${ncc_style_dir}/ncc_run.cmake"
DEPENDS
"${full_source_path}"
"${source_file}"
"${ncc_style_dir}/openvino.style"
"${ncc_script_py}"
"${ncc_style_dir}/ncc_run.cmake"
COMMENT
"[ncc naming style] ${source}"
"[ncc naming style] ${source_dir_rel}/${source_name}"
VERBATIM)
list(APPEND output_files ${output_file})
endforeach()
@@ -191,6 +193,6 @@ endfunction()
if(TARGET ncc_all)
ov_ncc_naming_style(FOR_TARGET ncc_all
SOURCE_DIRECTORY "${ncc_style_dir}/self_check"
SOURCE_DIRECTORIES "${ncc_style_dir}/self_check"
FAIL)
endif()
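With the change above, callers pass one or more directories via the new `SOURCE_DIRECTORIES` keyword instead of the old single `SOURCE_DIRECTORY`; a hypothetical invocation (the target name is illustrative):

```cmake
# Hypothetical check: run the ncc naming-style step over both the public
# headers and the sources of a target, using the SOURCE_DIRECTORIES
# keyword introduced above.
ov_ncc_naming_style(FOR_TARGET my_frontend
    SOURCE_DIRECTORIES "${CMAKE_CURRENT_SOURCE_DIR}/include"
                       "${CMAKE_CURRENT_SOURCE_DIR}/src"
    ADDITIONAL_INCLUDE_DIRECTORIES
        $<TARGET_PROPERTY:my_frontend,INCLUDE_DIRECTORIES>)
```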


@@ -1,7 +1,7 @@
# custom OpenVINO values
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair|stat)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'
Namespace: '^([a-z\d_]*|InferenceEngine)$'
NamespaceAlias: '^([a-z\d_]+|InferenceEngine)$'
@@ -12,7 +12,7 @@ TemplateNonTypeParameter: '^\w*$'
ClassTemplate: '^([A-Z][\w]+|element_type_traits)$'
TemplateTypeParameter: '^\w*$'
ParameterName: '^\w*$'
FunctionTemplate: '^(operator.+|[\w]+|Impl<.*>)$'
FunctionTemplate: '^(operator.+|[\w]+|SoPtr.+|Impl<.*>)$'
TypeAliasName: '^\w+$'
VariableReference: '^\w+$'
@@ -27,7 +27,7 @@ CxxDynamicCastExpression: '^.*$'
# not needed values
ClassTemplatePartialSpecialization: '^.*$'
ConversionFunction: '^.*$'
UsingDirective: 'XXXX'
UsingDirective: '^.*$'
ClassAccessSpecifier: '^.*$' # looks like can be fixed
TypeReference: '^.*$' # looks like can be fixed
CxxBaseSpecifier: '^.*$' # looks like can be fixed


@@ -25,7 +25,7 @@ macro(ov_common_libraries_cpack_set_dirs)
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR licenses)
ov_get_pyversion(pyversion)
if(pyversion)


@@ -31,6 +31,7 @@ macro(ov_debian_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
# non-native stuff


@@ -29,6 +29,7 @@ macro(ov_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR runtime/cmake)
set(OV_CPACK_OPENVINO_CMAKEDIR runtime/cmake)
set(OV_CPACK_DOCDIR docs)
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_SAMPLESDIR samples)
set(OV_CPACK_WHEELSDIR tools)
set(OV_CPACK_TOOLSDIR tools)
@@ -66,11 +67,11 @@ endmacro()
ov_cpack_set_dirs()
#
# ie_cpack_add_component(NAME ...)
# ov_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
function(ie_cpack_add_component name)
function(ov_cpack_add_component name)
if(NOT ${name} IN_LIST IE_CPACK_COMPONENTS_ALL)
cpack_add_component(${name} ${ARGN})
@@ -99,10 +100,10 @@ endif()
# if <FILE> is a symlink, we resolve it, but install file with a name of symlink
#
function(ov_install_with_name file component)
if((APPLE AND file MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(file MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
get_filename_component(actual_name "${file}" NAME)
if((APPLE AND actual_name MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(actual_name MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
if(IS_SYMLINK "${file}")
get_filename_component(actual_name "${file}" NAME)
get_filename_component(file "${file}" REALPATH)
set(install_rename RENAME "${actual_name}")
endif()
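The fix above makes the symlink test look at the file's name rather than its full path; in use, the function installs a versioned shared library under the name of the symlink pointing at it. A hypothetical call (the path and component are illustrative):

```cmake
# Hypothetical usage: "libfoo.so.2" is a symlink to "libfoo.so.2.5".
# The symlink is resolved, and the real file is installed RENAMEd to
# the symlink's name, so the package ships a single "libfoo.so.2".
ov_install_with_name("${CMAKE_BINARY_DIR}/deps/libfoo.so.2" core)
```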
@@ -162,7 +163,7 @@ elseif(CPACK_GENERATOR STREQUAL "RPM")
include(packaging/rpm/rpm)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(packaging/nsis)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(packaging/common-libraries)
endif()
@@ -194,7 +195,7 @@ macro(ie_cpack)
set(CPACK_STRIP_FILES ON)
endif()
# TODO: replace with openvino
# TODO: replace with openvino and handle multi-config generators case
if(WIN32)
set(CPACK_PACKAGE_NAME inference-engine_${CMAKE_BUILD_TYPE})
else()
@@ -202,6 +203,7 @@ macro(ie_cpack)
endif()
set(CPACK_PACKAGE_VERSION "${OpenVINO_VERSION}")
# build version can be empty in case we are running cmake out of git repository
if(NOT OpenVINO_VERSION_BUILD STREQUAL "000")
set(CPACK_PACKAGE_VERSION "${CPACK_PACKAGE_VERSION}.${OpenVINO_VERSION_BUILD}")
endif()


@@ -10,6 +10,24 @@ endif()
set(rpmlint_passed ON)
execute_process(COMMAND "${rpmlint_PROGRAM}" --version
RESULT_VARIABLE rpmlint_exit_code
OUTPUT_VARIABLE rpmlint_version)
if(NOT rpmlint_exit_code EQUAL 0)
message(FATAL_ERROR "Failed to get ${rpmlint_PROGRAM} version. Output is '${rpmlint_version}'")
endif()
if(rpmlint_version MATCHES "([0-9]+)\.([0-9]+)")
set(rpmlint_version "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}")
else()
message(FATAL_ERROR "Failed to parse rpmlint version '${rpmlint_version}'")
endif()
if(rpmlint_version VERSION_GREATER_EQUAL 2.0)
set(rpmlint_has_strict_option ON)
endif()
foreach(rpm_file IN LISTS CPACK_PACKAGE_FILES)
get_filename_component(rpm_name "${rpm_file}" NAME)
get_filename_component(dir_name "${rpm_file}" DIRECTORY)
@@ -17,20 +35,25 @@ foreach(rpm_file IN LISTS CPACK_PACKAGE_FILES)
set(rpmlint_overrides "${dir_name}/${rpm_name}.rpmlintrc")
if(EXISTS "${rpmlint_overrides}")
set(file_option --file "${rpmlint_overrides}")
set(rpmlint_options --file "${rpmlint_overrides}")
endif()
if(rpmlint_has_strict_option)
list(APPEND rpmlint_options --strict)
endif()
execute_process(COMMAND "${rpmlint_PROGRAM}" --strict ${file_option} ${rpm_file}
execute_process(COMMAND "${rpmlint_PROGRAM}" ${rpmlint_options} ${rpm_file}
RESULT_VARIABLE rpmlint_exit_code
OUTPUT_VARIABLE rpmlint_output)
if(NOT rpmlint_exit_code EQUAL 0)
if(NOT rpmlint_exit_code EQUAL 0 OR NOT rpmlint_has_strict_option)
message("Package ${rpm_name}:")
message("${rpmlint_output}")
set(rpmlint_passed OFF)
if(rpmlint_has_strict_option)
set(rpmlint_passed OFF)
endif()
endif()
unset(file_option)
unset(rpmlint_options)
endforeach()
if(NOT rpmlint_passed)


@@ -22,6 +22,11 @@ macro(ov_rpm_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
# TODO:
# 1. define python installation directories for RPM packages
# 2. make sure only a single version of python API can be installed at the same time (define conflicts section)
# set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
ov_get_pyversion(pyversion)


@@ -4,13 +4,13 @@
cmake_policy(SET CMP0007 NEW)
set(newContent " <plugin name=\"${IE_DEVICE_NAME}\" location=\"${IE_PLUGIN_LIBRARY_NAME}\">")
set(newContent " <plugin name=\"${OV_DEVICE_NAME}\" location=\"${OV_PLUGIN_LIBRARY_NAME}\">")
if(IE_PLUGIN_PROPERTIES)
if(OV_PLUGIN_PROPERTIES)
set(newContent "${newContent}
<properties>")
foreach(props IN LISTS IE_PLUGIN_PROPERTIES)
foreach(props IN LISTS OV_PLUGIN_PROPERTIES)
string(REPLACE ":" ";" props "${props}")
list(GET props 0 key)
@@ -27,4 +27,4 @@ endif()
set(newContent "${newContent}
</plugin>")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#
foreach(var IE_DEVICE_MAPPING IE_PLUGINS_HPP_HEADER IE_PLUGINS_HPP_HEADER_IN)
foreach(var OV_DEVICE_MAPPING BUILD_SHARED_LIBS OV_PLUGINS_HPP_HEADER OV_PLUGINS_HPP_HEADER_IN)
if(NOT DEFINED ${var})
message(FATAL_ERROR "${var} is required, but not defined")
endif()
@@ -10,29 +10,15 @@ endforeach()
# configure variables
set(IE_PLUGINS_DECLARATIONS "")
set(IE_PLUGINS_MAP_DEFINITION
set(OV_PLUGINS_DECLARATIONS "")
set(OV_PLUGINS_MAP_DEFINITION
" static const std::map<Key, Value> plugins_hpp = {")
foreach(dev_map IN LISTS IE_DEVICE_MAPPING)
foreach(dev_map IN LISTS OV_DEVICE_MAPPING)
string(REPLACE ":" ";" dev_map "${dev_map}")
list(GET dev_map 0 mapped_dev_name)
list(GET dev_map 1 actual_dev_name)
# common
set(_IE_CREATE_PLUGIN_FUNC "CreatePluginEngine${actual_dev_name}")
set(_IE_CREATE_EXTENSION_FUNC "CreateExtensionShared${actual_dev_name}")
# declarations
set(IE_PLUGINS_DECLARATIONS "${IE_PLUGINS_DECLARATIONS}
IE_DEFINE_PLUGIN_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_PLUGIN_FUNC});")
if(${actual_dev_name}_AS_EXTENSION)
set(IE_PLUGINS_DECLARATIONS "${IE_PLUGINS_DECLARATIONS}
IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_EXTENSION_FUNC});")
else()
set(_IE_CREATE_EXTENSION_FUNC "nullptr")
endif()
# definitions
set(dev_config "{")
if(${mapped_dev_name}_CONFIG)
@@ -48,11 +34,31 @@ IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_EXTENSION_FUNC});")
endif()
set(dev_config "${dev_config}}")
set(IE_PLUGINS_MAP_DEFINITION "${IE_PLUGINS_MAP_DEFINITION}
{ \"${mapped_dev_name}\", Value { ${_IE_CREATE_PLUGIN_FUNC}, ${_IE_CREATE_EXTENSION_FUNC}, ${dev_config} } },")
if(NOT BUILD_SHARED_LIBS)
# common
set(_OV_CREATE_PLUGIN_FUNC "CreatePluginEngine${actual_dev_name}")
set(_OV_CREATE_EXTENSION_FUNC "CreateExtensionShared${actual_dev_name}")
# declarations
set(OV_PLUGINS_DECLARATIONS "${OV_PLUGINS_DECLARATIONS}
IE_DEFINE_PLUGIN_CREATE_FUNCTION_DECLARATION(${_OV_CREATE_PLUGIN_FUNC});")
if(${actual_dev_name}_AS_EXTENSION)
set(OV_PLUGINS_DECLARATIONS "${OV_PLUGINS_DECLARATIONS}
IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_OV_CREATE_EXTENSION_FUNC});")
else()
set(_OV_CREATE_EXTENSION_FUNC "nullptr")
endif()
set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
{ \"${mapped_dev_name}\", Value { ${_OV_CREATE_PLUGIN_FUNC}, ${_OV_CREATE_EXTENSION_FUNC}, ${dev_config} } },")
else()
set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
{ \"${mapped_dev_name}\", Value { \"${actual_dev_name}\", ${dev_config} } },")
endif()
endforeach()
set(IE_PLUGINS_MAP_DEFINITION "${IE_PLUGINS_MAP_DEFINITION}
set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
};\n")
configure_file("${IE_PLUGINS_HPP_HEADER_IN}" "${IE_PLUGINS_HPP_HEADER}" @ONLY)
configure_file("${OV_PLUGINS_HPP_HEADER_IN}" "${OV_PLUGINS_HPP_HEADER}" @ONLY)


@@ -6,11 +6,15 @@ include(CMakeParseArguments)
set(PLUGIN_FILES "" CACHE INTERNAL "")
function(ie_plugin_get_file_name target_name library_name)
function(ov_plugin_get_file_name target_name library_name)
set(LIB_PREFIX "${CMAKE_SHARED_MODULE_PREFIX}")
set(LIB_SUFFIX "${IE_BUILD_POSTFIX}${CMAKE_SHARED_MODULE_SUFFIX}")
set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
get_target_property(LIB_NAME ${target_name} OUTPUT_NAME)
if (LIB_NAME STREQUAL "LIB_NAME-NOTFOUND")
set(LIB_NAME ${target_name})
endif()
set("${library_name}" "${LIB_PREFIX}${LIB_NAME}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()
if(NOT TARGET ov_plugins)
@@ -18,7 +22,7 @@ if(NOT TARGET ov_plugins)
endif()
#
# ie_add_plugin(NAME <targetName>
# ov_add_plugin(NAME <targetName>
# DEVICE_NAME <deviceName>
# [PSEUDO_DEVICE]
# [PSEUDO_PLUGIN_FOR <actual_device>]
@@ -32,29 +36,25 @@ endif()
# [ADD_CLANG_FORMAT]
# )
#
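A minimal invocation sketch of the signature documented above (the target name, device name, and source files are hypothetical):

```cmake
# Hypothetical example: a FOO device plugin built from two sources,
# with version defines and clang-format checks enabled.
ov_add_plugin(NAME openvino_foo_plugin
    DEVICE_NAME FOO
    SOURCES src/plugin.cpp src/compiled_model.cpp
    VERSION_DEFINES_FOR src/plugin.cpp
    ADD_CLANG_FORMAT)
```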
function(ie_add_plugin)
function(ov_add_plugin)
set(options SKIP_INSTALL PSEUDO_DEVICE ADD_CLANG_FORMAT AS_EXTENSION SKIP_REGISTRATION)
set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR PSEUDO_PLUGIN_FOR)
set(multiValueArgs DEFAULT_CONFIG SOURCES OBJECT_LIBRARIES CPPLINT_FILTERS)
cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
cmake_parse_arguments(OV_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_PLUGIN_NAME)
if(NOT OV_PLUGIN_NAME)
message(FATAL_ERROR "Please, specify plugin target name")
endif()
if(NOT IE_PLUGIN_DEVICE_NAME)
message(FATAL_ERROR "Please, specify device name for ${IE_PLUGIN_NAME}")
if(NOT OV_PLUGIN_DEVICE_NAME)
message(FATAL_ERROR "Please, specify device name for ${OV_PLUGIN_NAME}")
endif()
# create and configure target
if(NOT IE_PLUGIN_PSEUDO_PLUGIN_FOR)
if(IE_PLUGIN_VERSION_DEFINES_FOR)
addVersionDefines(${IE_PLUGIN_VERSION_DEFINES_FOR} CI_BUILD_NUMBER)
endif()
set(input_files ${IE_PLUGIN_SOURCES})
foreach(obj_lib IN LISTS IE_PLUGIN_OBJECT_LIBRARIES)
if(NOT OV_PLUGIN_PSEUDO_PLUGIN_FOR)
set(input_files ${OV_PLUGIN_SOURCES})
foreach(obj_lib IN LISTS OV_PLUGIN_OBJECT_LIBRARIES)
list(APPEND input_files $<TARGET_OBJECTS:${obj_lib}>)
add_cpplint_target(${obj_lib}_cpplint FOR_TARGETS ${obj_lib})
endforeach()
@@ -65,116 +65,122 @@ function(ie_add_plugin)
set(library_type STATIC)
endif()
add_library(${IE_PLUGIN_NAME} ${library_type} ${input_files})
add_library(${OV_PLUGIN_NAME} ${library_type} ${input_files})
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
if(OV_PLUGIN_VERSION_DEFINES_FOR)
ov_add_version_defines(${OV_PLUGIN_VERSION_DEFINES_FOR} ${OV_PLUGIN_NAME})
endif()
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
if(NOT BUILD_SHARED_LIBS)
# to distinguish functions creating plugin objects
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
IE_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME}
OV_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME})
if(IE_PLUGIN_AS_EXTENSION)
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
IE_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME}
OV_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME})
if(OV_PLUGIN_AS_EXTENSION)
# to distinguish functions creating extensions objects
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
IE_CREATE_EXTENSION=CreateExtensionShared${IE_PLUGIN_DEVICE_NAME})
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
IE_CREATE_EXTENSION=CreateExtensionShared${OV_PLUGIN_DEVICE_NAME})
endif()
endif()
ie_add_vs_version_file(NAME ${IE_PLUGIN_NAME}
FILEDESCRIPTION "OpenVINO Runtime ${IE_PLUGIN_DEVICE_NAME} device plugin library")
ov_add_vs_version_file(NAME ${OV_PLUGIN_NAME}
FILEDESCRIPTION "OpenVINO Runtime ${OV_PLUGIN_DEVICE_NAME} device plugin library")
target_link_libraries(${IE_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
target_link_libraries(${OV_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
if(WIN32)
set_target_properties(${IE_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${IE_PLUGIN_NAME})
set_target_properties(${OV_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${OV_PLUGIN_NAME})
endif()
if(CMAKE_COMPILER_IS_GNUCXX AND NOT CMAKE_CROSSCOMPILING)
target_link_options(${IE_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
target_link_options(${OV_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
endif()
set(custom_filter "")
foreach(filter IN LISTS IE_PLUGIN_CPPLINT_FILTERS)
foreach(filter IN LISTS OV_PLUGIN_CPPLINT_FILTERS)
string(CONCAT custom_filter "${custom_filter}" "," "${filter}")
endforeach()
if (IE_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${IE_PLUGIN_NAME}_clang FOR_TARGETS ${IE_PLUGIN_NAME})
if (OV_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_TARGETS ${OV_PLUGIN_NAME})
else()
add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
add_cpplint_target(${OV_PLUGIN_NAME}_cpplint FOR_TARGETS ${OV_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
add_dependencies(ov_plugins ${IE_PLUGIN_NAME})
add_dependencies(ov_plugins ${OV_PLUGIN_NAME})
# install rules
if(NOT IE_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
string(TOLOWER "${IE_PLUGIN_DEVICE_NAME}" install_component)
if(NOT OV_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
string(TOLOWER "${OV_PLUGIN_DEVICE_NAME}" install_component)
if(IE_PLUGIN_PSEUDO_DEVICE)
if(OV_PLUGIN_PSEUDO_DEVICE)
set(plugin_hidden HIDDEN)
endif()
ie_cpack_add_component(${install_component}
DISPLAY_NAME "${IE_PLUGIN_DEVICE_NAME} runtime"
DESCRIPTION "${IE_PLUGIN_DEVICE_NAME} runtime"
ov_cpack_add_component(${install_component}
DISPLAY_NAME "${OV_PLUGIN_DEVICE_NAME} runtime"
DESCRIPTION "${OV_PLUGIN_DEVICE_NAME} runtime"
${plugin_hidden}
DEPENDS ${OV_CPACK_COMP_CORE})
if(BUILD_SHARED_LIBS)
install(TARGETS ${IE_PLUGIN_NAME}
install(TARGETS ${OV_PLUGIN_NAME}
LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
COMPONENT ${install_component})
install(TARGETS ${IE_PLUGIN_NAME}
install(TARGETS ${OV_PLUGIN_NAME}
LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
COMPONENT ${install_component})
else()
ov_install_static_lib(${IE_PLUGIN_NAME} ${install_component})
ov_install_static_lib(${OV_PLUGIN_NAME} ${install_component})
endif()
endif()
endif()
# Enable for static build to generate correct plugins.hpp
if(NOT IE_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
if(NOT OV_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
# check that plugin with such name is not registered
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
if(plugin_name STREQUAL "${OV_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${OV_PLUGIN_NAME})
message(FATAL_ERROR "${OV_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
# append plugin to the list to register
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
list(APPEND PLUGIN_FILES "${OV_PLUGIN_DEVICE_NAME}:${OV_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_CONFIG "${OV_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${OV_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${OV_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
endif()
endfunction()
function(ov_add_plugin)
ie_add_plugin(${ARGN})
function(ie_add_plugin)
ov_add_plugin(${ARGN})
endfunction()
#
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>)
# ov_register_in_plugins_xml(MAIN_TARGET <main target name>)
#
macro(ie_register_plugins_dynamic)
# Registers plugins in plugins.xml files for dynamic plugins build
#
macro(ov_register_in_plugins_xml)
set(options)
set(oneValueArgs MAIN_TARGET)
set(multiValueArgs)
cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
cmake_parse_arguments(OV_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_REGISTER_MAIN_TARGET)
if(NOT OV_REGISTER_MAIN_TARGET)
message(FATAL_ERROR "Please, define MAIN_TARGET")
endif()
# Unregister <device_name>.xml files for plugins from current build tree
set(config_output_file "$<TARGET_FILE_DIR:${IE_REGISTER_MAIN_TARGET}>/plugins.xml")
set(config_output_file "$<TARGET_FILE_DIR:${OV_REGISTER_MAIN_TARGET}>/plugins.xml")
foreach(name IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" name "${name}")
@@ -183,12 +189,12 @@ macro(ie_register_plugins_dynamic)
message(FATAL_ERROR "Unexpected error, please, contact developer of this script")
endif()
list(GET name 0 device_name)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_PLUGIN_NAME=${device_name}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "OV_PLUGIN_NAME=${device_name}"
-D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/unregister_plugin_cmake.cmake"
COMMENT
"Remove ${device_name} from the plugins.xml file"
@@ -209,15 +215,15 @@ macro(ie_register_plugins_dynamic)
# create plugin file
set(config_file_name "${CMAKE_BINARY_DIR}/plugins/${device_name}.xml")
ie_plugin_get_file_name(${name} library_name)
ov_plugin_get_file_name(${name} library_name)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_file_name}"
-D "IE_DEVICE_NAME=${device_name}"
-D "IE_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
-D "IE_PLUGIN_LIBRARY_NAME=${library_name}"
-D "OV_CONFIG_OUTPUT_FILE=${config_file_name}"
-D "OV_DEVICE_NAME=${device_name}"
-D "OV_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
-D "OV_PLUGIN_LIBRARY_NAME=${library_name}"
-P "${IEDevScripts_DIR}/plugins/create_plugin_file.cmake"
COMMENT "Register ${device_name} device as ${library_name}"
VERBATIM)
@@ -227,40 +233,38 @@ macro(ie_register_plugins_dynamic)
# Combine all <device_name>.xml files into plugins.xml
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
COMMENT
"Registering plugins to plugins.xml config file"
VERBATIM)
endmacro()
#
# ie_register_plugins()
#
macro(ie_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
endif()
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
-D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
COMMENT
"Registering plugins to plugins.xml config file"
VERBATIM)
endmacro()
#
# ov_register_plugins()
#
macro(ov_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
if(BUILD_SHARED_LIBS AND ENABLE_PLUGINS_XML)
ov_register_in_plugins_xml(${ARGN})
endif()
endmacro()
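A usage sketch, assuming plugins were collected earlier via `ov_add_plugin` and `openvino` is the main runtime target:

```cmake
# Registers every collected plugin into the plugins.xml file next to
# the openvino library (only takes effect for shared-library builds).
ov_register_plugins(MAIN_TARGET openvino)
```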
#
# ie_target_link_plugins(<TARGET_NAME>)
# ie_register_plugins()
#
function(ie_target_link_plugins TARGET_NAME)
macro(ie_register_plugins)
ov_register_plugins(${ARGN})
endmacro()
#
# ov_target_link_plugins(<TARGET_NAME>)
#
function(ov_target_link_plugins TARGET_NAME)
if(BUILD_SHARED_LIBS)
return()
endif()
@@ -279,13 +283,13 @@ function(ie_target_link_plugins TARGET_NAME)
endfunction()
#
# ie_generate_plugins_hpp()
# ov_generate_plugins_hpp()
#
function(ie_generate_plugins_hpp)
if(BUILD_SHARED_LIBS)
return()
endif()
# Generates plugins.hpp file for:
# - static plugins build
# - cases when plugins.xml file is disabled
#
function(ov_generate_plugins_hpp)
set(device_mapping)
set(device_configs)
set(as_extension)
@@ -296,17 +300,23 @@ function(ie_generate_plugins_hpp)
message(FATAL_ERROR "Unexpected error, please, contact developer of this script")
endif()
# create device mapping: preudo device => actual device
# create device mapping: pseudo device => actual device
list(GET name 0 device_name)
if(${device_name}_PSEUDO_PLUGIN_FOR)
list(APPEND device_mapping "${device_name}:${${device_name}_PSEUDO_PLUGIN_FOR}")
if(BUILD_SHARED_LIBS)
list(GET name 1 library_name)
ov_plugin_get_file_name(${library_name} library_name)
list(APPEND device_mapping "${device_name}:${library_name}")
else()
list(APPEND device_mapping "${device_name}:${device_name}")
endif()
if(${device_name}_PSEUDO_PLUGIN_FOR)
list(APPEND device_mapping "${device_name}:${${device_name}_PSEUDO_PLUGIN_FOR}")
else()
list(APPEND device_mapping "${device_name}:${device_name}")
endif()
# register plugin as extension
if(${device_name}_AS_EXTENSION)
list(APPEND as_extension -D "${device_name}_AS_EXTENSION=ON")
# register plugin as extension
if(${device_name}_AS_EXTENSION)
list(APPEND as_extension -D "${device_name}_AS_EXTENSION=ON")
endif()
endif()
# add default plugin config options
@@ -317,21 +327,26 @@ function(ie_generate_plugins_hpp)
endif()
endforeach()
# add plugins to libraries including ie_plugins.hpp
ie_target_link_plugins(openvino)
# add plugins to libraries including ov_plugins.hpp
ov_target_link_plugins(openvino)
if(TARGET inference_engine_s)
ie_target_link_plugins(inference_engine_s)
ov_target_link_plugins(inference_engine_s)
endif()
set(ie_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ie_plugins.hpp")
if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/$<CONFIG>/ov_plugins.hpp")
else()
set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
endif()
set(plugins_hpp_in "${IEDevScripts_DIR}/plugins/plugins.hpp.in")
add_custom_command(OUTPUT "${ie_plugins_hpp}"
add_custom_command(OUTPUT "${ov_plugins_hpp}"
COMMAND
"${CMAKE_COMMAND}"
-D "IE_DEVICE_MAPPING=${device_mapping}"
-D "IE_PLUGINS_HPP_HEADER_IN=${plugins_hpp_in}"
-D "IE_PLUGINS_HPP_HEADER=${ie_plugins_hpp}"
-D "BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS}"
-D "OV_DEVICE_MAPPING=${device_mapping}"
-D "OV_PLUGINS_HPP_HEADER_IN=${plugins_hpp_in}"
-D "OV_PLUGINS_HPP_HEADER=${ov_plugins_hpp}"
${device_configs}
${as_extension}
-P "${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
@@ -339,28 +354,11 @@ function(ie_generate_plugins_hpp)
"${plugins_hpp_in}"
"${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
COMMENT
"Generate ie_plugins.hpp for static build"
"Generate ov_plugins.hpp for build"
VERBATIM)
# for some reason dependency on source files does not work
# so, we have to use explicit target and make it dependency for inference_engine
add_custom_target(_ie_plugins_hpp DEPENDS ${ie_plugins_hpp})
add_dependencies(inference_engine_obj _ie_plugins_hpp)
# add dependency for object files
get_target_property(sources inference_engine_obj SOURCES)
foreach(source IN LISTS sources)
if("${source}" MATCHES "\\$\\<TARGET_OBJECTS\\:([A-Za-z0-9_]*)\\>")
# object library
set(obj_library ${CMAKE_MATCH_1})
get_target_property(obj_sources ${obj_library} SOURCES)
list(APPEND all_sources ${obj_sources})
else()
# usual source
list(APPEND all_sources ${source})
endif()
endforeach()
# add dependency on header file generation for all inference_engine source files
set_source_files_properties(${all_sources} PROPERTIES OBJECT_DEPENDS ${ie_plugins_hpp})
# so, we have to use explicit target and make it dependency for inference_engine_obj
add_custom_target(_ov_plugins_hpp DEPENDS ${ov_plugins_hpp})
add_dependencies(inference_engine_obj _ov_plugins_hpp)
endfunction()


@@ -4,10 +4,14 @@
#pragma once
#include <map>
#include <string>
#ifdef OPENVINO_STATIC_LIBRARY
#include "cpp_interfaces/interface/ie_iplugin_internal.hpp"
namespace {
@IE_PLUGINS_DECLARATIONS@
@OV_PLUGINS_DECLARATIONS@
struct Value {
InferenceEngine::CreatePluginEngineFunc * m_create_plugin_func;
@@ -15,12 +19,20 @@ struct Value {
std::map<std::string, std::string> m_default_config;
};
#else
struct Value {
std::string m_plugin_path;
std::map<std::string, std::string> m_default_config;
};
#endif
using Key = std::string;
using PluginsStaticRegistry = std::map<Key, Value>;
const std::map<Key, Value> getStaticPluginsRegistry() {
@IE_PLUGINS_MAP_DEFINITION@
inline const std::map<Key, Value> getCompiledPluginsRegistry() {
@OV_PLUGINS_MAP_DEFINITION@
return plugins_hpp;
}
} // namespace


@@ -8,18 +8,18 @@ set(file_content
</plugins>
</ie>")
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${file_content}")
if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${file_content}")
endif()
# get list of plugin files
file(GLOB plugin_files "${IE_CONFIGS_DIR}/*.xml")
file(GLOB plugin_files "${OV_CONFIGS_DIR}/*.xml")
function(check_plugin_exists plugin_name outvar)
set(${outvar} OFF PARENT_SCOPE)
# check if config file already has this plugin
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
foreach(line IN LISTS content)
string(REGEX MATCH "location=\"([^\"]*)\"" location "${line}")
@@ -44,7 +44,7 @@ endforeach()
# add plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)
set(already_exists_in_xml OFF)
foreach(line IN LISTS content)
@@ -77,4 +77,4 @@ ${content}")
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -2,16 +2,16 @@
# SPDX-License-Identifier: Apache-2.0
#
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
return()
endif()
# remove plugin file
file(REMOVE "${IE_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
file(REMOVE "${OV_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
# remove plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)
set(skip_plugin OFF)
foreach(line IN LISTS content)
@@ -32,4 +32,4 @@ foreach(line IN LISTS content)
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -97,7 +97,11 @@ function(ov_check_pip_packages)
if(PYTHONINTERP_FOUND)
execute_process(
COMMAND ${PYTHON_EXECUTABLE} -c "import pkg_resources ; pkg_resources.require(open('${ARG_REQUIREMENTS_FILE}', mode='r'))"
COMMAND ${PYTHON_EXECUTABLE} -c "
from check_python_requirements import check_python_requirements ;
check_python_requirements('${ARG_REQUIREMENTS_FILE}') ;
"
WORKING_DIRECTORY "${IEDevScripts_DIR}"
RESULT_VARIABLE EXIT_CODE
OUTPUT_VARIABLE OUTPUT_TEXT
ERROR_VARIABLE ERROR_TEXT)


@@ -17,22 +17,46 @@ if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(arch_flag X86_64)
set(host_arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*)")
set(arch_flag AARCH64)
set(host_arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(host_arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(arch_flag ARM)
set(host_arch_flag ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(arch_flag RISCV64)
set(host_arch_flag RISCV64)
endif()
set(HOST_${arch_flag} ON)
set(HOST_${host_arch_flag} ON)
macro(_ie_process_msvc_generator_platform arch_flag)
# if cmake -A <ARM|ARM64> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "ARM64")
macro(_ov_detect_arch_by_processor_type)
if(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
endif()
endmacro()
macro(_ov_process_msvc_generator_platform)
# if cmake -A <ARM|ARM64|x64|Win32> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(AARCH64 ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
set(ARM ON)
@@ -41,45 +65,30 @@ macro(_ie_process_msvc_generator_platform arch_flag)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(X86 ON)
else()
set(${arch_flag} ON)
_ov_detect_arch_by_processor_type()
endif()
endmacro()
# TODO: why OpenCV is found by cmake
if(MSVC64 OR MINGW64)
_ie_process_msvc_generator_platform(${arch_flag})
_ov_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
_ie_process_msvc_generator_platform(${arch_flag})
elseif(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
_ov_process_msvc_generator_platform()
else()
_ov_detect_arch_by_processor_type()
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
set(EMSCRIPTEN ON)
endif()
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN))
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN OR CYGWIN))
set(LINUX ON)
endif()
if(NOT DEFINED CMAKE_HOST_LINUX AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
if(CMAKE_VERSION VERSION_LESS 3.25 AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
# the variable is available since 3.25
# https://cmake.org/cmake/help/latest/variable/CMAKE_HOST_LINUX.html
set(CMAKE_HOST_LINUX ON)
endif()


@@ -185,6 +185,46 @@ macro (addVersionDefines FILE)
unset(__version_file)
endmacro()
macro (ov_add_version_defines FILE TARGET)
set(__version_file ${FILE})
if(NOT IS_ABSOLUTE ${__version_file})
set(__version_file "${CMAKE_CURRENT_SOURCE_DIR}/${__version_file}")
endif()
if(NOT EXISTS ${__version_file})
message(FATAL_ERROR "${FILE} does not exist in the current source directory")
endif()
_remove_source_from_target(${TARGET} ${FILE})
_remove_source_from_target(${TARGET} ${__version_file})
if (BUILD_SHARED_LIBS)
add_library(${TARGET}_version OBJECT ${__version_file})
else()
add_library(${TARGET}_version STATIC ${__version_file})
endif()
if(SUGGEST_OVERRIDE_SUPPORTED)
set_source_files_properties(${__version_file}
PROPERTIES COMPILE_OPTIONS -Wno-suggest-override)
endif()
target_compile_definitions(${TARGET}_version PRIVATE
CI_BUILD_NUMBER=\"${CI_BUILD_NUMBER}\"
$<TARGET_PROPERTY:${TARGET},INTERFACE_COMPILE_DEFINITIONS>
$<TARGET_PROPERTY:${TARGET},COMPILE_DEFINITIONS>)
target_include_directories(${TARGET}_version PRIVATE
$<TARGET_PROPERTY:${TARGET},INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:${TARGET},INCLUDE_DIRECTORIES>)
target_link_libraries(${TARGET}_version PRIVATE
$<TARGET_PROPERTY:${TARGET},LINK_LIBRARIES>)
target_compile_options(${TARGET}_version PRIVATE
$<TARGET_PROPERTY:${TARGET},INTERFACE_COMPILE_OPTIONS>
$<TARGET_PROPERTY:${TARGET},COMPILE_OPTIONS>)
set_target_properties(${TARGET}_version
PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE
$<TARGET_PROPERTY:${TARGET},INTERPROCEDURAL_OPTIMIZATION_RELEASE>)
target_sources(${TARGET} PRIVATE $<TARGET_OBJECTS:${TARGET}_version>)
unset(__version_file)
endmacro()
function(ov_add_library_version library)
if(NOT DEFINED OpenVINO_SOVERSION)
message(FATAL_ERROR "Internal error: OpenVINO_SOVERSION is not defined")


@@ -2,18 +2,18 @@
# SPDX-License-Identifier: Apache-2.0
#
set(IE_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
set(IE_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(IE_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
set(OV_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(OV_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
set(OV_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(OV_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
set(OV_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
#
# ie_add_vs_version_file(NAME <name>
# ov_add_vs_version_file(NAME <name>
# FILEDESCRIPTION <file description>
# [COMPANY_NAME <company name>]
# [FILEVERSION <file version>]
@@ -25,7 +25,7 @@ set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
# [FILEVERSION_QUAD <name>]
# [PRODUCTVERSION_QUAD <name>])
#
function(ie_add_vs_version_file)
function(ov_add_vs_version_file)
if(NOT WIN32 OR NOT BUILD_SHARED_LIBS)
return()
endif()
@@ -38,14 +38,14 @@ function(ie_add_vs_version_file)
get_target_property(target_type ${VS_VER_NAME} TYPE)
if(NOT target_type MATCHES "^(SHARED|MODULE)_LIBRARY$")
message(FATAL_ERROR "ie_add_vs_version_file can work only with dynamic libraries")
message(FATAL_ERROR "ov_add_vs_version_file can work only with dynamic libraries")
endif()
macro(_vs_ver_update_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name} "${IE_${VS_VER_NAME}_VS_VER_${name}}")
if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
set(OV_VS_VER_${name} "${OV_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name} "${VS_VER_${name}}")
set(OV_VS_VER_${name} "${VS_VER_${name}}")
endif()
endmacro()
@@ -53,10 +53,10 @@ function(ie_add_vs_version_file)
_vs_ver_update_variable(PRODUCTVERSION_QUAD)
macro(_vs_ver_update_str_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name}_STR "${IE_${VS_VER_NAME}_VS_VER_${name}}")
if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
set(OV_VS_VER_${name}_STR "${OV_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name}_STR "${VS_VER_${name}}")
set(OV_VS_VER_${name}_STR "${VS_VER_${name}}")
endif()
endmacro()
@@ -69,8 +69,8 @@ function(ie_add_vs_version_file)
_vs_ver_update_str_variable(PRODUCTVERSION)
_vs_ver_update_str_variable(COMMENTS)
set(IE_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(IE_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
set(OV_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(OV_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
set(vs_version_output "${CMAKE_CURRENT_BINARY_DIR}/vs_version.rc")
configure_file("${IEDevScripts_DIR}/vs_version/vs_version.rc.in" "${vs_version_output}" @ONLY)
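The function ends by expanding the `vs_version.rc.in` template with `configure_file(... @ONLY)`: only `@VAR@` placeholders are substituted, while `${VAR}` references and Windows resource syntax pass through untouched. A sketch of that step in isolation (paths and values are illustrative):

```cmake
# Illustrative only: expand an .rc template where @OV_VS_VER_...@
# markers are replaced by the current values of the CMake variables;
# @ONLY leaves ${...} occurrences in the template untouched.
set(OV_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(OV_VS_VER_COMPANY_NAME_STR "Intel Corporation")
configure_file(
    "${CMAKE_CURRENT_SOURCE_DIR}/vs_version.rc.in"
    "${CMAKE_CURRENT_BINARY_DIR}/vs_version.rc"
    @ONLY)
```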


@@ -1,8 +1,8 @@
#include <winver.h>
VS_VERSION_INFO VERSIONINFO
FILEVERSION @IE_VS_VER_FILEVERSION_QUAD@
PRODUCTVERSION @IE_VS_VER_PRODUCTVERSION_QUAD@
FILEVERSION @OV_VS_VER_FILEVERSION_QUAD@
PRODUCTVERSION @OV_VS_VER_PRODUCTVERSION_QUAD@
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
#ifdef _DEBUG
FILEFLAGS 1
@@ -17,15 +17,15 @@ BEGIN
BEGIN
BLOCK "040904E4"
BEGIN
VALUE "CompanyName", "@IE_VS_VER_COMPANY_NAME_STR@\0"
VALUE "FileDescription", "@IE_VS_VER_FILEDESCRIPTION_STR@\0"
VALUE "FileVersion", "@IE_VS_VER_FILEVERSION_STR@\0"
VALUE "InternalName", "@IE_VS_VER_INTERNALNAME_STR@\0"
VALUE "LegalCopyright", "@IE_VS_VER_COPYRIGHT_STR@\0"
VALUE "OriginalFilename", "@IE_VS_VER_ORIGINALFILENAME_STR@\0"
VALUE "ProductName", "@IE_VS_VER_PRODUCTNAME_STR@\0"
VALUE "ProductVersion", "@IE_VS_VER_PRODUCTVERSION_STR@\0"
VALUE "Comments", "@IE_VS_VER_COMMENTS_STR@\0"
VALUE "CompanyName", "@OV_VS_VER_COMPANY_NAME_STR@\0"
VALUE "FileDescription", "@OV_VS_VER_FILEDESCRIPTION_STR@\0"
VALUE "FileVersion", "@OV_VS_VER_FILEVERSION_STR@\0"
VALUE "InternalName", "@OV_VS_VER_INTERNALNAME_STR@\0"
VALUE "LegalCopyright", "@OV_VS_VER_COPYRIGHT_STR@\0"
VALUE "OriginalFilename", "@OV_VS_VER_ORIGINALFILENAME_STR@\0"
VALUE "ProductName", "@OV_VS_VER_PRODUCTNAME_STR@\0"
VALUE "ProductVersion", "@OV_VS_VER_PRODUCTVERSION_STR@\0"
VALUE "Comments", "@OV_VS_VER_COMMENTS_STR@\0"
END
END
BLOCK "VarFileInfo"


@@ -40,6 +40,7 @@ function(ieTargetLinkWholeArchive targetName)
"-Wl,-noall_load"
)
else()
# non-Apple Clang and GCC / MinGW
list(APPEND libs
"-Wl,--whole-archive"
${staticLib}
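The branch above handles whole-archive linking for non-Apple toolchains: each static library is bracketed with `--whole-archive`/`--no-whole-archive` so all of its object files are kept, e.g. for self-registering plugin code. A sketch with hypothetical target and library names:

```cmake
# Hypothetical sketch: force every object of a static library into the
# final binary; without this, unreferenced objects (e.g. static
# registrars) would be dropped by the linker.
if(APPLE)
    # Apple's ld uses -force_load / -all_load instead of --whole-archive
    target_link_options(app PRIVATE "LINKER:-force_load,$<TARGET_FILE:plugin_static>")
else()
    target_link_libraries(app PRIVATE
        -Wl,--whole-archive plugin_static -Wl,--no-whole-archive)
endif()
```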


@@ -169,9 +169,9 @@ ov_generate_dev_package_config()
# with all imported developer targets
register_extra_modules()
# for static libraries case we need to generate final ie_plugins.hpp
# for static libraries case we need to generate final ov_plugins.hpp
# with all the information about plugins
ie_generate_plugins_hpp()
ov_generate_plugins_hpp()
# used for static build
ov_generate_frontends_hpp()


@@ -6,7 +6,9 @@
# Common cmake options
#
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64" OFF)
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64 OR AARCH64 OR ARM" OFF)
ie_dependent_option (ENABLE_ARM_COMPUTE_CMAKE "Enable ARM Compute build via cmake" OFF "ENABLE_INTEL_CPU" OFF)
ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)
@@ -14,7 +16,13 @@ ie_option (ENABLE_COMPILE_TOOL "Enables compile_tool" ON)
ie_option (ENABLE_STRICT_DEPENDENCIES "Skip configuring \"convenient\" dependencies for efficient parallel builds" ON)
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ON "X86_64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if(X86_64)
set(ENABLE_INTEL_GPU_DEFAULT ON)
else()
set(ENABLE_INTEL_GPU_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
# oneDNN doesn't support old compilers and android builds for now, so we'll
@@ -26,6 +34,10 @@ endif()
ie_dependent_option (ENABLE_ONEDNN_FOR_GPU "Enable oneDNN with GPU support" ${ENABLE_ONEDNN_FOR_GPU_DEFAULT} "ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library through the INTEL_VTUNE_DIR variable." OFF)
ie_option_enum(ENABLE_PROFILING_FILTER "Enable or disable ITT counter groups.\
@@ -41,8 +53,6 @@ In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should con
Usage: -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=/path/*.csv" OFF
ALLOWED_VALUES ON OFF COLLECT)
ie_option(ENABLE_ERROR_HIGHLIGHT "Highlight errors and warnings during compile time" ON)
ie_option (ENABLE_DOCS "Build docs using Doxygen" OFF)
find_package(PkgConfig QUIET)
@@ -75,39 +85,45 @@ ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in Open
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
ie_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF)
ie_dependent_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_IR_V7_READER "Enables IR v7 reader" ${BUILD_SHARED_LIBS} "ENABLE_TESTS;ENABLE_INTEL_GNA" OFF)
ie_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON)
ie_dependent_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON "NOT MINGW64" OFF)
ie_option (ENABLE_MULTI "Enables MULTI Device Plugin" ON)
ie_option (ENABLE_AUTO "Enables AUTO Device Plugin" ON)
ie_option (ENABLE_AUTO_BATCH "Enables Auto-Batching Plugin" ON)
ie_option (ENABLE_HETERO "Enables Hetero Device Plugin" ON)
ie_option (ENABLE_TEMPLATE "Enable template plugin" ON)
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_dependent_option (ENABLE_BEH_TESTS "tests oriented to check OpenVINO Runtime API correctness" ON "ENABLE_TESTS" OFF)
ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)
ie_option (ENABLE_OPENCV "enables custom OpenCV download" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
set(OPENVINO_EXTRA_MODULES "" CACHE STRING "Extra paths for extra modules to include into OpenVINO build")
ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" ON)
if(CMAKE_HOST_LINUX AND LINUX)
# Debian packages are enabled on Ubuntu systems
# so, system TBB / pugixml / OpenCL can be tried for usage
@@ -123,38 +139,38 @@ else()
set(ENABLE_SYSTEM_TBB_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
endif()
if(BUILD_SHARED_LIBS)
set(ENABLE_SYSTEM_PUGIXML_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
else()
# in the static libraries case libpugixml.a must be compiled with -fPIC,
# but we still need the ability to compile with system PugiXML and BUILD_SHARED_LIBS
# for the Conan case, where everything is compiled statically
set(ENABLE_SYSTEM_PUGIXML_DEFAULT OFF)
endif()
# the user wants to use their own TBB version, specified either via env vars or cmake options
if(DEFINED ENV{TBBROOT} OR DEFINED ENV{TBB_DIR} OR DEFINED TBB_DIR OR DEFINED TBBROOT)
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()
# for static libraries case libpugixml.a must be compiled with -fPIC
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Use the system version of OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Use system protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Use system flatbuffers" ON
ie_dependent_option (ENABLE_SYSTEM_TBB "Enables use of system TBB" ${ENABLE_SYSTEM_TBB_DEFAULT}
"THREADING MATCHES TBB" OFF)
# TODO: turn it off by default during the work on cross-os distribution, because pugixml is not
# available out of the box on all systems (like RHEL, UBI)
ie_option (ENABLE_SYSTEM_PUGIXML "Enables use of system PugiXML" ${ENABLE_SYSTEM_PUGIXML_DEFAULT})
# the option is on by default, because we use only flatc compiler and don't use any libraries
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Enables use of system OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT}
"ENABLE_INTEL_GPU" OFF)
# the option is turned off by default, because we compile our own static version of protobuf
# with LTO and -fPIC options, while the system one does not have such flags
ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND" OFF)
# the option is turned off by default, because we don't want to have a dependency on libsnappy.so
ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
"ENABLE_SNAPPY_COMPRESSION" OFF)
ie_dependent_option(ENABLE_OV_CORE_UNIT_TESTS "Enables OpenVINO core unit tests" ON "ENABLE_TESTS" OFF)
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)
if(NOT BUILD_SHARED_LIBS AND ENABLE_OV_TF_FRONTEND)
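The `ie_option`/`ie_dependent_option` calls above behave like CMake's stock `option()` and `cmake_dependent_option()`: a dependent option takes its stated default only while its condition list holds, and is forced to the fallback value otherwise. A sketch of the same semantics using the standard module (option names reused for illustration; the OpenVINO wrappers may differ in detail):

```cmake
include(CMakeDependentOption)

option(ENABLE_TESTS "unit, behavior and functional tests" OFF)

# defaults to ON, but only while ENABLE_TESTS is ON; forced OFF (and
# hidden from the cache UI) otherwise
cmake_dependent_option(ENABLE_FUNCTIONAL_TESTS "functional tests" ON
                       "ENABLE_TESTS" OFF)

if(ENABLE_FUNCTIONAL_TESTS)
    message(STATUS "functional tests enabled")
endif()
```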


@@ -10,8 +10,8 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
# filter out some components, which are not needed to be wrapped to conda-forge | brew
if(# python is not a part of conda | brew
# filter out some components, which are not needed to be wrapped to conda-forge | brew | conan
if(# python is not a part of conda | brew | conan
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# python wheels are not needed to be wrapped by conda | brew packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND


@@ -52,6 +52,8 @@ macro(ov_cpack_settings)
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
# see ticket # 82605
NOT item STREQUAL "gna" AND
# don't install Intel OpenMP into the Debian package
NOT item STREQUAL "omp" AND
# even in the system TBB case there are installation rules for wheel packages,
# so this needs to be skipped explicitly
NOT item MATCHES "^tbb(_dev)?$" AND
@@ -91,7 +93,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with debian packages from Intel install team
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
#
@@ -154,17 +156,20 @@ macro(ov_cpack_settings)
set(auto_copyright "generic")
endif()
# intel-cpu
if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
if(ENABLE_INTEL_CPU)
# cpu
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU plugin")
set(cpu_copyright "generic")
else()
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
set(cpu_copyright "arm_cpu")
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
set(CPACK_COMPONENT_CPU_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_CPU_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
_ov_add_plugin(cpu OFF)
endif()


@@ -6,7 +6,7 @@ if(CPACK_GENERATOR STREQUAL "DEB")
include(cmake/packaging/debian.cmake)
elseif(CPACK_GENERATOR STREQUAL "RPM")
include(cmake/packaging/rpm.cmake)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(cmake/packaging/common-libraries.cmake)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(cmake/packaging/nsis.cmake)


@@ -38,6 +38,8 @@ macro(ov_cpack_settings)
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
# see ticket # 82605
NOT item STREQUAL "gna" AND
# don't install Intel OpenMP into the RPM package
NOT item STREQUAL "omp" AND
# even in the system TBB case there are installation rules for wheel packages,
# so this needs to be skipped explicitly
NOT item MATCHES "^tbb(_dev)?$" AND
@@ -77,7 +79,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with rpm packages from Intel install team
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")
@@ -154,17 +156,20 @@ macro(ov_cpack_settings)
set(auto_copyright "generic")
endif()
# intel-cpu
if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
if(ENABLE_INTEL_CPU)
# cpu
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU")
set(cpu_copyright "generic")
else()
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
set(cpu_copyright "arm_cpu")
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
set(CPACK_RPM_CPU_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages cpu)
endif()


@@ -16,7 +16,8 @@ set(ie_options "@IE_OPTIONS@")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_CXX_LINKER_LAUNCHER CMAKE_C_LINKER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET)
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET
CMAKE_CONFIGURATION_TYPES CMAKE_DEFAULT_BUILD_TYPE)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from Inference Engine Developer package")
@@ -141,6 +142,14 @@ if(ENABLE_SYSTEM_PUGIXML)
endif()
endif()
set(_IE_nlohmann_json_FOUND "@nlohmann_json_FOUND@")
if(_IE_nlohmann_json_FOUND)
find_dependency(nlohmann_json)
set_target_properties(nlohmann_json::nlohmann_json PROPERTIES IMPORTED_GLOBAL ON)
add_library(IE::nlohmann_json ALIAS nlohmann_json::nlohmann_json)
endif()
unset(_IE_nlohmann_json_FOUND)
# inherit OpenCV from main IE project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)


@@ -85,9 +85,9 @@
#
# `OpenVINO_VERSION_MAJOR`
# Major version component
#
#
# `OpenVINO_VERSION_MINOR`
# minor version component
# Minor version component
#
# `OpenVINO_VERSION_PATCH`
# Patch version component
@@ -138,7 +138,7 @@ endmacro()
macro(_ov_find_tbb)
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
set(enable_pkgconfig_tbb "@tbb_FOUND@")
# try tbb.pc
@@ -153,10 +153,10 @@ macro(_ov_find_tbb)
endif()
pkg_search_module(tbb
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
@@ -223,28 +223,185 @@ macro(_ov_find_tbb)
PATHS ${_tbb_bind_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
unset(_tbb_bind_dir)
endif()
unset(install_tbbbind)
endif()
endmacro()
macro(_ov_find_pugixml)
set(_OV_ENABLE_SYSTEM_PUGIXML "@ENABLE_SYSTEM_PUGIXML@")
if(_OV_ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface AND NOT ANDROID)
_ov_find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
_ov_find_dependency(PugiXML REQUIRED)
endif()
if(PugiXML_FOUND)
if(TARGET pugixml)
set(_ov_pugixml_target pugixml)
elseif(TARGET pugixml::pugixml)
set(_ov_pugixml_target pugixml::pugixml)
endif()
if(OpenVINODeveloperPackage_DIR)
set_property(TARGET ${_ov_pugixml_target} PROPERTY IMPORTED_GLOBAL ON)
# align with build tree
add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
endif()
unset(_ov_pugixml_target)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
if(OpenVINODeveloperPackage_DIR)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
endif()
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
endmacro()
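`_ov_find_pugixml` above walks a three-stage fallback: a CMake package (config or module), then a pkg-config module via `pkg_search_module`, then a bare `find_library` for systems that ship neither metadata file (the "debian 9 case"). Condensed into a sketch (not the exact OpenVINO logic):

```cmake
# Condensed sketch of the discovery fallback chain:
# CMake package -> pkg-config -> raw library search.
find_package(PugiXML QUIET)                       # 1) CMake config/module

if(NOT PugiXML_FOUND)
    find_package(PkgConfig QUIET)
    if(PkgConfig_FOUND)
        # 2) pugixml.pc, exposed as the imported target PkgConfig::pugixml
        pkg_search_module(pugixml QUIET IMPORTED_TARGET GLOBAL pugixml)
    endif()
endif()

if(NOT PugiXML_FOUND AND NOT pugixml_FOUND)
    find_library(PUGIXML_LIBRARY NAMES pugixml)   # 3) plain library file
    if(NOT PUGIXML_LIBRARY)
        message(FATAL_ERROR "pugixml not found by any mechanism")
    endif()
endif()
```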
macro(_ov_find_itt)
set(_ENABLE_PROFILING_ITT "@ENABLE_PROFILING_ITT@")
# whether 'ittapi' is found via find_package
set(_ENABLE_SYSTEM_ITTAPI "@ittapi_FOUND@")
if(_ENABLE_PROFILING_ITT AND _ENABLE_SYSTEM_ITTAPI)
_ov_find_dependency(ittapi)
endif()
unset(_ENABLE_PROFILING_ITT)
unset(_ENABLE_SYSTEM_ITTAPI)
endmacro()
macro(_ov_find_ade)
set(_OV_ENABLE_GAPI_PREPROCESSING "@ENABLE_GAPI_PREPROCESSING@")
# whether 'ade' is found via find_package
set(_ENABLE_SYSTEM_ADE "@ade_FOUND@")
if(_OV_ENABLE_GAPI_PREPROCESSING AND _ENABLE_SYSTEM_ADE)
_ov_find_dependency(ade 0.1.2)
endif()
unset(_OV_ENABLE_GAPI_PREPROCESSING)
unset(_ENABLE_SYSTEM_ADE)
endmacro()
macro(_ov_find_intel_cpu_dependencies)
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
if(_OV_ENABLE_CPU_ACL)
if(_ov_as_external_package)
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set(_ov_find_acl_options NO_DEFAULT_PATH)
set(_ov_find_acl_path "${CMAKE_CURRENT_LIST_DIR}")
else()
set_and_check(_ov_find_acl_path "@PACKAGE_FIND_ACL_PATH@")
endif()
_ov_find_dependency(ACL
NO_MODULE
PATHS "${_ov_find_acl_path}"
${_ov_find_acl_options})
unset(ARM_COMPUTE_LIB_DIR)
unset(_ov_find_acl_path)
unset(_ov_find_acl_options)
endif()
unset(_OV_ENABLE_CPU_ACL)
endmacro()
macro(_ov_find_intel_gpu_dependencies)
set(_OV_ENABLE_INTEL_GPU "@ENABLE_INTEL_GPU@")
set(_OV_ENABLE_SYSTEM_OPENCL "@ENABLE_SYSTEM_OPENCL@")
if(_OV_ENABLE_INTEL_GPU AND _OV_ENABLE_SYSTEM_OPENCL)
set(_OV_OpenCLICDLoader_FOUND "@OpenCLICDLoader_FOUND@")
if(_OV_OpenCLICDLoader_FOUND)
_ov_find_dependency(OpenCLICDLoader)
else()
_ov_find_dependency(OpenCL)
endif()
unset(_OV_OpenCLICDLoader_FOUND)
endif()
unset(_OV_ENABLE_INTEL_GPU)
unset(_OV_ENABLE_SYSTEM_OPENCL)
endmacro()
macro(_ov_find_intel_gna_dependencies)
set(_OV_ENABLE_INTEL_GNA "@ENABLE_INTEL_GNA@")
if(_OV_ENABLE_INTEL_GNA AND NOT libGNA_FOUND)
if(_OV_ENABLE_INTEL_GNA)
set_and_check(GNA_PATH "@PACKAGE_GNA_PATH@")
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(GNA_PATH)
endif()
unset(_OV_ENABLE_INTEL_GNA)
endmacro()
macro(_ov_find_protobuf_frontend_dependency)
set(_OV_ENABLE_SYSTEM_PROTOBUF "@ENABLE_SYSTEM_PROTOBUF@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_PROTOBUF AND NOT TARGET protobuf::libprotobuf)
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_PROTOBUF)
endmacro()
macro(_ov_find_tensorflow_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_SNAPPY "@ENABLE_SYSTEM_SNAPPY@")
set(_ov_snappy_lib "@ov_snappy_lib@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_SNAPPY AND NOT TARGET ${_ov_snappy_lib})
_ov_find_dependency(Snappy @Snappy_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_SNAPPY)
unset(_ov_snappy_lib)
set(PACKAGE_PREFIX_DIR ${_ov_package_prefix_dir})
endmacro()
macro(_ov_find_onnx_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_ONNX "@ENABLE_SYSTEM_ONNX@")
if(_OV_ENABLE_SYSTEM_ONNX)
_ov_find_dependency(ONNX @ONNX_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_ONNX)
endmacro()
function(_ov_target_no_deprecation_error)
if(NOT MSVC)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
@@ -265,13 +422,41 @@ endfunction()
# OpenVINO config
#
cmake_policy(PUSH)
# we need CMP0057 to allow IN_LIST in if() command
if(POLICY CMP0057)
cmake_policy(SET CMP0057 NEW)
else()
message(FATAL_ERROR "OpenVINO requires CMake 3.3 or newer")
endif()
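The policy gate above exists because `if(<item> IN_LIST <list>)`, used later when remapping imported configurations, is only recognized once CMP0057 is set to NEW; on CMake releases older than 3.3 the policy does not exist at all, hence the hard error. In isolation:

```cmake
# IN_LIST inside if() requires policy CMP0057 set to NEW (CMake >= 3.3)
cmake_policy(SET CMP0057 NEW)

set(imported_configs RELEASE DEBUG)
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
    message(STATUS "RelWithDebInfo not exported; mapping it to Release")
endif()
```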
# need to store the current PACKAGE_PREFIX_DIR, because it's overwritten by the sub-package one
set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(_OV_ENABLE_OPENVINO_BUILD_SHARED "@BUILD_SHARED_LIBS@")
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
endif()
if(NOT _OV_ENABLE_OPENVINO_BUILD_SHARED)
# common openvino dependencies
_ov_find_tbb()
_ov_find_itt()
_ov_find_pugixml()
# preprocessing dependencies
_ov_find_ade()
# frontend dependencies
_ov_find_protobuf_frontend_dependency()
_ov_find_tensorflow_frontend_dependencies()
_ov_find_onnx_frontend_dependencies()
# plugin dependencies
_ov_find_intel_cpu_dependencies()
_ov_find_intel_gpu_dependencies()
_ov_find_intel_gna_dependencies()
endif()
@@ -279,13 +464,26 @@ _ov_find_dependency(Threads)
unset(_OV_ENABLE_OPENVINO_BUILD_SHARED)
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
set(_ov_imported_libs openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow
openvino::frontend::pytorch openvino::frontend::tensorflow_lite)
if(_ov_as_external_package)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target})
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
set_property(TARGET ${target} PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO RELEASE)
endif()
unset(imported_configs)
endif()
endforeach()
# WA for cmake versions < 3.16, which do not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if the library has no PUBLIC dependencies
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
foreach(type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
foreach(tbb_target TBB::tbb TBB::tbbmalloc PkgConfig::tbb)
if(TARGET ${tbb_target})
@@ -326,12 +524,12 @@ endif()
# Apply common functions
#
foreach(target openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow)
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target} AND _ov_as_external_package)
_ov_target_no_deprecation_error(${target})
endif()
endforeach()
unset(_ov_imported_libs)
unset(_ov_as_external_package)
# restore PACKAGE_PREFIX_DIR
@@ -349,3 +547,7 @@ unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND)
cmake_policy(POP)


@@ -14,7 +14,8 @@ set(ov_options "@IE_OPTIONS@")
list(APPEND ov_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_CXX_LINKER_LAUNCHER CMAKE_C_LINKER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET)
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET
CMAKE_CONFIGURATION_TYPES CMAKE_DEFAULT_BUILD_TYPE)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from OpenVINO Developer package")
@@ -27,6 +28,9 @@ foreach(option IN LISTS ov_options)
endforeach()
message(" ")
# activate generation of plugins.xml
set(ENABLE_PLUGINS_XML ON)
# for samples in 3rd party projects
if(ENABLE_SAMPLES)
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
@@ -52,6 +56,7 @@ find_dependency(OpenVINO
NO_DEFAULT_PATH)
_ov_find_tbb()
_ov_find_pugixml()
foreach(component @openvino_export_components@)
# TODO: remove legacy targets from some tests
@@ -61,58 +66,6 @@ foreach(component @openvino_export_components@)
# endif()
endforeach()
if(ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface)
find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
find_dependency(PugiXML)
endif()
if(PugiXML_FOUND)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(openvino::pugixml ALIAS pugixml)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED GLOBAL)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)


@@ -42,11 +42,12 @@ function(ov_model_convert SRC DST OUT)
endif()
set(full_out_name "${DST}/${rel_out_name}")
file(MAKE_DIRECTORY "${DST}/${rel_dir}")
if(ext STREQUAL ".prototxt")
# convert .prototxt models to .onnx binary
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND ${PYTHON_EXECUTABLE} ${onnx_gen_script}
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -55,6 +56,8 @@ function(ov_model_convert SRC DST OUT)
WORKING_DIRECTORY "${model_source_dir}")
else()
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND "${CMAKE_COMMAND}" -E copy_if_different
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -68,18 +71,24 @@ function(ov_model_convert SRC DST OUT)
set(${OUT} ${files} PARENT_SCOPE)
endfunction()
if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/$<CONFIG>/test_model_zoo")
else()
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo")
endif()
ov_model_convert("${CMAKE_CURRENT_SOURCE_DIR}/src/core/tests"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/core"
"${test_model_zoo_output_dir}/core"
core_tests_out_files)
set(rel_path "src/tests/functional/plugin/shared/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/func_tests/models"
"${test_model_zoo_output_dir}/func_tests/models"
ft_out_files)
set(rel_path "src/frontends/onnx/tests/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/onnx"
"${test_model_zoo_output_dir}/onnx"
onnx_fe_out_files)
if(ENABLE_TESTS)
@@ -87,11 +96,12 @@ if(ENABLE_TESTS)
${ft_out_files}
${onnx_fe_out_files})
if (ENABLE_OV_PADDLE_FRONTEND)
add_dependencies(test_model_zoo paddle_test_models)
endif()
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#if (ENABLE_OV_PADDLE_FRONTEND)
# add_dependencies(test_model_zoo paddle_test_models)
#endif()
install(DIRECTORY "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo"
install(DIRECTORY "${test_model_zoo_output_dir}"
DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL)
set(TEST_MODEL_ZOO "./test_model_zoo" CACHE PATH "Path to test model zoo")
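``ov_model_convert`` above either converts ``.prototxt`` models to binary ONNX or copies other files verbatim into the output tree. The per-file decision can be sketched as follows (illustrative only, assuming the converted output keeps the source stem with an ``.onnx`` extension):

```python
from pathlib import PurePosixPath
from typing import Tuple

def plan_model_file(rel_path: str) -> Tuple[str, str]:
    """Decide how one source file would be handled.

    Returns (action, relative output path): .prototxt models are converted
    to binary .onnx; everything else is copied only if it differs.
    """
    p = PurePosixPath(rel_path)
    if p.suffix == ".prototxt":
        return ("convert_to_onnx", str(p.with_suffix(".onnx")))
    return ("copy_if_different", str(p))

print(plan_model_file("onnx/add.prototxt"))  # ('convert_to_onnx', 'onnx/add.onnx')
print(plan_model_file("core/add.xml"))       # ('copy_if_different', 'core/add.xml')
```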


@@ -0,0 +1,95 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Prerequisites:
#
# Build platform: Ubuntu
# apt-get install mingw-w64 mingw-w64-tools g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64
#
# Build platform: macOS
# brew install mingw-w64
#
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
set(PKG_CONFIG_EXECUTABLE x86_64-w64-mingw32-pkg-config CACHE PATH "Path to Windows x86_64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
SET(WIN32)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_program(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(WIN32)
SET(UNIX)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_package(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
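The macro pair above temporarily forces every find-root-path mode to ``NEVER`` so that ``find_program``/``find_package`` resolve against the host rather than the target sysroot, then restores the saved values. The same save-reset-restore pattern, sketched in Python as a context manager (the dictionary is a stand-in for the CMake variables):

```python
from contextlib import contextmanager

# Stand-in for the CMake find-root-path mode variables the macros save and reset
settings = {
    "CMAKE_FIND_ROOT_PATH_MODE_LIBRARY": "ONLY",
    "CMAKE_FIND_ROOT_PATH_MODE_INCLUDE": "ONLY",
    "CMAKE_FIND_ROOT_PATH_MODE_PACKAGE": "ONLY",
    "CMAKE_FIND_ROOT_PATH_MODE_PROGRAM": "NEVER",
}

@contextmanager
def host_lookup(vars_: dict):
    """Temporarily set every mode to NEVER, like
    __cmake_find_root_save_and_reset / __cmake_find_root_restore."""
    saved = dict(vars_)
    try:
        for key in vars_:
            vars_[key] = "NEVER"   # search the host, not the sysroot
        yield vars_
    finally:
        vars_.update(saved)        # restore the original modes

with host_lookup(settings):
    assert all(v == "NEVER" for v in settings.values())
assert settings["CMAKE_FIND_ROOT_PATH_MODE_LIBRARY"] == "ONLY"
```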


@@ -24,7 +24,7 @@ set(CMAKE_LINKER ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-ld)
set(CMAKE_OBJCOPY ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objcopy)
set(CMAKE_OBJDUMP ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objdump)
set(CMAKE_READELF ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-readelf)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to ARM64 pkg-config")
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to RISC-V pkg-config")
# Don't run the linker on compiler check
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)


@@ -0,0 +1,75 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR amd64)
set(CMAKE_C_COMPILER x86_64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER x86_64-linux-gnu-g++)
set(CMAKE_STRIP x86_64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to amd64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()

conan.lock Normal file

@@ -0,0 +1,36 @@
{
"version": "0.5",
"requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
"snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
"pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
"pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
"opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
"opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
"onnx/1.13.1#f11071c8aba52731a5205b028945acbb%1693130310.715",
"onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
"nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
"ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
"hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
"gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"ade/0.1.2a#b569ff943843abd004e65536e265a445%1688125447.482"
],
"build_requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
"patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
"m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
"libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
"gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
"automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
"autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
],
"python_requires": []
}
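Each entry in the lock file above encodes ``name/version#revision%timestamp``. A small sketch of splitting one requirement into its parts (hand-rolled parsing for illustration; Conan itself resolves these references):

```python
import json

lock = json.loads("""{
  "version": "0.5",
  "requires": ["pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869"]
}""")

def parse_requirement(ref: str) -> dict:
    """Split 'name/version#revision%timestamp' into its components."""
    name_version, _, rest = ref.partition("#")
    name, _, version = name_version.partition("/")
    revision, _, timestamp = rest.partition("%")
    return {"name": name, "version": version,
            "revision": revision, "timestamp": timestamp}

parsed = parse_requirement(lock["requires"][0])
print(parsed["name"], parsed["version"])  # pugixml 1.13
```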

conanfile.txt Normal file

@@ -0,0 +1,33 @@
[requires]
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/[>=2022.09.30]
# opencl-clhpp-headers/[>=2022.09.30]
opencl-headers/[>=2022.09.30]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
onnx/1.13.1
nlohmann_json/[>=3.1.1]
pybind11/[>=2.10.1]
flatbuffers/[>=22.9.24]
[tool_requires]
cmake/[>=3.15]
patchelf/[>=0.12]
protobuf/3.21.9
flatbuffers/[>=22.9.24]
[options]
protobuf/*:lite=True
onetbb/*:tbbmalloc=True
onetbb/*:tbbproxy=True
flatbuffers/*:header_only=True
[generators]
CMakeDeps
CMakeToolchain
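Entries such as ``onetbb/[>=2021.2.1]`` use Conan version ranges: any version satisfying the constraint may be resolved, while pinned references like ``protobuf/3.21.12`` accept exactly one version. A minimal sketch of checking a ``[>=X]`` constraint (real resolution is done by Conan and supports far more operators):

```python
from typing import Tuple

def version_tuple(v: str) -> Tuple[int, ...]:
    """Convert a dotted version string to a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def satisfies(version: str, constraint: str) -> bool:
    """Check a version against a pinned version or a '[>=X]' range."""
    if constraint.startswith("[>=") and constraint.endswith("]"):
        return version_tuple(version) >= version_tuple(constraint[3:-1])
    return version == constraint  # pinned reference: exact match only

print(satisfies("2021.10.0", "[>=2021.2.1]"))  # True
print(satisfies("3.21.12", "3.21.12"))         # True
```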


@@ -77,7 +77,7 @@ function(build_docs)
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()


@@ -0,0 +1,76 @@
# Datumaro {#datumaro_documentation}
@sphinxdirective
.. meta::
:description: Start working with Datumaro, which offers functionalities for basic data
import/export, validation, correction, filtration and transformations.
Datumaro provides basic data import/export (IE) support for more than 35 public vision data
formats, along with manipulation functionalities such as validation, correction, filtration, and
transformation. To enable web-scale training, it also aims to merge multiple heterogeneous
datasets through its comparator and merger components. Datumaro is integrated into Geti™,
OpenVINO™ Training Extensions, and CVAT to ease data preparation. Datumaro is open-sourced and
available on `GitHub <https://github.com/openvinotoolkit/datumaro>`__.
Refer to the official `documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__ to learn more.
You can also explore `Jupyter notebooks <https://github.com/openvinotoolkit/datumaro/tree/develop/notebooks>`__ showing Datumaro in practice.
Detailed Workflow
#################
.. image:: ./_static/images/datumaro.png
1. To start working with Datumaro, download public datasets or prepare your own annotated dataset.
.. note::
Datumaro provides a CLI `datum download` for downloading `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.
2. Import data into Datumaro and manipulate the dataset for the data quality using `Validator`, `Corrector`, and `Filter`.
3. Compare two datasets and transform the label schemas (category information) before merging them.
4. Merge two datasets to a large-scale dataset.
.. note::
There are some choices of merger, i.e., `ExactMerger`, `IntersectMerger`, and `UnionMerger`.
5. Split the unified dataset into subsets, e.g., `train`, `valid`, and `test` through `Splitter`.
.. note::
Data can be split with a given ratio of subsets, based on either the number of samples or the
number of annotations. See `SplitTask` for task-specific splitting.
6. Export the cleaned and unified dataset for follow-up workflows such as model training.
Go to :doc:`OpenVINO™ Training Extensions <ote_documentation>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
Datumaro Components
###################
* `Datumaro CLIs <https://openvinotoolkit.github.io/datumaro/stable/docs/command-reference/overview.html>`__
* `Datumaro APIs <https://openvinotoolkit.github.io/datumaro/stable/docs/reference/datumaro_module.html>`__
* `Datumaro data format <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/datumaro_format.html>`__
* `Supported data formats <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/formats/index.html>`__
Tutorials
#########
* `Basic skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/basic_skills/index.html>`__
* `Intermediate skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/intermediate_skills/index.html>`__
* `Advanced skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/advanced_skills/index.html>`__
Python Hands-on Examples
########################
* `Data IE <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/dataset_IO.html>`__
* `Data manipulation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/manipulate.html>`__
* `Data exploration <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/explore.html>`__
* `Data refinement <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/refine.html>`__
* `Data transformation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/transform.html>`__
* `Deep learning end-to-end use-cases <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/e2e_example.html>`__
@endsphinxdirective


@@ -1,37 +0,0 @@
# Running and Deploying Inference {#openvino_docs_deployment_guide_introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
Run and Deploy Locally <openvino_deployment_guide>
Deploy via Model Serving <ovms_what_is_openvino_model_server>
@endsphinxdirective
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
@sphinxdirective
.. panels::
:doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
^^^^^^^^^^^^^^
Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
It utilizes resources available to the system and provides the quickest way of launching inference.
---
:doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
^^^^^^^^^^^^^^
Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
This way inference can use external resources instead of those available to the application itself.
@endsphinxdirective
Apart from the default deployment options, you may also [deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).


@@ -1,15 +0,0 @@
# OpenVINO™ Deep Learning Workbench Overview {#workbench_docs_Workbench_DG_Introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
workbench_docs_Workbench_DG_Install
workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets
Tutorials <workbench_docs_Workbench_DG_Tutorials>
User Guide <workbench_docs_Workbench_DG_User_Guide>
workbench_docs_Workbench_DG_Troubleshooting
@endsphinxdirective


@@ -10,15 +10,15 @@
openvino_docs_OV_UG_Running_on_multiple_devices
openvino_docs_OV_UG_Hetero_execution
openvino_docs_OV_UG_Automatic_Batching
@endsphinxdirective
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the :doc:`guide on inference devices <openvino_docs_OV_UG_Working_with_devices>`.
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
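As an aside, these modes are selected with device strings such as ``AUTO`` or ``MULTI:CPU,GPU`` passed at model compilation time. A stdlib-only sketch of how such a string decomposes (illustrative; OpenVINO parses these internally):

```python
from typing import List, Tuple

def parse_device_string(device: str) -> Tuple[str, List[str]]:
    """Split an OpenVINO device string like 'MULTI:CPU,GPU' into
    (mode, device list); a bare name such as 'CPU' is single-device mode."""
    mode, sep, rest = device.partition(":")
    if not sep:
        return (mode, [mode])
    return (mode, rest.split(","))

print(parse_device_string("CPU"))            # ('CPU', ['CPU'])
print(parse_device_string("MULTI:CPU,GPU"))  # ('MULTI', ['CPU', 'GPU'])
```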
@endsphinxdirective


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Explore OpenCV Graph API and other media processing frameworks
used for development of computer vision solutions.
.. toctree::
:maxdepth: 1


@@ -1,6 +1,12 @@
# Model Preparation {#openvino_docs_model_processing_introduction}
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
.. toctree::
:maxdepth: 1
:hidden:
@@ -9,22 +15,53 @@
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
omz_tools_downloader
@endsphinxdirective
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
Import a model using ``read_model()``
#################################################
Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate model conversion step (that is, ``mo.convert_model``).
The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework :doc:`file formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
You can also convert a model from original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using ``convert_model()`` method. More details about ``convert_model()`` are provided in :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` .
``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
.. note::
``convert_model()`` also allows you to perform input/output cut, add pre-processing or add custom Python conversion extensions.
Convert a model with Python using ``mo.convert_model()``
###########################################################
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
.. image:: _static/images/model_conversion_diagram.svg
:alt: model conversion diagram
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
Convert a model using ``mo`` command-line tool
#################################################
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows converting them to its own format, OpenVINO IR, providing a tool dedicated to this task.
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices, to the same extent as the ``mo.convert_model()`` method.
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [alternating input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md) and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Fully converting a model is considered the default choice, as it enables the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
@endsphinxdirective


@@ -2,82 +2,99 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ is an ecosystem of utilities that have advanced capabilities, which help develop deep learning solutions.
.. toctree::
:maxdepth: 1
:hidden:
ovtf_integration
ote_documentation
datumaro_documentation
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
workbench_docs_Workbench_DG_Introduction
@endsphinxdirective
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
### Neural Network Compression Framework (NNCF)
**Neural Network Compression Framework (NNCF)**
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
More resources:
* [Documentation](@ref tmo_introduction)
* [GitHub](https://github.com/openvinotoolkit/nncf)
* [PyPI](https://pypi.org/project/nncf/)
### OpenVINO™ Security Add-on
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
More resources:
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
* [GitHub](https://github.com/openvinotoolkit/security_addon)
* :doc:`Documentation <tmo_introduction>`
* `GitHub <https://github.com/openvinotoolkit/nncf>`__
* `PyPI <https://pypi.org/project/nncf/>`__
### OpenVINO™ integration with TensorFlow (OVTF)
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
**OpenVINO™ Training Extensions**
More resources:
* [documentation](https://github.com/openvinotoolkit/openvino_tensorflow)
* [PyPI](https://pypi.org/project/openvino-tensorflow/)
* [GitHub](https://github.com/openvinotoolkit/openvino_tensorflow)
### DL Streamer
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench online.
More resources:
* [documentation](dl_workbench_overview.md)
* [Docker Hub](https://hub.docker.com/r/openvino/workbench)
* [PyPI](https://pypi.org/project/openvino-workbench/)
### OpenVINO™ Training Extensions (OTE)
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
More resources:
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
### Computer Vision Annotation Tool (CVAT)
An online, interactive video and image annotation tool for computer vision purposes.
* :doc:`Overview <ote_documentation>`
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__
**OpenVINO™ Security Add-on**
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
More resources:
* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
* [web application](https://cvat.org/)
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
* [GitHub](https://github.com/openvinotoolkit/cvat)
### Dataset Management Framework (Datumaro)
* :doc:`Documentation <ovsa_get_started>`
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__
**Dataset Management Framework (Datumaro)**
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/datumaro/docs/)
* [PyPI](https://pypi.org/project/datumaro/)
* [GitHub](https://github.com/openvinotoolkit/datumaro)
* :doc:`Overview <datumaro_documentation>`
* `PyPI <https://pypi.org/project/datumaro/>`__
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__
**Compile Tool**
Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
To learn which device supports the import / export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.
For more details on preprocessing steps, refer to the :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to the :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
**DL Workbench**
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
**OpenVINO™ integration with TensorFlow (OVTF)**
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.
@endsphinxdirective


@@ -1,42 +0,0 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
This is all you need:
```python
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
```
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
- Intel® CPUs
- Intel® integrated GPUs
> **NOTE**: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated [GitHub repository](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs).
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples folder](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) in our GitHub repository.
Sample tutorials are also hosted on [Intel® DevCloud](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
## License
**OpenVINO™ integration with TensorFlow** is licensed under [Apache License Version 2.0](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Support
Submit your questions, feature requests and bug reports via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
## How to Contribute
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
* Share your proposal via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before making your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. Once we verify your pull request, we will merge it into the repository, provided that it meets the requirements above and proves acceptable.
---
\* Other names and brands may be claimed as the property of others.

View File

@@ -0,0 +1,47 @@
# OpenVINO™ Training Extensions {#ote_documentation}
@sphinxdirective
.. meta::
:description: OpenVINO™ Training Extensions include advanced algorithms used
to create, train and convert deep learning models with OpenVINO
Toolkit for optimized inference.
OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
Deep Learning models and convert them using the `OpenVINO™
toolkit <https://software.intel.com/en-us/openvino-toolkit>`__ for optimized
inference. It allows you to export and convert models to the required format. OpenVINO Training Extensions independently create and train the model. It is open-sourced and available on `GitHub <https://github.com/openvinotoolkit/training_extensions>`__. Read the OpenVINO Training Extensions `documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__ to learn more.
Detailed Workflow
#################
.. image:: ./_static/images/training_extensions_framework.png
1. To start working with OpenVINO Training Extensions, prepare and annotate your dataset, for example, with CVAT.
2. OpenVINO Training Extensions trains the model using the training interface and evaluates the model quality on your dataset using the evaluation and inference interfaces.
.. note::
Prepare a separate dataset or split the dataset you have for more accurate quality evaluation.
3. Once the evaluation results are satisfactory, you can deploy your model or continue optimizing it with NNCF. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
OpenVINO Training Extensions Components
#######################################
* `OpenVINO Training Extensions API <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/api>`__
* `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/cli>`__
* `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/algorithms>`__
Tutorials
#########
* `Base tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/base/index.html>`__
* `Advanced tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/advanced/index.html>`__
@endsphinxdirective

View File

@@ -3,22 +3,46 @@
@sphinxdirective
.. meta::
:description: OpenVINO toolkit workflow usually involves preparation,
optimization, and compression of models, running inference and
deploying deep learning applications.
.. toctree::
:maxdepth: 1
:hidden:
Model Preparation <openvino_docs_model_processing_introduction>
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Running and Deploying Inference <openvino_docs_deployment_guide_introduction>
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
pytorch_2_0_torch_compile
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
| With Model Downloader and Model Optimizer guides, you will learn to download pre-trained models and convert them for use with OpenVINO™. You can use your own models or choose some from a broad selection provided in the Open Model Zoo.
| With the model conversion API guide, you will learn to convert pre-trained models for use with OpenVINO™. You can use your own models or choose some from a broad selection in online databases, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, and `Torchvision models <https://pytorch.org/hub/>`__.
| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.
| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server.
| :doc:`Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
| This section describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
| :doc:`Option 1. Deployment via OpenVINO Runtime <openvino_deployment_guide>`
| Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
| It utilizes resources available to the system and provides the quickest way of launching inference.
| Deployment on a local system requires performing the steps from the running inference section.
| :doc:`Option 2. Deployment via Model Server <ovms_what_is_openvino_model_server>`
| Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
| This way inference can use external resources instead of those available to the application itself.
| Deployment on a model server can be done quickly and without performing any additional steps described in the running inference section.
@endsphinxdirective

View File

@@ -0,0 +1,157 @@
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}
@sphinxdirective
The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
* compiled by TorchDynamo and "flattened",
* falling back to eager mode, due to unsupported Python constructs (such as control-flow code).
2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.
How to Use
#################
To use ``torch.compile``, you need to add an import statement and define one of the two available backends:
| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch based tracing/scripting.
| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch Torchscript, and then converted to OpenVINO representation.
.. tab-set::
.. tab-item:: openvino
:sync: backend-openvino
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino.svg
:width: 992px
:height: 720px
:scale: 60%
:align: center
.. tab-item:: openvino_ts
:sync: backend-openvino-ts
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino_ts')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino_ts.svg
:width: 1088px
:height: 720px
:scale: 60%
:align: center
Environment Variables
+++++++++++++++++++++++++++
* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
this variable to ``GPU.0``, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
This makes them available for the following application executions, reducing the first-inference latency.
By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.
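Taken together, the variables above can be set from Python before the backend is imported. A minimal sketch (the values shown are examples, and the ``openvino.torch`` import is left commented out, as it only takes effect in an environment with the backend installed):

```python
import os

# Select the inference device and enable model caching for the OpenVINO backend.
# The variable names come from the list above; the values are examples.
os.environ["OPENVINO_TORCH_BACKEND_DEVICE"] = "GPU.0"   # default device is CPU
os.environ["OPENVINO_TORCH_MODEL_CACHING"] = "True"     # cache optimized models
os.environ["OPENVINO_TORCH_CACHE_DIR"] = "./ov_cache"   # custom cache location

# import openvino.torch  # the backend picks the variables up when compiling
# model = torch.compile(model, backend="openvino")
```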
Windows support
++++++++++++++++++++++++++
Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following
the steps below:
1. Install the PyTorch nightly wheel file - `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__ ,
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``
3. Find the function called ``check_if_dynamo_supported()``:
.. code-block:: python
def check_if_dynamo_supported():
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
4. Comment out the first two lines of this function, so it looks like this:
.. code-block:: python

   def check_if_dynamo_supported():
       #if sys.platform == "win32":
       #    raise RuntimeError("Windows not yet supported for torch.compile")
       if sys.version_info >= (3, 11):
           raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for the Stable Diffusion
based image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging OpenVINO
``torch.compile`` capability. Detailed instructions are available in
`Stable Diffusion WebUI repository. <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__
Architecture
#################
The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:
* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
(PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
that cannot be extracted to an FX graph are executed in the native Python environment).
It maintains the eager-mode capabilities using
`Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
generated graphs are valid.
* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.
When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs, and the graph partitioning module traverses the inlined FX graph to identify
operators supported by OpenVINO.
All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If the subgraph
fails during OpenVINO conversion, the subgraph falls back to PyTorch's default inductor backend.
Additional Resources
############################
* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`_
@endsphinxdirective

View File

@@ -1002,7 +1002,6 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
ie_api::BlobBuffer \
*impl* \
*device_name* \
*num_requests* \
*exec_net* \
*c_config* \
*ie_core_impl* \

View File

@@ -1,237 +1,361 @@
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
@sphinxdirective
.. meta::
:description: Learn the details of custom kernel support for the GPU device to
enable operations not supported by OpenVINO.
To enable operations not supported by OpenVINO™ out of the box, you may need an extension for OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@sphinxtabset
* Include a section with your kernels into the automatically-loaded ``<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml`` file.
* Call the :ref:`ov::Core::set_property() <doxid-classov_1_1_core_1aa953cb0a1601dbc9a34ef6ba82b8476e>` method from your application with the ``"CONFIG_FILE"`` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@sphinxtab{C++}
@snippet docs/snippets/gpu/custom_kernels_api.cpp part0
@endsphinxtab
.. tab-set::
@sphinxtab{Python}
@snippet docs/snippets/gpu/custom_kernels_api.py part0
@endsphinxtab
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.py
:language: python
:fragment: [part0]
@endsphinxtabset
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
:language: cpp
:fragment: [part0]
All OpenVINO samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU
-c <absolute_path_to_config>/custom_layer_example.xml
```
## Configuration File Format <a name="config-file-format"></a>
All OpenVINO samples, except the trivial ``hello_classification``, and most Open Model Zoo demos
feature a dedicated command-line option ``-c`` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
The configuration file is expected to follow the `.xml` file structure
with a node of the type `CustomLayer` for every custom operation you provide.
.. code-block:: sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU
-c <absolute_path_to_config>/custom_layer_example.xml
.. _config-file-format:
Configuration File Format
#########################
The configuration file is expected to follow the ``.xml`` file structure
with a node of the type ``CustomLayer`` for every custom operation you provide.
The definitions described in the sections below use the following notations:
Notation | Description
---|---
(0/1) | Can have zero or one instance of this node or attribute
(1) | Must have only one instance of this node or attribute
(0+) | Can have any number of instances of this node or attribute
(1+) | Can have one or more instances of this node or attribute
.. list-table::
:header-rows: 1
### CustomLayer Node and Sub-Node Structure
* - Notation
- Description
* - (0/1)
- Can have zero or one instance of this node or attribute
* - (1)
- Must have only one instance of this node or attribute
* - (0+)
- Can have any number of instances of this node or attribute
* - (1+)
- Can have one or more instances of this node or attribute
The `CustomLayer` node contains the entire configuration for a single custom operation.
CustomLayer Node and Sub-Node Structure
+++++++++++++++++++++++++++++++++++++++
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the OpenVINO IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
The ``CustomLayer`` node contains the entire configuration for a single custom operation.
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
.. list-table::
:header-rows: 1
### Kernel Node and Sub-Node Structure
* - Attribute Name
- #
- Description
* - ``name``
- (1)
- The name of the operation type to be used. This name should be identical to the type used in the IR.
* - ``type``
- (1)
- Must be ``SimpleGPU`` .
* - ``version``
- (1)
- Must be ``1`` .
The `Kernel` node contains all kernel source code configuration.
**Sub-nodes**: ``Kernel`` (1), ``Buffers`` (1), ``CompilerOptions`` (0+),
``WorkSizes`` (0/1)
**Sub-nodes**: `Source` (1+), `Define` (0+)
Kernel Node and Sub-Node Structure
++++++++++++++++++++++++++++++++++
### Source Node and Sub-Node Structure
The ``Kernel`` node contains all kernel source code configuration.
The `Source` node points to a single OpenCL source file.
**Sub-nodes**: ``Source`` (1+), ``Define`` (0+)
| Attribute Name | \# |Description|
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
Source Node and Sub-Node Structure
++++++++++++++++++++++++++++++++++
The ``Source`` node points to a single OpenCL source file.
.. list-table::
:header-rows: 1
* - Attribute Name
- #
- Description
* - ``filename``
- (1)
- Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order.
**Sub-nodes**: None
### Define Node and Sub-Node Structure
Define Node and Sub-Node Structure
++++++++++++++++++++++++++++++++++
The `Define` node configures a single `#define` instruction to be added to
The ``Define`` node configures a single ``#define`` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR. |
.. list-table::
:header-rows: 1
* - Attribute Name
- #
- Description
* - ``name``
- (1)
- The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string.
* - ``param``
- (0/1)
- This parameter value is used as the value of this JIT definition.
* - ``type``
- (0/1)
- The parameter type. Accepted values: ``int`` , ``float`` , and ``int[]`` , ``float[]`` for arrays.
* - ``default``
- (0/1)
- The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR.
**Sub-nodes:** None
The resulting JIT has the following form:
`#define [name] [type] [value/default]`.
``#define [name] [type] [value/default]``.
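As an illustration of the rule above, a hypothetical helper (not part of OpenVINO) that renders a ``Define`` node into the stated JIT form:

```python
def render_define(name, type_="", param_value=None, default=None):
    """Build the '#define [name] [type] [value/default]' string for a Define node.

    The value comes from the operation's parameter if present; otherwise the
    'default' attribute is used. For static constants, 'name' may already
    contain the value, and the remaining attributes are omitted.
    """
    value = param_value if param_value is not None else default
    parts = ["#define", name]
    if type_:
        parts.append(type_)
    if value is not None:
        parts.append(str(value))
    return " ".join(parts)

# The neg_slope define from the example configuration, with no
# negative_slope parameter present in the IR, falls back to the default:
print(render_define("neg_slope", "float", None, "0.0"))
```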
### Buffers Node and Sub-Node Structure
Buffers Node and Sub-Node Structure
+++++++++++++++++++++++++++++++++++
The `Buffers` node configures all input/output buffers for the OpenCL entry
The ``Buffers`` node configures all input/output buffers for the OpenCL entry
function. No buffers node structure exists.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
**Sub-nodes:** ``Data`` (0+), ``Tensor`` (1+)
### Data Node and Sub-Node Structure
Data Node and Sub-Node Structure
++++++++++++++++++++++++++++++++
The `Data` node configures a single input with static data, for example,
The ``Data`` node configures a single input with static data, for example,
weights or biases.
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to an operation in the OpenVINO IR. |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
.. list-table::
:header-rows: 1
* - Attribute Name
- #
- Description
* - ``name``
- (1)
- Name of a blob attached to an operation in the OpenVINO IR.
* - ``arg-index``
- (1)
- 0-based index in the entry function arguments to be bound to.
**Sub-nodes**: None
### Tensor Node and Sub-Node Structure
Tensor Node and Sub-Node Structure
++++++++++++++++++++++++++++++++++
The `Tensor` node configures a single input or output tensor.
The ``Tensor`` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the OpenVINO IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB`(also in lowercase). The default value: `BFYX` |
.. list-table::
:header-rows: 1
### CompilerOptions Node and Sub-Node Structure
* - Attribute Name
- #
- Description
* - ``arg-index``
- (1)
- 0-based index in the entry function arguments to be bound to.
* - ``type``
- (1)
- ``input`` or ``output``
* - ``port-index``
- (1)
- 0-based index in the operation input/output ports in the OpenVINO IR
* - ``format``
- (0/1)
- Data layout declaration for the tensor. Accepted values: ``BFYX`` , ``BYXF`` , ``YXFB`` , ``FYXB`` , and same values in all lowercase. Default value: ``BFYX``.
The `CompilerOptions` node configures the compilation flags for the OpenCL
CompilerOptions Node and Sub-Node Structure
+++++++++++++++++++++++++++++++++++++++++++
The ``CompilerOptions`` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
|--------|-----|------|
| `options` | (1) | Options string to be passed to the OpenCL compiler |
.. list-table::
:header-rows: 1
* - Attribute Name
- #
- Description
* - ``options``
- (1)
- Options string to be passed to the OpenCL compiler
**Sub-nodes**: None
### WorkSizes Node and Sub-Node Structure
WorkSizes Node and Sub-Node Structure
+++++++++++++++++++++++++++++++++++++
The `WorkSizes` node configures the global/local work sizes to be used when
The ``WorkSizes`` node configures the global/local work sizes to be used when
queuing an OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global="B*F*Y*X" local=""` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. The default value: `output` |
.. list-table::
:header-rows: 1
* - Attribute Name
- #
- Description
* - ``global`` ``local``
- (0/1) (0/1)
- An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution. The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. Default value: ``global="B*F*Y*X" local=""``
* - ``dim``
- (0/1)
- A tensor to take the work-size from. Accepted values: ``input N`` , ``output`` , where ``N`` is an index of input tensor starting with 0. Default value: ``output``
**Sub-nodes**: None
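The work-size formulas can be sanity-checked with a small sketch (a hypothetical evaluator, not part of OpenVINO; it assumes the formulas use only the B, F, Y, X names and the operators listed above):

```python
def eval_work_sizes(formula, dims):
    """Evaluate a comma-separated work-size formula such as "X,Y,B*F".

    dims maps the B, F, Y, X dimension names to the sizes of the tensor
    selected by the 'dim' attribute. All arithmetic is integer arithmetic,
    so '/' is mapped to Python's integer division.
    """
    names = {k: int(v) for k, v in dims.items()}
    results = []
    for term in formula.split(","):
        term = term.replace("/", "//")  # integer arithmetic, per the table above
        results.append(eval(term, {"__builtins__": {}}, names))
    return results

# The default global work size "B*F*Y*X" for a 1x96x55x55 tensor:
dims = {"B": 1, "F": 96, "Y": 55, "X": 55}
print(eval_work_sizes("B*F*Y*X", dims))  # [290400]
print(eval_work_sizes("X,Y,B*F", dims))  # [55, 55, 96]
```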
## Example Configuration File
Example Configuration File
##########################
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see the
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">
<Source filename="custom_layer_kernel.cl"/>
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
</Kernel>
<Buffers>
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
</Buffers>
<CompilerOptions options="-cl-mad-enable"/>
<WorkSizes global="X,Y,B*F"/>
</CustomLayer>
```
format. For information on the configuration file structure, see the `Configuration File Format <#config-file-format>`__.
## Built-In Definitions for Custom Layers
.. code-block:: xml
:force:
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">
<Source filename="custom_layer_kernel.cl"/>
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
</Kernel>
<Buffers>
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
</Buffers>
<CompilerOptions options="-cl-mad-enable"/>
<WorkSizes global="X,Y,B*F"/>
</CustomLayer>
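A configuration file like the one above can be given a quick structural check with the Python standard library (a sanity-check sketch, not an official validator):

```python
import xml.etree.ElementTree as ET

# The example configuration from this section, as a string.
config = """
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
    <Kernel entry="example_relu_kernel">
        <Source filename="custom_layer_kernel.cl"/>
        <Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
    </Kernel>
    <Buffers>
        <Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
        <Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
    </Buffers>
    <CompilerOptions options="-cl-mad-enable"/>
    <WorkSizes global="X,Y,B*F"/>
</CustomLayer>
"""

layer = ET.fromstring(config)
assert layer.tag == "CustomLayer" and layer.get("type") == "SimpleGPU"
kernel = layer.find("Kernel")                       # exactly one Kernel node (1)
tensors = layer.find("Buffers").findall("Tensor")   # one or more Tensor nodes (1+)
print(kernel.get("entry"), [t.get("type") for t in tensors])
```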
Built-In Definitions for Custom Layers
######################################
The following table includes definitions that are attached before
user sources.
For an example, see [Example Kernel](#example-kernel).
For an example, see `Example Kernel <#example-kernel>`__.
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel. |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel. |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array. |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel. |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array. |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX`. |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`. |
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor: BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#ifdef/#endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array. |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array. |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX. |
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array. |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
.. list-table::
:header-rows: 1
All `<TENSOR>` values are automatically defined for every tensor
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
* - Name
- Value
* - ``NUM_INPUTS``
- Number of the input tensors bound to this kernel
* - ``GLOBAL_WORKSIZE``
- An array of global work sizes used to execute this kernel
* - ``GLOBAL_WORKSIZE_SIZE``
- The size of the ``GLOBAL_WORKSIZE`` array
* - ``LOCAL_WORKSIZE``
- An array of local work sizes used to execute this kernel
* - ``LOCAL_WORKSIZE_SIZE``
- The size of the ``LOCAL_WORKSIZE`` array
* - ``<TENSOR>_DIMS``
- An array of the tensor dimension sizes. Always ordered as ``BFYX``
* - ``<TENSOR>_DIMS_SIZE``
- The size of the ``<TENSOR>_DIMS`` array.
* - ``<TENSOR>_TYPE``
- The datatype of the tensor: ``float`` , ``half`` , or ``char``
* - ``<TENSOR>_FORMAT_<TENSOR_FORMAT>``
- The format of the tensor, BFYX, BYXF, YXFB , FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with ``#ifdef/#endif`` .
* - ``<TENSOR>_LOWER_PADDING``
- An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.
* - ``<TENSOR>_LOWER_PADDING_SIZE``
- The size of the ``<TENSOR>_LOWER_PADDING`` array
* - ``<TENSOR>_UPPER_PADDING``
- An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX.
* - ``<TENSOR>_UPPER_PADDING_SIZE``
- The size of the ``<TENSOR>_UPPER_PADDING`` array
* - ``<TENSOR>_PITCHES``
- The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.
* - ``<TENSOR>_PITCHES_SIZE``
- The size of the ``<TENSOR>_PITCHES`` array
* - ``<TENSOR>_OFFSET``
- The number of elements from the start of the tensor to the first valid element, bypassing the lower padding.
All ``<TENSOR>`` values are automatically defined for every tensor
bound to this operation, such as ``INPUT0``, ``INPUT1``, and ``OUTPUT0``, as shown
in the following example:
.. code-block:: c

   #define INPUT0_DIMS_SIZE 4
   #define INPUT0_DIMS (int []){ 1,96,55,55, }
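To make the pitch arithmetic concrete, the following host-side sketch computes a flat element index the same way the example kernel does. The pitch and offset values here are illustrative, chosen for an unpadded ``1x96x55x55`` (``BFYX``) tensor; in a real kernel they come from the ``INPUT0_PITCHES`` and ``INPUT0_OFFSET`` macros generated by OpenVINO:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative values for an unpadded 1x96x55x55 (BFYX) tensor.
// With no padding, each pitch is the product of the sizes of all
// faster-varying dimensions: X pitch = 1, Y pitch = 55, F pitch = 55*55, ...
static const uint32_t INPUT0_PITCHES[4] = {96 * 55 * 55, 55 * 55, 55, 1};
static const uint32_t INPUT0_OFFSET = 0; // no lower padding to skip

// Flat element index for (batch, feature, y, x), as computed in the kernel.
uint32_t element_index(uint32_t batch, uint32_t feature, uint32_t y, uint32_t x) {
    return batch * INPUT0_PITCHES[0] + feature * INPUT0_PITCHES[1] +
           y * INPUT0_PITCHES[2] + x * INPUT0_PITCHES[3] + INPUT0_OFFSET;
}
```

Because the pitches are expressed in elements, the same formula works for any data type; padding changes only the pitch and offset values, not the formula.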
.. _example-kernel:
Example Kernel
##############
.. code-block:: c

   #pragma OPENCL EXTENSION cl_khr_fp16 : enable
   __kernel void example_relu_kernel(
       const __global INPUT0_TYPE* input0,
       __global OUTPUT0_TYPE* output)
   {
       const uint idx = get_global_id(0);
       const uint idy = get_global_id(1);
       const uint idbf = get_global_id(2); // batches * features, because OpenCL supports 3D nd-ranges only
       const uint feature = idbf % OUTPUT0_DIMS[1];
       const uint batch = idbf / OUTPUT0_DIMS[1];
       // Note that the pitches are in elements, not in bytes!
       const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
       const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;

       INPUT0_TYPE value = input0[in_id];
       // neg_slope (which is non-zero for leaky ReLU) is defined automatically as a #define; refer to the configuration XML
       output[out_id] = value < 0 ? value * neg_slope : value;
   }

.. note::

   As described in the previous section, all items such as ``INPUT0_TYPE`` are actually defined as OpenCL (pre-)compiler inputs by OpenVINO for efficiency reasons. See the `Debugging Tips <#debugging-tips>`__ below for information on debugging the results.

.. _debugging-tips:
Debugging Tips
##############
**Using ``printf`` in the OpenCL™ Kernels**

To debug specific values, use ``printf`` in your kernels. However, be careful not to output excessively, which could generate too much data. The ``printf`` output is buffered, so your output can be truncated to fit the buffer. Also, because of buffering, you actually get the entire buffer of output only when the execution ends.

For more information, refer to the `printf Function <https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html>`__.
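For example, a debug variant of the example kernel above might guard its ``printf`` so that only a single work-item prints, keeping the output buffer small (a sketch; the guard condition and format string are illustrative):

```c
__kernel void example_relu_kernel(
    const __global INPUT0_TYPE* input0,
    __global OUTPUT0_TYPE* output)
{
    const uint idx = get_global_id(0);
    const uint idy = get_global_id(1);
    const uint idbf = get_global_id(2);
    // Print from one work-item only, to avoid flooding the printf buffer.
    if (idx == 0 && idy == 0 && idbf == 0) {
        printf("first input element: %f\n", (float)input0[INPUT0_OFFSET]);
    }
    // ... the rest of the kernel stays unchanged ...
}
```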
@endsphinxdirective
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extensibility API, which allows adding
support for models with custom operations and their further implementation
in applications.
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
.. toctree::
:maxdepth: 1
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
being deprecated and will be removed entirely in the future). The list of supported operations is different for each of the supported frameworks.
To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_resources_supported_operations_frontend>`.
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for custom operations may arise in two cases:

1. A new or rarely used regular framework operation is not supported in OpenVINO yet.

2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.

Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. The OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for both Model Optimizer and OpenVINO Runtime.
Defining a new custom operation basically consists of two parts:
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for :doc:`GPU <openvino_docs_Extensibility_UG_GPU>` is described in separate guides.
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
The first part is required for inference. The second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part. The following sections will describe them in detail.
Definition of Operation Semantics
#################################
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such decomposition gives the desired performance, then a low-level operation implementation is not required. Refer to the latest OpenVINO operation set when deciding on the feasibility of such a decomposition. You can use any valid combination of existing operations. The next section of this document describes the way to map a custom operation.
If such decomposition is not possible or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`.
You might prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above. Then, after verifying the correctness of inference and the resulting performance, you can move on to an optional bare-metal C++ implementation.
Mapping from Framework Operation
################################
Mapping of custom operations is implemented differently, depending on the model format used for import. You may choose one of the following:
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
The simultaneous existence of the two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite, and TensorFlow) and legacy frontends (Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
1. Implemented in C++ only.
Model Optimizer does not support new frontend extensions written in Python API.
The remaining part of this guide describes the application of the Frontend Extension API for new frontends.
Registering Extensions
######################
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
.. note::
This documentation is derived from the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates the details of extension development. It is based on a minimalistic ``Identity`` operation that serves as a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
Use the ``:ref:`ov::Core::add_extension <doxid-classov_1_1_core_1a68d0dea1cbcd42a67bea32780e32acea>``` method to load the extensions to the ``:ref:`ov::Core <doxid-classov_1_1_core>``` object. This method allows loading a library with extensions or extensions from the code.
Load Extensions to Core
+++++++++++++++++++++++
Extensions can be loaded from code with the ``:ref:`ov::Core::add_extension <doxid-classov_1_1_core_1a68d0dea1cbcd42a67bea32780e32acea>``` method:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [add_extension]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [add_extension]
The ``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation emitted by Model Optimizer. In order to load original model directly to the runtime, add a mapping extension:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [add_frontend_extension]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [add_frontend_extension]
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if the custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
.. _create_a_library_with_extensions:
Create a Library with Extensions
++++++++++++++++++++++++++++++++
An extension library should be created in the following cases:
* Conversion of a model with custom operations in Model Optimizer.
* Loading a model with custom operations in a Python application. This applies to both framework model and OpenVINO IR.
* Loading models with custom operations in tools that support loading extensions from a library, for example the ``benchmark_app``.
To create an extension library, for example, to load the extensions into Model Optimizer, perform the following:
1. Create an entry point for the extension library. OpenVINO provides the ``:ref:`OPENVINO_CREATE_EXTENSIONS() <doxid-core_2include_2openvino_2core_2extension_8hpp_1acdadcfa0eff763d8b4dadb8a9cb6f6e6>``` macro, which allows you to define an entry point to a library with OpenVINO Extensions.
This macro should have a vector of all OpenVINO Extensions as an argument.
Based on that, the declaration of an extension class might look like the following:
.. doxygensnippet:: ./src/core/template_extension/new/ov_extension.cpp
:language: cpp
:fragment: [ov_extension:entry_point]
2. Configure the build of your extension library, using the following CMake script:
.. doxygensnippet:: ./src/core/template_extension/new/CMakeLists.txt
:language: cmake
:fragment: [cmake:extension]
This CMake script finds OpenVINO, using the ``find_package`` CMake command.
3. Build the extension library, running the commands below:
.. code-block:: sh

   $ cd src/core/template_extension/new
   $ mkdir build
   $ cd build
   $ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
   $ cmake --build .
4. After the build, you may use the path to your extension library to load your extensions to OpenVINO Runtime:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [add_extension_lib]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [add_extension_lib]
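For reference, a minimal standalone CMake script of the shape used in step 2 might look like the sketch below (the target and source names are illustrative; the authoritative version is the ``cmake:extension`` fragment of the template extension shown above):

```cmake
cmake_minimum_required(VERSION 3.13)
project(custom_extension)

# Locate an installed OpenVINO package (set OpenVINO_DIR if it is not found automatically).
find_package(OpenVINO REQUIRED)

# Build the extension as a loadable module that exports OPENVINO_CREATE_EXTENSIONS().
add_library(custom_extension MODULE ov_extension.cpp identity.cpp)
target_link_libraries(custom_extension PRIVATE openvino::runtime)
```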
See Also
########
* :doc:`OpenVINO Transformations <openvino_docs_transformations>`
* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extension API which enables registering
custom operations to support models with operations
not supported by OpenVINO.
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application you need to build a separate shared library implemented in C++ first and load it in Python using the ``add_extension`` API. Please refer to :ref:`Create a Library with Extensions <create_a_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.
Operation Class
###############
To add your custom operation, create a new class that extends ``ov::Op``, which is in turn derived from ``:ref:`ov::Node <doxid-classov_1_1_node>```, the base class for all graph operations in OpenVINO™. To add ``ov::Op``, include the next file:
.. doxygensnippet:: ./src/core/template_extension/new/identity.hpp
:language: cpp
:fragment: [op:common_include]
Follow the steps below to add a custom operation:
1. Add the ``OPENVINO_OP`` macro which defines a ``NodeTypeInfo`` object that identifies the type of the operation to the graph users and helps with dynamic type resolution. The type info of an operation currently consists of a string operation identifier and a string for operation version.
2. Implement default constructor and constructors that optionally take the operation inputs and attributes as parameters.
3. Override the shape inference method ``validate_and_infer_types``. This method is called multiple times during graph manipulations to determine the shapes and element types of the operations outputs. To access the input shapes and input element types, use the ``get_input_partial_shape()`` and ``get_input_element_type()`` methods of ``:ref:`ov::Node <doxid-classov_1_1_node>```. Set the inferred shape and element type of the output using ``set_output_type``.
4. Override the ``clone_with_new_inputs`` method, which enables graph manipulation routines to create copies of this operation and connect it to different nodes during optimization.
5. Override the ``visit_attributes`` method, which enables serialization and deserialization of operation attributes. An ``AttributeVisitor`` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware ``on_attribute`` helper. Helpers are already implemented for standard C++ types like ``int64_t``, ``float``, ``bool``, ``vector``, and for existing OpenVINO defined types.
6. Override ``evaluate``, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains an ``evaluate`` method, you also need to override the ``has_evaluate`` method, which reports whether ``evaluate`` is available for the operation.
Based on that, the declaration of an operation class can look as follows:
Operation Constructors
++++++++++++++++++++++
OpenVINO™ operation contains two constructors:
* Default constructor, which enables you to create an operation without attributes
* Constructor that creates and validates an operation with specified inputs and attributes
.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
:language: cpp
:fragment: [op:ctor]
``validate_and_infer_types()``
++++++++++++++++++++++++++++++
``:ref:`ov::Node::validate_and_infer_types <doxid-classov_1_1_node_1ac5224b5be848ec670d2078d9816d12e7>``` method validates operation attributes and calculates output shapes using attributes of the operation.
.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
:language: cpp
:fragment: [op:validate]
``clone_with_new_inputs()``
+++++++++++++++++++++++++++
``:ref:`ov::Node::clone_with_new_inputs <doxid-classov_1_1_node_1a04cb103fa069c3b7944ab7c44d94f5ff>``` method creates a copy of the operation with new inputs.
.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
:language: cpp
:fragment: [op:copy]
``visit_attributes()``
++++++++++++++++++++++
``:ref:`ov::Node::visit_attributes <doxid-classov_1_1_node_1a9743b56d352970486d17dae2416d958e>``` method enables you to visit all operation attributes.
.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
:language: cpp
:fragment: [op:visit_attributes]
``evaluate() and has_evaluate()``
+++++++++++++++++++++++++++++++++
``:ref:`ov::Node::evaluate <doxid-classov_1_1_node_1acfb82acc8349d7138aeaa05217c7014e>``` method enables you to apply constant folding to an operation.
.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
:language: cpp
:fragment: [op:evaluate]
@endsphinxdirective
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
@sphinxdirective
.. meta::
:description: Learn how to use frontend extension classes to facilitate the mapping
of custom operations from the framework model representation to the OpenVINO
representation.
The goal of this chapter is to explain how to use Frontend extension classes to facilitate
mapping of custom operations from framework model representation to OpenVINO representation.
Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to
understand the entire flow.
This section covers the case when a single operation in framework representation is mapped to a single operation in OpenVINO representation. This is called *one-to-one mapping*. There is `OpExtension` class that works well if all the following conditions are satisfied:
This API is applicable to new frontends only, which exist for ONNX, TensorFlow Lite, PaddlePaddle, and TensorFlow.
If a different model format is used, follow legacy
:doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
guide.
.. note::
This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__,
which demonstrates extension development details based on minimalistic ``Identity``
operation that is a placeholder for your real custom operation. You can review the complete code,
which is fully compilable, to see how it works.
.. note::
You can find more examples of extensions in `openvino_contrib repository <https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/custom_operations>`_.
Single Operation Mapping with OpExtension
#########################################
This section covers the case when a single operation in framework representation is mapped to a single
operation in OpenVINO representation. This is called *one-to-one mapping*. There is ``OpExtension``
class that works well if all the following conditions are satisfied:
1. Number of inputs to operation in the Framework representation is the same as in the OpenVINO representation.
2. Number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order correspondingly, e.g.
input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.
4. The same for outputs.
5. Each attribute of the OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is, so its type should be compatible.
.. note::
``OpExtension`` class is currently available for ONNX and TensorFlow frontends.
PaddlePaddle frontend has named inputs and outputs for operation (not indexed)
therefore OpExtension mapping is not applicable for this case.
The following example maps ONNX operation with the type of `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__
to OpenVINO template extension ``Identity`` class.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_Identity_header]
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_Identity]
The mapping doesn't involve any attributes, as the ``Identity`` operation doesn't have them.
Extension objects, like the just constructed ``extension``, can be added to the
OpenVINO runtime just before loading a model that contains custom operations:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_read_model]
Alternatively, extensions can be built into a separately compiled shared library.
Such a library can be used in Model Optimizer or ``benchmark_app``.
Read how to build and load such a library in the “Create library with extensions” chapter of
:doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.
If an operation has multiple inputs and/or outputs, they are mapped in order.
The type of elements in input/output tensors should match the expected types of the surrounding operations.
For example, if a custom operation produces the ``f32`` data type, the operation that consumes this output
should also support ``f32``. Otherwise, model conversion fails with an error, as no automatic type conversion is performed.
Converting to Standard OpenVINO Operation
+++++++++++++++++++++++++++++++++++++++++
The ``OpExtension`` class can be used when mapping to one of the operations from the standard OpenVINO
operation set is all you need and there is no class like ``TemplateExtension::Identity`` implemented.
Here is an example for a custom framework operation 'MyRelu'. Assume it is mathematically equivalent
to the standard ``Relu`` that exists in the OpenVINO operation set, but for some reason has the type name 'MyRelu'.
In this case, you can directly specify that the 'MyRelu' -> ``Relu`` mapping should be used:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_MyRelu]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_MyRelu]
In the resulting converted OpenVINO model, the “MyRelu” operation will be replaced by the standard operation
``Relu`` from the latest available OpenVINO operation set. Notice that when a standard operation is used,
it can be specified using just a type string (“Relu”) instead of a ``ov::opset8::Relu`` class name as a
template parameter for ``OpExtension``. This method is available for operations from the standard operation set only.
For a user custom OpenVINO operation, the corresponding class should always be specified as a template parameter,
as demonstrated with ``TemplateExtension::Identity``.
Attribute Mapping
++++++++++++++++++
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant.
Attributes in OpenVINO operators are identified by their names, so for frameworks that also have named attributes (like TensorFlow, PaddlePaddle, ONNX),
you can specify a name-to-name mapping. For frameworks where an OpenVINO operator's attributes can be mapped to one of the framework
operator's inputs (like PyTorch), there is a name-to-input-index mapping.
Named attributes mapping
^^^^^^^^^^^^^^^^^^^^^^^^
If the set of attributes in framework representation and OpenVINO representation completely match by their names and types,
no attribute mapping has to be specified in the ``OpExtension`` constructor parameters. The attributes are discovered and mapped
automatically based on the ``visit_attributes`` method, which should be defined for any OpenVINO operation.
Imagine you have a ``CustomOperation`` class implementation with two attributes, named ``attr1`` and ``attr2``:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation]
The original model in framework representation also has an operation named ``CustomOperation`` with the same
``attr1`` and ``attr2`` attributes. Then, with the following code:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is]
Both ``attr1`` and ``attr2`` are copied from framework representation to OpenVINO representation automatically.
If for some reason the attribute names differ while the values can still be copied “as-is”, you can pass an
attribute name mapping in the ``OpExtension`` constructor:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename]
Here, ``fw_attr1`` and ``fw_attr2`` are the names of the corresponding attributes in the framework operation representation.
If copying an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value.
For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``.
To achieve that, do the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set]
In conclusion, each attribute of the target OpenVINO operation should be initialized in one of the following ways:
1. Set automatically due to name matching
2. Mapped by attribute name
3. Set to a constant value
This is achieved by specifying maps as arguments for ``OpExtension`` constructor.
Attribute mapping with named inputs and outputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mappings in the previous examples assume that inputs and outputs of an operator in framework model representation come
in a particular order, so you can directly map framework operation input ``0`` to OpenVINO operation input ``0``, and so on.
That is not always the case: for frameworks like PaddlePaddle, operation inputs and outputs are identified by their names
and may be defined in any order. So to map them to OpenVINO operation inputs and outputs, you have to specify that order yourself.
This can be done by creating two vectors of strings, one for inputs and one for outputs, where the framework operation
input name at position ``i`` maps to the OpenVINO operation input at position ``i`` (and similarly for outputs).
Consider the following example. Like previously, we would like to map ``CustomOperation`` in the original model
to OpenVINO ``CustomOperation`` as-is (so their names and attribute names match). This time, the framework operation
inputs and outputs are not strictly ordered and are identified by their names ``A``, ``B``, ``C`` for inputs
and ``X``, ``Y`` for outputs. These can be mapped to the OpenVINO operation such that inputs
``A``, ``B``, ``C`` map to the OpenVINO ``CustomOperation``'s first, second, and third inputs, and outputs
``X`` and ``Y`` map to its first and second outputs, respectively.
Given that, such a custom operation can be registered as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is_paddle]
The second example shows how to map an operation with named inputs and outputs when the attribute names are different:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_paddle]
The last example shows how to map an operation with named inputs and outputs when, in order to correctly map the framework
operation to the OpenVINO operation, one of the attributes has to be set to a predefined value:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set_paddle]
Mapping attributes from operation inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For models (like PyTorch models), where operations have attributes on the input list, you can specify name to input index mapping.
For example, imagine you have created a custom OpenVINO operation that implements a variant of ELU activation function
with two attributes ``alpha`` and ``beta``:
.. math::
CustomElu=\left\lbrace
\begin{array}{ll}
beta * x & \textrm{if } x > 0 \\
alpha * (exp(x) - 1) & \textrm{otherwise}
\end{array}
\right.
Below is a snippet of ``CustomElu`` class showing how to define its attributes:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu]
Let's see an example of how you can map ``CustomElu`` to PyTorch `aten::elu <https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html>`_
(note that if ``beta`` is equal to ``1``, ``CustomElu`` works the same as ``aten::elu``).
``aten::elu`` has the ``alpha`` attribute second on its input list, but it does not have ``beta``.
So in order to map it to ``CustomElu``, you can use the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu_mapping]
This will map ``alpha`` to the second input and map ``beta`` attribute to constant value ``1.0f``.
An extension created this way can be used, for example, in a dynamic library. Please refer to :ref:`Create a library with extensions <create_a_library_with_extensions>`.
Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
########################################################################
``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition
and that lets you specify the mapping between this operation and a framework operation.
Let's consider the following example. Imagine you have an ONNX model with a ``CustomOp`` operation (which has a ``mode`` attribute),
a TensorFlow model with a ``CustomOpV3`` operation (which has an ``axis`` attribute), and a PaddlePaddle model with a ``CustomOp`` (with a ``mode`` attribute)
that has an input named "X" and an output named "Out". All of them can be implemented with a single OpenVINO operation ``CustomOp``, as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_headers]
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_CustomOp]
Let's take a closer look at the parameters this macro takes (note that there are two flavors; the second one is used
to map PaddlePaddle operations, where input and output names have to be specified).
.. code-block:: cpp
OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
OPENVINO_FRAMEWORK_MAP(framework, input_names, output_names, name, attributes_map, attributes_values)
- ``framework`` - framework name.
- ``name`` - the framework operation name. It's optional if the OpenVINO custom operation name
(that is the name that is passed as the first parameter to ``OPENVINO_OP`` macro) is the same
as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
- ``input_names`` - vector of strings that specify the names of inputs (needed to map PaddlePaddle to OpenVINO operations),
- ``output_names`` - vector of strings that specify the names of outputs (needed to map PaddlePaddle to OpenVINO operations),
- ``attributes_map`` - used to provide a mapping between OpenVINO operation attribute and
framework operation attribute. Contains key-value pairs, where key is an OpenVINO operation
attribute name and value is its corresponding framework operation attribute name.
This parameter is optional if the number of OpenVINO operation attributes and their names
match one-to-one with framework operation attributes.
- ``attributes_values`` - used to provide default values for OpenVINO operation attributes
that are not specified in ``attributes_map``. Contains key-value pairs, where key is an OpenVINO
operation attribute name and the value is this attribute value. This parameter cannot be provided
if ``attributes_map`` contains all of OpenVINO operation attributes or if ``attributes_map`` is not provided.
In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used three times.
First, OpenVINO ``CustomOp`` is mapped to ONNX ``CustomOp`` operation, ``m_mode`` attribute is mapped to ``mode``
attribute, while ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO ``CustomOp`` is mapped
to TensorFlow ``CustomOpV3`` operation, ``m_axis`` attribute is mapped to ``axis`` attribute, while ``m_mode``
attribute gets the default value ``"linear"``. Thirdly, OpenVINO ``CustomOp`` is mapped to PaddlePaddle ``CustomOp`` operation,
``m_mode`` attribute is mapped to ``mode`` attribute, while ``m_axis`` attribute gets the default value ``-1``.
This mapping also specifies the input name "X" and output name "Out".
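To make the parameter roles concrete, the ONNX mapping described above would look roughly like this inside the class body. This is a schematic sketch based on the parameter list, not a compilable excerpt; see the snippet above for the actual code:

```cpp
// Schematic only: maps OpenVINO "CustomOp" to ONNX "CustomOp",
// copying m_mode from "mode" and defaulting m_axis to -1.
OPENVINO_FRAMEWORK_MAP(onnx, "CustomOp",
                       {{"m_mode", "mode"}},  // attributes_map: OV name -> framework name
                       {{"m_axis", -1}})      // attributes_values: default for the unmapped attribute
```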
The last step is to register this custom operation as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
.. important::
To map an operation on a specific framework, you have to link to a respective
frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:
.. code-block:: sh
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
Mapping to Multiple Operations with ConversionExtension
#######################################################
Previous sections cover the case when a single operation is mapped to a single operation with optional
adjustment in names and attribute values. That is likely enough for your own custom operation with an existing
C++ kernel implementation. In this case, your framework representation and OpenVINO representation for the
operation are under your control, and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.
If one-to-one mapping is not possible, *decomposition to multiple operations* should be considered.
It is achieved by using the more verbose and less automated ``ConversionExtension`` class.
It enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO
operations, constructing a dependency graph of any complexity.
``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO
operation classes. Follow the :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` chapter to
learn how to use OpenVINO operation classes to build a model fragment for replacement.
The example below illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu”
from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
.. note::
``ThresholdedRelu`` is one of the standard ONNX operators, supported by the ONNX frontend
natively out-of-the-box. Here we re-implement it to illustrate how you can add similar
support for your own custom operation instead of ``ThresholdedRelu``.
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU_header]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU_header]
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU]
The next example shows how to use ``ConversionExtension`` to convert PyTorch
`aten::hardtanh <https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html>`_,
demonstrating how to use the ``get_values_from_const_input`` function to fetch an attribute value from an input:
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_aten_hardtanh]
To access original framework operation attribute value and connect to inputs, ``node`` object of type ``NodeContext`` is used. It has three main methods:
* ``NodeContext::get_input`` to get input with a given index,
* ``NodeContext::get_attribute`` to get attribute value with a given name,
* ``NodeContext::get_values_from_const_input`` to get an attribute with a given input index.
The conversion function should return a vector of node outputs that are mapped to
corresponding outputs of the original framework operation in the same order.
Some frameworks require output names of the operation to be provided during conversion.
For PaddlePaddle operations, it is generally necessary to provide names for all outputs using the ``NamedOutputs`` container.
Usually those names can be found in source code of the individual operation in PaddlePaddle code.
The next example shows such conversion for the ``top_k_v2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_paddle_TopK]
For TensorFlow framework, if an operation has more than one output, it is recommended to assign names to
those outputs using the ``NamedOutputVector`` structure which allows both indexed and named output access.
For a description of TensorFlow operations, including the names of their outputs, refer to the
`tf.raw_ops <https://www.tensorflow.org/api_docs/python/tf/raw_ops/>`__ documentation page.
The next example shows such conversion for the ``TopKV2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_tf_TopK]
@endsphinxdirective

# OpenVINO Graph Rewrite Pass {#openvino_docs_Extensibility_UG_graph_rewrite_pass}
@sphinxdirective
.. meta::
:description: Get to know how Graph Rewrite handles running multiple matcher passes on
ov::Model in a single graph traversal.
``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` serves for running multiple matcher passes on ``:ref:`ov::Model <doxid-classov_1_1_model>``` in a single graph traversal.
Example:
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [matcher_pass:graph_rewrite]
In addition, GraphRewrite handles nodes that were registered by MatcherPasses during their execution. These nodes will be added to the beginning of the sequence of nodes for pattern matching.
.. note::
When using ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` temporary GraphRewrite is used to execute single MatcherPass.
GraphRewrite has two algorithms for MatcherPasses execution. The first algorithm is straightforward: it applies each MatcherPass in registration order to the current node.
.. image:: ./_static/images/graph_rewrite_execution.png
However, this is not really efficient when you have a lot of registered passes. So first of all, GraphRewrite checks that all MatcherPass patterns have a type-based root node (meaning that the type of this node is not hidden inside a predicate).
It then creates a map from the registered MatcherPasses, which helps avoid the additional cost of applying each MatcherPass to each node.
.. image:: ./_static/images/graph_rewrite_efficient_search.png
.. note::
GraphRewrite execution algorithm cannot be set manually and depends only on root nodes registered inside MatcherPasses.
See Also
########
* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`
@endsphinxdirective

# OpenVINO Matcher Pass {#openvino_docs_Extensibility_UG_matcher_pass}
@sphinxdirective
.. meta::
:description: Learn how to create a pattern, implement a callback, register
the pattern and Matcher to execute MatcherPass transformation
on a model.
``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>``` is used for pattern-based transformations.
Template for MatcherPass transformation class:
.. doxygensnippet:: docs/snippets/template_pattern_transformation.hpp
:language: cpp
:fragment: [graph_rewrite:template_transformation_hpp]
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [graph_rewrite:template_transformation_cpp]
To use ``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>```, you need to complete these steps:
1. Create a pattern
2. Implement a callback
3. Register the pattern and Matcher
So let's go through each of these steps.
Create a pattern
################
Pattern is a single root ``:ref:`ov::Model <doxid-classov_1_1_model>```. The only difference is that you do not need to create a model object; you just need to create and connect opset or special pattern operations.
Then you take the last created operation and use it as the root of the pattern. This root node will be used as the root node in pattern matching.
.. note::
Any nodes in a pattern that have no consumers and are not registered as root will not be used in pattern matching.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
:language: cpp
:fragment: [pattern:simple_example]
The ``Parameter`` operation in the example above has its type and shape specified. These attributes are needed only to create the Parameter operation class and will not be used in pattern matching.
For more pattern examples, refer to the `pattern matching section <#pattern-matching>`__.
Implement callback
##################
Callback is an action applied to every pattern entrance. In general, a callback is a lambda function that takes a Matcher object with the detected subgraph.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
:language: cpp
:fragment: [pattern:callback_example]
The example above shows the callback structure and how Matcher can be used for accessing nodes detected by pattern.
The callback return value is ``true`` if the root node was replaced and another pattern cannot be applied to the same root node; otherwise, it is ``false``.
.. note::
It is not recommended to manipulate with nodes that are under root node. This may affect GraphRewrite execution as it is expected that all nodes that come after root node in topological order are valid and can be used in pattern matching.
MatcherPass also provides functionality for reporting newly created nodes, which can then take part in additional pattern matching.
If MatcherPass was registered in ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` or ``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>```, these registered nodes will be added for additional pattern matching.
That means that matcher passes registered in ``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` will be applied to these nodes.
The example below shows how a single MatcherPass can fuse a sequence of operations using the ``register_new_node`` method.
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [matcher_pass:relu_fusion]
.. note::

   If you register multiple nodes, please add them in topological order. We do not topologically sort these nodes, as it is a time-consuming operation.
Register pattern and Matcher
############################
The last step is to register Matcher and callback inside the MatcherPass pass. To do this, call the ``register_matcher`` method.
.. note::

   Only one matcher can be registered for a single MatcherPass class.

.. code-block:: cpp

   // Register matcher and callback
   register_matcher(m, callback);
Execute MatcherPass
###################
MatcherPass has multiple ways to be executed:
* Run on a single node - it can be useful if you want to run MatcherPass inside another transformation.
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [matcher_pass:run_on_node]
* Run on ``:ref:`ov::Model <doxid-classov_1_1_model>``` using GraphRewrite - this approach gives the ability to run MatcherPass on a whole ``:ref:`ov::Model <doxid-classov_1_1_model>```. Moreover, multiple MatcherPass transformations can be registered in a single GraphRewrite to be executed in a single graph traversal.
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [matcher_pass:graph_rewrite]
* Run on ``:ref:`ov::Model <doxid-classov_1_1_model>``` using ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` - this approach lets you register MatcherPass for execution on ``:ref:`ov::Model <doxid-classov_1_1_model>``` like other transformation types.
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
:language: cpp
:fragment: [matcher_pass:manager]
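The three execution options above can be sketched together as follows. This is a minimal illustration under stated assumptions: ``MyMatcherPass`` is a hypothetical MatcherPass subclass defined here only for demonstration, and ``model``/``node`` are assumed to exist already.

```cpp
#include <memory>

#include <openvino/core/model.hpp>
#include <openvino/op/relu.hpp>
#include <openvino/pass/graph_rewrite.hpp>
#include <openvino/pass/manager.hpp>
#include <openvino/pass/pattern/matcher.hpp>
#include <openvino/pass/pattern/op/wrap_type.hpp>

// Hypothetical MatcherPass used only to illustrate the execution options.
class MyMatcherPass : public ov::pass::MatcherPass {
public:
    MyMatcherPass() {
        auto relu_pattern = ov::pass::pattern::wrap_type<ov::op::v0::Relu>();
        auto matcher = std::make_shared<ov::pass::pattern::Matcher>(relu_pattern, "MyMatcherPass");
        register_matcher(matcher, [](ov::pass::pattern::Matcher& m) { return false; });
    }
};

void run_examples(const std::shared_ptr<ov::Model>& model,
                  const std::shared_ptr<ov::Node>& node) {
    // 1) Via ov::pass::Manager, like any other transformation type:
    ov::pass::Manager manager;
    manager.register_pass<MyMatcherPass>();
    manager.run_passes(model);

    // 2) Grouped into a GraphRewrite, executed in a single graph traversal:
    ov::pass::Manager manager2;
    auto rewrite = manager2.register_pass<ov::pass::GraphRewrite>();
    rewrite->add_matcher<MyMatcherPass>();
    manager2.run_passes(model);

    // 3) Directly on a single node, e.g. from inside another transformation:
    MyMatcherPass pass;
    pass.apply(node);
}
```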
Pattern Matching
################
Sometimes patterns cannot be expressed via regular operations, or expressing them is too complicated.
For example, you may want to detect a **Convolution->Add** sub-graph without specifying a particular input type for the Convolution operation, or create a pattern where some operations can have different types.
For these cases, OpenVINO™ provides additional helpers to construct patterns for GraphRewrite transformations.
There are two main helpers:
1. ``:ref:`ov::pass::pattern::any_input <doxid-namespaceov_1_1pass_1_1pattern_1a8ed84c3eed4610f117ee10d86d500e02>``` - helps to express inputs if their types are undefined.
2. ``:ref:`ov::pass::pattern::wrap_type <doxid-namespaceov_1_1pass_1_1pattern_1adfcd6031c95d7bace5f084e2aa105af8>`<T>`` - helps to express nodes of pattern without specifying node attributes.
Let's go through an example to better understand how it works:
.. note::

   Node attributes do not participate in pattern matching and are needed only for operation creation. Only operation types participate in pattern matching.
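For instance, the **Convolution->Add** case mentioned earlier could be sketched with these two helpers as follows. This is an illustrative sketch, not code from the snippets; the function and variable names are assumptions.

```cpp
#include <memory>

#include <openvino/op/add.hpp>
#include <openvino/op/convolution.hpp>
#include <openvino/pass/pattern/op/label.hpp>      // any_input
#include <openvino/pass/pattern/op/wrap_type.hpp>  // wrap_type

// Builds a Convolution->Add pattern without fixing input types or attributes.
std::shared_ptr<ov::Node> make_conv_add_pattern() {
    // any_input matches an arbitrary producer, so input types stay undefined.
    auto data    = ov::pass::pattern::any_input();
    auto weights = ov::pass::pattern::any_input();

    // Only the operation types participate in matching, not their attributes.
    auto conv = ov::pass::pattern::wrap_type<ov::op::v1::Convolution>({data, weights});
    return ov::pass::pattern::wrap_type<ov::op::v1::Add>(
        {conv, ov::pass::pattern::any_input()});
}
```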
The example below shows basic usage of ``ov::pass::pattern::any_input``.
Here we construct a Multiply pattern with an arbitrary first input and a Constant as the second input.
Also, as Multiply is a commutative operation, it does not matter in which order we set the inputs (any_input/Constant or Constant/any_input); both cases will be matched.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
:language: cpp
:fragment: [pattern:label_example]
This example shows how to construct a pattern when an operation has an arbitrary number of inputs.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
:language: cpp
:fragment: [pattern:concat_example]
This example shows how to use a predicate to construct a pattern. It also shows how to match a pattern manually on a given node.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
:language: cpp
:fragment: [pattern:predicate_example]
.. note::

   Be careful with manual matching, because the Matcher object holds matched nodes. To clear a match, use the ``m->clear_state()`` method.
See Also
########
* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`
@endsphinxdirective

# OpenVINO Model Pass {#openvino_docs_Extensibility_UG_model_pass}
@sphinxdirective
.. meta::
   :description: Learn how to use Model Pass transformation class to take entire
                 ov::Model as input and process it.
``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>``` is used for transformations that take entire ``:ref:`ov::Model <doxid-classov_1_1_model>``` as an input and process it.
Template for the ModelPass transformation class:
.. doxygensnippet:: docs/snippets/template_model_transformation.hpp
:language: cpp
:fragment: [model_pass:template_transformation_hpp]
.. doxygensnippet:: docs/snippets/template_model_transformation.cpp
:language: cpp
:fragment: [model_pass:template_transformation_cpp]
When using ``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>```, you need to override the ``run_on_model`` method, where you write the transformation code.
The return value is ``true`` if the original model has changed during the transformation (a new operation was added, operations were replaced, or node attributes were changed); otherwise, it is ``false``.
Also, ``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>```-based transformations can be executed via ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>```.
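As a minimal sketch of this structure (the class name and the loop body are assumptions for illustration, not the template transformation above), a ModelPass might look like:

```cpp
#include <memory>

#include <openvino/core/model.hpp>
#include <openvino/pass/pass.hpp>

// Illustrative ModelPass: walks all operations without modifying the model.
class SketchModelPass : public ov::pass::ModelPass {
public:
    OPENVINO_RTTI("SketchModelPass", "0");

    bool run_on_model(const std::shared_ptr<ov::Model>& model) override {
        for (const auto& op : model->get_ops()) {
            // Inspect or rewrite `op` here.
            (void)op;
        }
        // Return true only if the model was changed.
        return false;
    }
};
```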
See Also
########
* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`
@endsphinxdirective
