Compare commits

...

233 Commits

Author SHA1 Message Date
Karol Blaszczak
98a33ba770 [DOCS] 23.0 selector tool remove (#21315) 2023-11-27 15:17:44 +01:00
Sebastian Golebiewski
d0647322de Direct Github link to a specific notebook (#20358) 2023-10-10 15:45:18 +02:00
Alina Kladieva
9f2f5fc59f Bump product version to 2023.0.3 (#20147) 2023-09-29 12:43:45 +02:00
Yuan Hu
f80f99f5c5 Revert [Core] fix Memory Leak caused by create/inference request con… (#20050)
* Revert "[Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)"

This reverts commit b0394cc3e4.

* Install local wheel packages instead of PYPI ones (#19031)

* Try to use --no-index when install python packages

* Apply suggestions from code review

* Update .ci/azure/linux.yml

* Try to use conan.lock file (#19709)

* Fixed NCC style check (#20121)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
2023-09-28 19:39:23 +02:00
Sebastian Golebiewski
5127753183 Fix Issue 20097 - providing an easy to read Cmake command (#20130)
Porting: https://github.com/openvinotoolkit/openvino/pull/20126
2023-09-28 18:15:41 +04:00
bstankix
f4a3f0a223 [DOCS] Bugfix coveo sa-search url (#20056) 2023-09-26 15:15:29 +02:00
Ilya Lavrenov
319711fca5 Fixed issue 19784 (#19788)
* Fixed issue 19784

* Update .ci/azure/linux_debian.yml

replaced 'focal' with 'ubuntu20'
2023-09-13 14:59:55 +04:00
bstankix
8c31771e6c [DOCS] Port coveo search engine (#19751) 2023-09-11 12:40:01 +00:00
Maciej Smyk
29d01a5cbf img-fix (#19701) 2023-09-08 15:46:56 +02:00
Maciej Smyk
002537729b fix (#19697) 2023-09-08 13:33:44 +02:00
Ilya Lavrenov
088fe50dd9 Update versions in apt, yum installation docs (#19688) 2023-09-08 11:06:15 +02:00
Maciej Smyk
be021afc0b Update supported_model_formats.md (#19643) 2023-09-07 11:02:43 +02:00
Maciej Smyk
74ee34e925 Extend sphinx_sitemap to add custom metadata (#19641) 2023-09-07 09:17:07 +02:00
Maciej Smyk
c516a7279e update (#19621) 2023-09-07 08:38:50 +02:00
Maciej Smyk
8252d74662 [DOCS] contributing guidelines (#19623) 2023-09-06 16:18:46 +02:00
Alexander Suvorov
a3e746401e [DOCS] Update Selector Tool for 2023.0.2 2023-09-05 21:26:49 +02:00
Maciej Smyk
68db3844d3 [DOCS] Fixing Optimize Preprocessing in notebooks 120 and 230 for 23.0 2023-09-05 18:11:21 +02:00
Maciej Smyk
4eb71aa5e0 [DOCS] Link fix for 23.0 (#19592)
* 2023.0 link fix

* Update README.md
2023-09-05 10:06:13 +02:00
Karol Blaszczak
256a6d2572 [DOCS] 23.0.2 adjustment (#19604) 2023-09-05 09:47:01 +02:00
Ilya Lavrenov
6fbcb94e20 Fixed build with static protobuf for brew publishing (#19590) 2023-09-04 19:35:31 +04:00
Przemyslaw Wysocki
2ac63aea24 Backport Robust detection of Cython version (#19537) (#19547)
* Robust detection of Cython version (#19537)

* Aligned protobuf version in conanfile.txt with onnx recipe (#19525)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-04 17:03:14 +04:00
Sebastian Golebiewski
e8d44d4502 Adding Quantizing with Accuracy Control using NNCF notebook (#19588) 2023-09-04 14:56:47 +02:00
Maciej Smyk
816c2a24de [DOCS] Fix for Install from Docker Image for 23.0 (#19581)
* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-09-04 11:16:22 +02:00
Maciej Smyk
177aa10040 [DOCS] Torch.compile() documentation for 23.0 (#19542)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-04 08:38:52 +02:00
Sebastian Golebiewski
68dfd60057 add-253 (#19502) 2023-08-30 13:46:40 +02:00
Sebastian Golebiewski
fd519c711a improve-snippets (#19498)
Porting: https://github.com/openvinotoolkit/openvino/pull/19479
2023-08-30 13:11:03 +02:00
Przemyslaw Wysocki
3d17c656d1 Comment cmake check (#19491) 2023-08-30 12:56:18 +02:00
Maciej Smyk
baab44c4f4 [DOCS] Docker Guide Update for 23.0 (#19448)
* docker-update

* id fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-08-29 08:45:20 +02:00
Sebastian Golebiewski
d972830c9a update-notebooks (#19454)
Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
2023-08-28 15:03:53 +02:00
Karol Blaszczak
f322762818 [DOCS] speech sample deprecation port 23.0 2023-08-25 12:41:30 +02:00
Karol Blaszczak
c2da07a8e7 [DOCS] adjustment to supported devices port 23.0 2023-08-25 12:10:44 +02:00
Sebastian Golebiewski
c08f68f1e5 [DOCS] Updating MO documentation for 23.0 (#19373)
* restructure-mo-docs

* apply-commits-18214

Applying commits from:

https://github.com/openvinotoolkit/openvino/pull/18214

* update

* Apply suggestions from code review

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Apply suggestions from code review

* Update model_introduction.md

* Update docs/resources/tensorflow_frontend.md

* Create MO_Python_API.md

* Update Deep_Learning_Model_Optimizer_DevGuide.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
2023-08-23 19:24:15 +02:00
Sebastian Golebiewski
433d2f2750 CVS-113150 (#19370)
Porting:

https://github.com/openvinotoolkit/openvino/pull/18495
2023-08-23 18:28:04 +02:00
Sebastian Golebiewski
6b3863632d update-notebooks (#19339) 2023-08-22 15:37:51 +02:00
Sebastian Golebiewski
5509db87af link-to-frontend (#19333) 2023-08-22 12:54:27 +02:00
Artyom Anokhov
e662b1a330 Bump OV version to 2023.0.2 (#19329) 2023-08-22 11:02:36 +02:00
Sebastian Golebiewski
0aa5a8f704 port-19307 (#19310)
Porting: https://github.com/openvinotoolkit/openvino/pull/19307
Updating tutorials: adding table of contents and new notebooks.
2023-08-21 16:47:28 +02:00
Marcin Kusmierski
54f6f11186 [GNA] Switch GNA library to version 03.05.00.2116 (#19296)
Co-authored-by: Szymon Irzabek <szymon.jakub.irzabek@intel.com>
2023-08-21 14:50:15 +02:00
hyunback kim
ea482d8391 [GPU] Do not select onednn format for asymmetric weight (#19140) (#19265) 2023-08-21 11:30:33 +04:00
Karol Blaszczak
a93f320a48 Update prerelease_information.md (#19283) 2023-08-18 20:00:49 +02:00
Karol Blaszczak
26e9c69440 [DOCS] pre-releasenotes 23.1 Aug (#19271) 2023-08-18 17:43:11 +02:00
Marcin Kusmierski
4727efdb3c [GNA] Fix memory leak in GNA plugin. (#19257)
* Disabled transformation introducing memory leak.
2023-08-18 13:39:22 +02:00
Sergey Shlyapnikov
b7415f5c3b [GPU] Prevent Conv's input data type changing at reorder_inputs pass (#19042) (#19245) 2023-08-17 17:57:14 +04:00
Sebastian Golebiewski
0262662050 add-slash (#19243) 2023-08-17 11:23:37 +04:00
Maciej Smyk
576b99fee9 [DOCS] Removal of redundant files for 23.0 2023-08-16 13:22:43 +02:00
bstankix
4e790d7b46 [DOCS] Fix parameter name in design-tabs (#19212) 2023-08-16 07:40:39 +00:00
Yuan Hu
b0394cc3e4 [Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)
Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>
2023-08-15 10:55:36 +04:00
bstankix
18cb7c94c1 [DOCS] Add state retention to design-tabs (#19180) 2023-08-14 13:46:50 +00:00
Wanglei Shen
064364eb5e Support Win7 in cpu information parser (#19110) 2023-08-11 09:54:51 +04:00
Shuangji Yang
5ded6fb699 fix bug on conversion of gather to sequeeze (#19094) 2023-08-10 22:25:59 +04:00
Maciej Smyk
eabf199c3a Adds Python wheel requirements info to docs (#19125) 2023-08-10 21:50:10 +04:00
Sebastian Golebiewski
0e0d166746 add-numpy (#19128) 2023-08-10 21:39:44 +04:00
Stefania Hergane
a6351294e7 [EISW-89820] [releases/2023/0] Rename VPUX to NPU (#19002)
* Change `VPUX` occurences to `NPU`

* Change library for `NPU` device in `api_conformance_helpers.hpp`

* Rename `MYRIAD plugin`

* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU

* Rename DEVICE_KEEMBAY to DEVICE_NPU

* Rename VPUX_DEVICE_NAME to NPU_DEVICE_NAME

* Rename vpu_patterns to npu_patterns

* Change VPUX occurences to NPU after review

* Remove VPUX device comment

* Change VPUX/vpu to NPU in tests/time_tests

* Rename VPU to NPU in docs after review

* Rename VPU to NPU in tools/pot after review

* Renamed vpu.json to npu.json in tools/pot after review

* Restore CommonTestUtils::DEVICE_KEEMBAY

---------

Co-authored-by: MirceaDan99 <mircea-aurelian.dan@intel.com>
2023-08-09 00:19:25 +04:00
Maciej Smyk
cac7e2e1c4 [DOCS] Change sample structure for 23.0 (#19058) 2023-08-08 14:18:48 +00:00
Karol Blaszczak
13e674b1f8 Docs installation guide restructuring port (#19054) 2023-08-08 16:11:51 +02:00
Maciej Smyk
a55d1c21ee [DOCS] Basic quantization flow additions for 23.0 (#19059) 2023-08-08 15:59:47 +02:00
Marcin Kacprzak
91a4f73971 * [GNA] Fix for GeminiLake detection (#18653) (#18994)
* [GNA] Added HWGeneration::GNA_1_0_E enumerator
* [GNA] Extended a few tests with GNA1.0

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
2023-08-08 00:01:13 +04:00
Przemyslaw Wysocki
84a3aab115 [PyOV] Backport of wheel building fix (#19013)
* Add upper bound

* backport flake fix

* Support of protobuf >= 21 (#18351)

* Corrected typo

* Ability to compile with newer protobuf versions

* Limit numpy (#18406)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)" (#18681)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)"

This reverts commit 787796d88f.

* Suppressed clang warning

* Restrict scipy module version for POT (#18237)

* Restrict scipy module version for POT

Latest release https://pypi.org/project/scipy/1.11.0 causes dependency conflicts

* Bump OMZ to include scipy restriction

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-07 18:23:05 +04:00
Tatiana Savina
4ddeecc031 delete gna content (#19000) 2023-08-07 10:03:32 +02:00
Maciej Smyk
9c10e33fc7 Update model_optimization_guide.md (#18954) 2023-08-04 14:49:52 +02:00
Karol Blaszczak
c32b9a0cd5 [DOCS] fix ext directive 23.0 (#18989) 2023-08-04 12:23:34 +02:00
Karol Blaszczak
c32eef361b [DOCS] update prereleasenotes (#18958) 2023-08-03 13:13:01 +02:00
Sebastian Golebiewski
8d54bdd4d5 [DOCS] Compile CPU plugin for ARM platforms - for 23.0 (#18765)
* Update build_raspbian.md

* update-instructions

* remove-cross-compilation

* Update build_raspbian.md
2023-08-01 11:19:30 +02:00
Karol Blaszczak
64395f0d5e [DOCS] new benchmark data port 23.0 (#18873)
[DOCS] new benchmark data (#18532)
2023-08-01 08:36:26 +02:00
bstankix
9562161f76 Bugfix newsletter and footer scripts (#18854) 2023-07-28 16:12:37 +02:00
Maciej Smyk
cb59f057a0 [DOCS] Link fix for Get Started for 23.0 (#18799)
* Update get_started.md

* Update get_started.md
2023-07-26 12:25:55 +02:00
bstankix
28948502a9 [DOCS] Port newsletter and carousel changes from nightly (#18780) 2023-07-25 12:24:40 +00:00
Maciej Smyk
34748ae3b5 background fix for images (#18758) 2023-07-25 07:59:57 +02:00
bstankix
06eb4afd41 Port changes from nightly (#18743) 2023-07-24 12:00:22 +00:00
Karol Blaszczak
967d74ade6 [DOCS] conformance update port (#18735)
port: https://github.com/openvinotoolkit/openvino/pull/18732

conformance table
fix Add in ONNX layers
2023-07-24 11:44:23 +02:00
Sebastian Golebiewski
5ae4e2bb2d update (#18623) 2023-07-20 14:31:42 +02:00
Karol Blaszczak
22f6a3bcc0 [DOCS] minor MO fixes (#18606) 2023-07-18 16:22:04 +02:00
Sebastian Golebiewski
e842453865 realignment (#18621) 2023-07-18 16:03:42 +02:00
Maciej Smyk
2abbec386f Update configurations-for-intel-gpu.md (#18610) 2023-07-18 12:35:43 +02:00
Sebastian Golebiewski
afb2ebcdd4 [DOCS] Updating Interactive Tutorials for 23.0 (#18556)
* update-notebooks

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

---------

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>
2023-07-14 14:07:40 +02:00
Karol Blaszczak
83e45c5ff3 [DOCS] GNA disclaimer port (#18507)
port: https://github.com/openvinotoolkit/openvino/pull/18431
2023-07-12 12:40:28 +02:00
Maciej Smyk
bdb6a44942 [DOCS] Code block update for 23.0 (#18451)
* code-block-1

* Update Convert_Model_From_Paddle.md

* code-block force

* fix

* fix-2

* Update troubleshooting-steps.md

* code-block-2

* Update README.md
2023-07-11 10:50:03 +02:00
Maciej Smyk
17cd26077a Update installing-openvino-docker-linux.md (#18459) 2023-07-11 08:34:12 +02:00
Maciej Smyk
247eb8a9b9 [DOCS] Tab reorder for 23.0 (#18389)
* tabs-1

* Update configure_devices.md

* tab-2

* tab-order

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-linux.md

* win-linux-fix

* Update GPU_Extensibility.md
2023-07-07 14:31:05 +02:00
bstankix
68b8748c9f [DOCS] Add global footer
port: https://github.com/openvinotoolkit/openvino/pull/18374
2023-07-06 08:25:46 +02:00
Sebastian Golebiewski
852efa2269 [DOCS] Fix references in installation guide for 23.0 (#18384) 2023-07-06 08:04:42 +02:00
Karol Blaszczak
303fb7a121 [DOCS] menu bug fix 23.0 (#18353) 2023-07-04 07:59:17 +00:00
Tatiana Savina
7f1c6c8ce1 update links to rn (#18338) 2023-07-03 19:03:51 +02:00
Sebastian Golebiewski
55530b47c0 [DOCS] Adding metadata to articles for 2023.0 (#18332)
* adding-metadata

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-07-03 15:00:28 +02:00
Karol Blaszczak
69a6097a30 [DOCS] update for install 23.0.1 (#18335) 2023-07-03 14:57:57 +02:00
Karol Blaszczak
1f759456d6 [DOCS] Update Selector Tool 2023.0.1 (#18336)
authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-07-03 14:56:59 +02:00
Karol Blaszczak
b05a7f2ed6 [DOCS] adjustments for ST and cookie policy (#18316) 2023-07-03 08:47:00 +02:00
Tatiana Savina
f4709ffe8b [DOCS] Port docs to release branch (#18317)
* [DOCS] Local distribution page improvements  (#18049)

* add slider with os specific libs

* doc review

* local distrib doc changes

* [DOCS] Added local distribution libraries path (#18191)

* add relative path to the table

* add another column

* new table format

* fix build issue

* fix tab name

* remove old table

* format fixes

* change font

* change path windows

* change tabset name

* add arm and 86_64 tables

* remove list dots

* [DOCS] Add FrontEnd API note (#18154)

* add note

* fix typo

* add advance cases note

* tf doc note

* wording change
2023-06-30 15:34:22 +02:00
Karol Blaszczak
bb1e353e58 [DOCS] supported models page update (#18298) 2023-06-29 15:23:18 +02:00
Karol Blaszczak
99c7bbc25e [DOCS] port selector tool to 23.0 (#18295)
port:
https://github.com/openvinotoolkit/openvino/pull/17799
https://github.com/openvinotoolkit/openvino/pull/18286

authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-06-29 14:36:59 +02:00
Maciej Smyk
33cfcb26fb [DOCS] WSL2 Docker update for 23.0 (#18293)
* windows-fix

* Update installing-openvino-docker-linux.md

* docker fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-06-29 13:26:25 +02:00
Artyom Anokhov
39c84e03f7 Updated 3lvl domain for RPMs from fedoraproject.org (#18289) 2023-06-29 11:47:46 +02:00
Karol Blaszczak
f59126dde0 [DOCS] reset the pre-release notes pages port 23 (#18276)
port: https://github.com/openvinotoolkit/openvino/pull/18177
2023-06-29 08:17:02 +02:00
Karol Blaszczak
209d506341 [DOCS] top bar fixes port 23.0
port: #18261

FAQ for pot gets drop-downs
Homepage css improvement
2023-06-28 14:13:38 +02:00
bstankix
a710adf81a Add sitemap configuration (#18271) 2023-06-28 13:50:42 +02:00
Artyom Anokhov
fa1c41994f Bump version to 2023.0.1. Updated conflicted version for APT/YUM (#18268) 2023-06-28 13:36:31 +02:00
bstankix
caae459f54 Change html_baseurl to canonical (#18253) 2023-06-27 10:25:37 +02:00
Karol Blaszczak
7ef5cbff30 [DOCS] benchmark update for OVMS 23.0 (#18010) (#18250) 2023-06-27 09:16:16 +02:00
Maciej Smyk
85956dfa4d [DOCS] Debugging Auto-Device Plugin rst shift + Notebooks installation id align for 23.0 (#18241)
* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* notebooks id fix

* fixes

* Update AutoPlugin_Debugging.md
2023-06-27 08:15:51 +02:00
Maciej Smyk
2d98cbed74 [DOCS] Table directive update + Get Started fix for 23.0 (#18217)
* Update notebooks-installation.md

* Update notebooks-installation.md

* Update performance_benchmarks.md

* Update openvino_ecosystem.md

* Update get_started_demos.md

* Update installing-model-dev-tools.md

* Update installing-model-dev-tools.md

* Update installing-openvino-brew.md

* Update installing-openvino-conda.md

* fix

* Update installing-openvino-apt.md

* Update installing-openvino-apt.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-windows.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-windows.md

* tabs

* fixes

* fixes2

* Update GPU_RemoteTensor_API.md

* fixes

* fixes

* Get started fix
2023-06-26 10:27:12 +02:00
Sebastian Golebiewski
5d47cedcc9 updating-tutorials (#18213) 2023-06-26 09:42:21 +02:00
Ilya Lavrenov
9ab5a8f5d9 Added cmake_policy call to allow IN_LIST in if() (#18226) 2023-06-24 22:51:54 +04:00
Maciej Smyk
ad84dc6205 [DOCS] Docker and GPU update for 23.0 (#17851)
* docker gpu update

* ref fix

* Update installing-openvino-docker-linux.md

* fixes

* Update DeviceDriverVersion.svg

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* fixes from review

* Update configurations-for-intel-gpu.md

* Update configurations-for-intel-gpu.md

* Update deployment_migration.md

---------

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>
2023-06-22 15:47:59 +02:00
bstankix
bd3e4347dd [DOCS] gsearch 2023-06-22 13:08:00 +02:00
Tatiana Savina
0adf0e27ee [DOCS] Port docs fixes (#18155)
* change classification notebook (#18037)

* add python block (#18085)
2023-06-21 11:47:37 +02:00
Tatiana Savina
cb7cab1886 [DOCS] shift to rst - opsets F,G (#17253) (#18152)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-06-20 14:22:13 +02:00
Georgy Krivoruchko
fd48b0bbdc [TF FE] Workaround for Broadcast/Concat issue with empty tensors (#18140)
* Added transformation for Concat
* Added test
* CI fix
* Fixed behavior of the "empty tensor list" test
2023-06-20 13:13:55 +04:00
Mateusz Bencer
691630b68c [PORT TO 23.0][ONNX FE] Allow to mix new and legacy extensions (#18116)
* [ONNX FE] Allow to mix new and legacy extensions

* added unit test

* Update op_extension.cpp

Fixed compilation with Conan

Ported https://github.com/openvinotoolkit/openvino/pull/18126

* Update op_extension.cpp

Fixed code style

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-17 13:17:37 +00:00
Nikita Malinin
205feb9421 ENABLE_MMAP property pos (#17896) (#18106)
(cherry picked from commit 29f06692d6)
2023-06-17 12:48:47 +04:00
Georgy Krivoruchko
5ef750d5b3 Fixed Windows behavior if folder path on input (#18113) 2023-06-16 16:39:33 +00:00
Zlobin Vladimir
80fddfe1c2 Update open_model_zoo submodule (#18110)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3790
2023-06-16 18:19:07 +04:00
Maciej Smyk
7eb59527a0 [DOCS] CMake options description in build guide for 23.0 2023-06-16 10:37:53 +02:00
Sebastian Golebiewski
21fdda5609 [DOCS] Restyling tabs for 23.0
Porting: #18054

Introducing changes in css style for tabs from sphinx-design extension.
2023-06-14 11:43:10 +00:00
Tatiana Savina
9983f74dc7 fix link (#18050) 2023-06-14 08:54:11 +00:00
Maciej Smyk
ef0b8161c9 Update build_linux.md (#18046) 2023-06-14 10:45:16 +02:00
Sebastian Golebiewski
9e2dacbc53 [DOCS] Restyling elements on home page - for 23.0 2023-06-13 08:50:20 +02:00
Sebastian Golebiewski
d299be4202 [DOCS] Fixing formatting issues in articles - for 23.0 (#18004)
* fixing-formatting
2023-06-13 07:59:16 +02:00
Tatiana Savina
99fe2e9bdc add tabs (#18007) 2023-06-12 16:45:34 +02:00
Karol Blaszczak
6668ec39d7 [DOCS] Adding Datumaro document into OV Ecosystems (#17944) (#17968)
* add Datumaro document
* add datumaro into toctree

authored-by: Wonju Lee <wonju.lee@intel.com>
2023-06-09 13:22:43 +02:00
Maciej Smyk
1e5dced9d4 Update build_linux.md (#17967) 2023-06-09 15:03:45 +04:00
Zlobin Vladimir
7d73bae243 Update open_model_zoo submodule (#17902)
* Update open_model_zoo submodule

Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3779

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-09 13:52:38 +04:00
Sebastian Golebiewski
d8d4fb9c94 [DOCS] Fixing broken code blocks for 23.0 (#17960)
* code-blocks-fixes
2023-06-09 09:08:27 +02:00
Ilya Lavrenov
11cde296b7 Updated refs to dependency repositories (#17953) 2023-06-08 20:14:48 +04:00
Ilya Lavrenov
44f8dac403 Align tabs in install archives Linux (#17947) (#17950) 2023-06-08 14:49:28 +02:00
Tatiana Savina
41b4fd1057 add enter dir (#17897) 2023-06-08 13:08:41 +02:00
Sebastian Golebiewski
0f89782489 update-deployment-manager (#17904) 2023-06-08 10:31:50 +04:00
Tatiana Savina
d894716fad [DOCS] Add sudo to uninstall (#17929)
* add sudo to uninstall

* Update uninstalling-openvino.md
2023-06-07 18:18:12 +02:00
Tatiana Savina
f6fd84d2e1 fix archive link (#17918) 2023-06-07 09:19:05 +00:00
Tatiana Savina
648b2ad308 [DOCS] Model optimization paragraph fix (#17907)
* fix mo guide paragraph

* fix format

* fix paragraph

* remove extra line
2023-06-07 10:45:01 +02:00
Tatiana Savina
ea5c1b04e5 [DOCS] Fix list and links to POT (#17887)
* change link to POT

* change header label

* fix typo
2023-06-06 10:59:05 +02:00
Karol Blaszczak
f3d88cbf99 DOCS post-release adjustments (#17876) 2023-06-05 15:43:45 +02:00
Tatiana Savina
e824e482b1 fix apt and yum links (#17877) 2023-06-05 13:11:21 +03:00
Sebastian Golebiewski
e4d0021e2c update-diagram (#17872) 2023-06-05 08:17:26 +02:00
Artyom Anokhov
e74cb4084d [docs] Conda update (#17861)
* Adding installing OV via Conda-Forge for MacOS

* Adding section Compiling with OpenVINO™ Runtime from Conda-Forge

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* installing-openvino-conda: Fixed title

* installing-openvino-macos-header: Fixed order for links

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-06-02 21:54:11 +04:00
Tatiana Savina
e843e357cd change notebooks links (#17857) 2023-06-02 13:14:26 +02:00
Tatiana Savina
ecc502733d [DOCS] Change downloads directory link (#17846)
* installation link

* fix path
2023-06-01 19:04:02 +04:00
bstankix
d1de793552 Port default platform type selection from nightly (#17845) 2023-06-01 15:46:22 +02:00
Karol Blaszczak
ebaf6a2fcb DOCS homepage update (#17843) 2023-06-01 15:16:20 +02:00
Anton Voronov
88b006bce9 [DOC] cpu documentation fixes (#17816)
* [DOC] cpu documentation fixes

* fixed typos
2023-06-01 10:17:48 +02:00
Ilya Lavrenov
4aae068125 Update archive names for 2023.0 release (#17831) 2023-06-01 12:07:06 +04:00
Ilya Lavrenov
41c37c8af9 Updated badges for 2023.0 (#17832) 2023-06-01 12:03:20 +04:00
Sebastian Golebiewski
f40f0fa58b [DOCS] convert_model() as a default conversion path - for 23.0 (#17751)
Porting: https://github.com/openvinotoolkit/openvino/pull/17454

Updating MO documentation to make convert_model() a default conversion path.
2023-05-31 19:22:54 +02:00
Tatiana Savina
20dc436b6f DOCS Fix build links (#17821)
* change doc vers

* fix links
2023-05-31 17:45:57 +02:00
Sebastian Golebiewski
b2b7a57a4c update-tutorials (#17812) 2023-05-31 15:48:08 +02:00
Tatiana Savina
4481bfa17e [DOCS] Review release docs (#17793)
* review docs

* fix link to notebook

* fix build

* fix links

* remove bracket
2023-05-31 15:46:53 +02:00
Sebastian Golebiewski
366a5467d1 [DOCS] benchmark 23.0 update - port from master (#17806)
Porting: #17789

new benchmarking data
2023-05-31 12:58:49 +02:00
Karol Blaszczak
4be1dddb21 DOCS operation support articles update (#17449) (#17809)
port: #17449

conformance table added
ARM merged with CPU
precision support and layout tables removed from the overview device article (info available in device articles)
2023-05-31 10:56:25 +00:00
Ilya Lavrenov
3fd9b8c3b7 Updated install docs for 2023.0 (#17764) 2023-05-31 13:37:30 +04:00
Maciej Smyk
66528622a8 [DOCS] Link adjustment for dev docs + fix to build.md CPU link for 23.0 (#17747)
Port from #17744

JIRA Ticket: 110042

Update of hardcoded links to switch references from latest, nightly and 2022.3 (and earlier) to 2023.0.

JIRA Ticket: 111393

Fix for the Mac (Intel CPU) link name (it should be Intel CPU instead of Intel GPU).
2023-05-31 11:34:22 +02:00
Tatiana Savina
4fb2cebf28 [DOCS] Compile tool docs port (#17753)
* [DOCS] Compile tool docs change (#17460)

* add compile tool description

* change refs

* remove page to build docs

* doc reference fix

* review comments

* fix comment

* snippet comment

* Update docs/snippets/compile_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* change snippet name

* create ov object

* code block fix

* cpp code block

* include change

* code test

* change snippet

* Update docs/snippets/export_compiled_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fixed compile_tool install (#17666)

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-31 13:27:01 +04:00
Wanglei Shen
c18a24c05b [DOC] Add multi threading for 2023.0 release in CPU plugin document (#17788) 2023-05-31 12:54:42 +04:00
Anton Voronov
95f0005793 [DOC][CPU] Documentation update (#17786) 2023-05-31 10:37:14 +04:00
bstankix
9ac239de75 Port ability to build notebooks from local files from nightly (#17798) 2023-05-30 16:27:11 +02:00
Tatiana Savina
ad5c0808a6 add pad1 (#17760) 2023-05-30 10:39:05 +02:00
Tatiana Savina
66c6e125cf [DOCS] Port workflow docs (#17761)
* [DOCS]  Deploy and run documentation sections (#17708)

* first draft

* change name

* restructure

* workflow headers change

* change note

* remove deployment guide

* change deployment description

* fix conflicts

* clean up conflict fixes
2023-05-30 10:38:47 +02:00
Maciej Smyk
53bfc41a74 [DOCS] Configuring devices article update for 2023.0 (#17757)
* Update configure_devices.md
2023-05-29 09:02:25 +02:00
Karol Blaszczak
9b72c33039 [DOCS] install-guide fix port to 23.0 (#17672) 2023-05-25 18:39:03 +02:00
Zlobin Vladimir
c0e9e1b1a1 Update open_model_zoo submodule (#17733)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3770

Ticket: 110042
2023-05-25 15:56:45 +00:00
Daria Mityagina
720e283ff1 Update comments and help text (#17710) 2023-05-24 22:12:27 +02:00
Tatyana Raguzova
0e87a28791 [build_samples] Using make instead of cmake (#17560) 2023-05-24 22:43:42 +04:00
Ilya Lavrenov
6d17bbb7e9 Conan port (#17625) 2023-05-24 22:07:50 +04:00
Maxim Vafin
cebbfe65ac [DOCS] Add examples of using named outputs in extensions (#17622)
* [DOCS]Add examples of using named outputs in extensions

* Fix opset

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Extensibility_UG/frontend_extensions.md

* Add reference to external docs

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-05-24 14:15:01 +02:00
Aleksandr Voron
c4c6567182 [DOCS][CPU] Update ARM CPU plugin documentation (#17700) 2023-05-24 15:42:32 +04:00
Karol Blaszczak
1a9ce16dd6 [DOCS] framework deprecation notice (#17484) (#17537)
Port: #17484
A new PR will be created with more changes, as suggested by jane-intel and slyalin. The "deprecated" label for articles and  additional content on converting models to ONNX will be covered then.
2023-05-24 11:53:56 +02:00
Maciej Smyk
4e8d5f3798 [DOCS] link fix (#17658) 2023-05-23 07:31:19 +02:00
Przemyslaw Wysocki
7351859ec2 limit linter (#17624) 2023-05-22 23:58:06 +04:00
Ilya Lavrenov
405c5ea03a Install libtbb12 on U22 (#17653) 2023-05-22 17:52:59 +04:00
Sebastian Golebiewski
183253e834 [DOCS] Update Interactive Tutorials - for 23.0 (#17600)
port: https://github.com/openvinotoolkit/openvino/pull/17598/
2023-05-22 14:46:14 +02:00
Maciej Smyk
cfea37b139 [DOCS] RST fixes for 23.0 (#17606)
* fixes
2023-05-22 10:33:32 +02:00
Tatiana Savina
34f00bd173 DOCS Update optimization docs with NNCF PTQ changes and deprecation of POT (#17398) (#17633)
* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update home.rst

* Update ptq_introduction.md

* Update Introduction.md

* Update Introduction.md

* Update Introduction.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update ptq_introduction.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update Introduction.md

* Update FrequentlyAskedQuestions.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* added code snippet (#1)

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Delete ptq_introduction.md

* Update FrequentlyAskedQuestions.md

* Update Introduction.md

* Update quantization_w_accuracy_control.md

* Update introduction.md

* Update basic_quantization_flow.md code blocks

* Update quantization_w_accuracy_control.md code snippets

* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update model_optimization_guide.md

* Optimization docs proofreading  (#2)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update images (#3)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* update images

* resolve conflicts

* resolve conflicts

* change images to original ones

* resolve conflicts

* update images

* fix conflicts

* Update model_optimization_guide.md

* Update docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_onnx.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_aa_openvino.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_openvino.py



* table format fix

* Update headers

* Update qat.md code blocks

---------

Co-authored-by: Maksim Proshin <maksim.proshin@intel.com>
Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>
2023-05-19 15:37:41 +00:00
Tatiana Savina
17326abb72 [MO][TF FE] Document freezing as essential step for pruning SM format (#17595) (#17632)
* [MO][TF FE] Document freezing as essential step for pruning SM format



* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md



---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-19 15:32:57 +00:00
Ilya Lavrenov
8601042bea Added python 3.11 for deployment tool (#17627) 2023-05-19 18:08:49 +04:00
Artyom Anokhov
39958e0dc1 Updated APT/YUM instructions with actual version. Added instructions for Ubuntu22. Updated subfolders naming for APT. (#17561) 2023-05-19 12:46:40 +02:00
Maciej Smyk
6fc9840e32 [DOCS] Link adjustment for 23.0 (#17604) 2023-05-18 15:10:13 +02:00
Ekaterina Aidova
b4452d5630 update OMZ submodule to fix bug (#17570) 2023-05-17 05:51:59 -07:00
Evgenya Stepyreva
4c69552656 Normalize_L2 relax constant input restriction (#17567)
* Normalize_L2 relax constant input restriction

* Fix warning treated as error during windows build
2023-05-17 12:37:02 +00:00
Maciej Smyk
6d8b3405ca [DOCS] Precision Control article for 23.0 (#17573)
Port from: https://github.com/openvinotoolkit/openvino/pull/17413

Added separate article on Precision Control (ov::hint::execution_mode and ov::inference_precision properties)
2023-05-17 11:21:05 +00:00
Evgenya Stepyreva
4c2096ad9c Strided Slice fix constant creation (#17557)
* Strided Slice fix constant creation

* Apply suggestions from code review

* Final touches
2023-05-16 13:53:57 +00:00
Aleksandr Voron
0c67b90f47 [CPU][ARM] Dynamic shapes support in ARM transformations (#17517) 2023-05-16 13:10:34 +04:00
Jan Iwaszkiewicz
83f51e0d00 [PyOV][Backport] Remove numpy strides from Tensor creation (#17535)
* [PyOV] Remove numpy strides from Tensor creation

* [PyOV] Add test for stride calculation

* [PyOV] Fix flake issue
2023-05-16 09:04:56 +04:00
Dmitry Kurtaev
8bb2a2a789 [CMake] Add CMAKE_MAKE_PROGRAM arg (#17340)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-15 17:22:27 +04:00
Pawel Raasz
c9cfd6755c [Core] StridedSlice improvements of bound evaluation and constant folding (#17536)
* StridedSlice improvements:
-Bound evaluation for begin, end partial values when ignore mask set.
- Custom constant fold implementation.

* Improve const folding when all begin or end values
are ignored
2023-05-15 12:24:36 +00:00
Karol Blaszczak
c0060aefa7 Prepare "memory_optimization_guide.md" (#17022) (#17498)
---------

Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
2023-05-15 10:48:32 +02:00
Gorokhov Dmitriy
8a97d3c0e1 [CPU] Restore OneDNN InnerProduct primitive os_block computation behavior (#17462) 2023-05-12 15:50:10 +04:00
Wanglei Shen
c5fd3300a2 HOT FIX: disable set_cpu_used in 2023.0 release (#17456)
* disable set_cpu_used in 2023.0 release

* fix code style issue
2023-05-12 14:16:42 +08:00
Mateusz Mikolajczyk
a7f6f5292e Add missing check for special zero (#17479) 2023-05-12 09:30:55 +04:00
Maxim Vafin
804df84f7d Add transformation to convert adaptive pool to reduce (#17478)
* Add transformation to convert adaptive pool to reduce

* Update src/common/transformations/src/transformations/common_optimizations/moc_transformations.cpp

* Add tests and apply feedback

* Simplify if branches
2023-05-11 15:51:26 +00:00
Evgenya Stepyreva
1e49a594f7 [Shape inference] Pooling: Dimension div fix (#17197) (#17471)
* Dimension div fix

* codestyle fixes

* Convolution labels propagation test instances corrected

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-05-11 14:36:17 +04:00
Tatiana Savina
d5ac1c2e5c [DOCS] Port update docs for TF FE (#17464)
* [TF FE] Update docs for TF FE (#17453)

* Update tensorflow_frontend.md

* Update docs/resources/tensorflow_frontend.md

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-11 01:03:47 +04:00
Karol Blaszczak
afb2ae6b7a [DOCS] Update GPU.md (#17400) 2023-05-10 17:30:22 +02:00
Maxim Vafin
c5623b71cf Remove posibility to export to onnx (#17423)
* Remove posibility to export to onnx

* Apply suggestions from code review

* Fix tests and docs

* Workaround function inputs

* Fix code style
2023-05-10 16:35:54 +04:00
Maxim Vafin
152b11e77f Remove section about --use_legacy_frontend for PyTorch models (#17441) 2023-05-09 20:30:27 +04:00
Mateusz Tabaka
5adf3b5ca8 [TF frontend] use InterpolateMode::LINEAR_ONNX if input rank is 4 (#17406)
This change mimicks LinearToLinearONNXReplacer transformation in
legacy frontend, where linear interpolate mode is replaced with
linear_onnx due to performance reasons.

Ticket: CVS-108343
2023-05-09 14:52:35 +02:00
Fang Xu
a2ccbdf86e Update oneTBB2021.2.2 for 2023.0 (#17367)
* update oneTBB2021.2.2 for windows

* update SHA256

* update SHA256

oneTBB https://github.com/oneapi-src/oneTBB/releases/tag/v2021.2.2 (a25ebdf)

* add print for hwloc which is not found

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-05-09 05:49:33 -07:00
Roman Kazantsev
1440b9950f [TF FE] Handle incorrect models (empty, fake) by TF FE (#17408) (#17432)
* [TF FE] Handle incorrect models (empty, fake) by TF FE



* Apply suggestions from code review

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-09 16:30:51 +04:00
Mateusz Tabaka
d88d4d22e8 Update docs for frontend extensions (#17428) 2023-05-09 13:27:40 +02:00
Tatiana Savina
ea79006a0a DOCS Port shift to rst - Model representation doc (#17385)
* DOCS shift to rst - Model representation doc (#17320)

* model representation to rst

* fix indentation

* fix cide tabs

* fix indentation

* change doc

* fix snippets

* fix snippet

* port changes

* dev docs port
2023-05-09 10:33:12 +02:00
Pavel Esir
a4ff3318ea renumber FAQ (#17376) 2023-05-09 11:35:55 +04:00
Anastasiia Pnevskaia
44e7a003e7 Removed checks of unsatisfied dependencies in MO (#16991) (#17419)
* Fixed dependencies check, made unsatisfied dependencies show only in case of error.

* Small fix.

* Test correction.

* Small test correction.

* Temporarily added debug print.

* Debug output.

* Debug output.

* Debug output.

* Test fix.

* Removed debug output.

* Small fix.

* Moved tests to check_info_messages_test.py

* Remove dependies checks from MO.

* Small corrections.
2023-05-09 11:32:14 +04:00
Przemyslaw Wysocki
fa4112593d [Backport] OMZ submodule bump for Python 3.11 (#17325)
* backport

* Update sha
2023-05-08 18:49:28 +02:00
Surya Siddharth Pemmaraju
45e378f189 Added Torchscript backend (#17328)
* Added Torchscript backend

* Added some torchscript backend tests to ci

* Removed tests from CI as torch.compile doesn't support 3.11 currently

* Fixed linter issues

* Addressed PR comments and linter issues
2023-05-08 03:44:10 -07:00
Maciej Smyk
9320cbaa8c [DOCS] Recreation of BDTI PRs - 23.0 (#17383)
Porting: https://github.com/openvinotoolkit/openvino/pull/16913

Recreation of BDTI PRs for master.

Recreated PRs:

Docs: Update Dynamic Shapes documentation #15216
Docs: Edits to Performance Hints and Cumulative Throughput documentation #14793
Docs: Update Devices pages to state improved INT8 performance with 11th & 12th gen devices #12067
2023-05-08 10:36:56 +00:00
Maciej Smyk
718b194ad6 [DOCS] Legacy MO Extensibility update for 23.0
porting: https://github.com/openvinotoolkit/openvino/pull/15931

Divided MO Extensibility article into separate smaller articles,
Applied the suggestion from [DOCS] Better statement about MO extensions as internal API [Recreating #14062] #15679
Recreated images in svg format
Fixing directives
2023-05-08 12:25:37 +02:00
Maxim Vafin
8241540609 [PT FE] Improve exception when decoder cannot trace or script the model (#17338) (#17347)
* [PT FE] Improve exception when decoder cannot trace or script the model

* Add exception in convert_model

* Add test
2023-05-08 09:22:47 +04:00
Maxim Vafin
10d87b7332 [PT FE] Support default strides for avg and max pooling (#17337) (#17348)
* Support default strides for avg and max pooling

* Fix code style

* Remove changes from other ticket
2023-05-08 09:21:53 +04:00
Karol Blaszczak
386d773b33 [DOCS] fix typos in install guides (#17388) 2023-05-08 07:12:38 +02:00
Sun Xiaoxia
a5312f70db fix binding wrong core with latency mode in i9-13900 (#17364) 2023-05-06 17:17:11 +08:00
Roman Kazantsev
8f113ef24e [TF FE] Provide single tensor names for inputs and outputs in SavedModel (#17373)
* [TF FE] Provide single tensor names for inputs and outputs in SavedModel

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

* Xfail some cases due to internal problems in TF

* Xfail other layer test

* Extend documentation for function to adjust tensor names

* Use old path of tf2 layer testing for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-05 19:06:59 +02:00
Sebastian Golebiewski
c651bc5f87 [DOCS] Fix links - port (#17356) 2023-05-05 18:14:38 +02:00
Karol Blaszczak
12aab024d1 [GPU] Update dynamic shape document (#17274) (#17384)
porting: https://github.com/openvinotoolkit/openvino/pull/17384

* Update dynamic shape document for GPU
* Applied review comments

authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2023-05-05 17:58:19 +02:00
Ivan Tikhonov
3978511c5c Fix the names copying in TransposeSinking backward transformations (#17283) (#17344)
* Fix tensor names copying in TS transformations

* added a check that sinking is available for all consumers in TS backward transformations

* codestyle

* Apply review comments, add result sorting by tensor names in graph comparator

* delete debug code

* fix RemoveConsumers method implementation

* fix snippet tests

* use reference instead of raw pointer

* add new transformation tests

* fix transformation tests

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-05-05 17:09:44 +02:00
Chen Xu
0de0efd751 [CPU] Fix kernel precision mismatch in Reduce node (#17372)
* [CPU] Fix kernel precision mismatch in Reduce node

* Apply review comments
2023-05-05 14:39:30 +02:00
Sebastian Golebiewski
53e2997909 DOCS shift to rst (#17377) 2023-05-05 10:55:03 +02:00
Maciej Smyk
7779fea76f DOCS shift to rst - Opsets E for 23.0 (#17365) 2023-05-05 10:17:05 +02:00
Sebastian Golebiewski
c785551b57 DOCS shift to rst (#17346) 2023-05-04 13:29:16 +02:00
Roman Kazantsev
8c95c90e45 [TF FE] Use original input types for SavedModel (#17295) (#17335)
Also, refactor TF FE unit-tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 16:26:33 +04:00
Evgenya Stepyreva
bf829eead4 NMS-5 calculate upper-bound (#17332)
* NMS-5 calculate upper-bound

* Test
2023-05-03 15:22:08 +04:00
Roman Kazantsev
1141e90435 [MO][TF FE] Handle constant with undefined value (#17311) (#17327)
Since TF 2.10 the native model freezing can produce constants with undefined value,
i.e. tensor shape can be any and value is []. In this case the tensor just fills up with
the default value (0 - for numerics, "" - for strings)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 12:09:57 +04:00
Roman Kazantsev
15b62d77cc [TF FE] Added additional pruned inputs for MetaGraph support (#17237) (#17326)
* Added handling of additional pruned inputs
Added possible topology of RestoreV2 -> AssignVariableOp
Added additional checks

* Extended tests coverage

Co-authored-by: Georgy Krivoruchko <georgy.krivoruchko@intel.com>
2023-05-03 12:09:20 +04:00
Maxim Vafin
e6347544e2 Fix issue with Pow when both inputs are scalars (#17305) (#17321)
* Fix issue with Pow when both inputs are scalars

* Fix code style
2023-05-03 11:32:13 +04:00
Anton Voronov
fcf261a048 [DOC] small fix for sparse weights decompression feature documentation (#17316) 2023-05-02 15:50:48 +02:00
Tatiana Savina
bba9f3094b [DOCS] Port docs: opsets, import keyword, deprecated options (#17289)
* Added missing import keyword (#17271)

* [DOCS] shift to rst - opsets N (#17267)

* opset to rst

* change list indentations

* fix formula

* add n operations

* add negative and nonzero

* fix link

* specs to rst

* fix matrixnms path

* change path to if

* fix list

* fix format

* DOCS remove deprecated options (#17167)

* DOCS remove deprecated options

* removed a couple more not actual questions

* remove the whole lines completely

* remove a couple of more deprecations

---------

Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
2023-05-02 14:05:03 +02:00
Sergey Shlyapnikov
aa13ab63f5 [GPU] Use BFS processing order for out_of_order queue (#17304) 2023-05-02 15:25:21 +04:00
Tatiana Savina
8f978d2c60 update OTE and Datumaro links (#17269) (#17310) 2023-05-02 13:14:21 +02:00
Sebastian Golebiewski
a349ba7295 DOCS shift to rst - Opsets H & I - for 23.0 (#17307)
* update

* update

* cpp
2023-05-02 11:16:21 +02:00
Vladimir Paramuzov
73442bbc82 [GPU] Don't throw exception if no devices are found (#17302)
* [GPU] Don't throw exception if no devices are found

* Fix CAPI test
2023-05-01 23:18:51 +04:00
Tatiana Savina
76c237da8b [DOCS] Document Model Optimizer Python API port (#17287)
* [DOCS] Document Model Optimizer Python API (#14380)

* Added MO convert_model() documentation.

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Updated Convert_Model pages for PyTorch and TF with PythonAPI info. United TF2 and TF formats lists.

* Added info on flag params, example_input formats list, small corrections.

* Moved MO python API to separate doc. Small text corrections.

* Added TF types conversion description.

* Removed duplicating info.

* Added description of InputCutInfo types and default onnx opset.

* Small correction.

* Changed type table to bullets, added blank lines.

* Added quote marks.

* Removed redunrant bullets.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review.

* Added new line.

* Apply comments from review.d

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added description of lists of parameters.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* Added details about input_shape, example_input.

* Updated PyTorch page.

* Corrected input_signature description.

* Format correction.

* Format correction.

* Format correction.

* Format correction.

* Small correction.

* Small correction.

* Removed input_signature param description.

* Updated text.

* Small correction.

* Small correction.

* Removed not needed examples.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added new line.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Added titles of examples.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix first paragraph

* Update MO_Python_API.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-05-01 15:19:11 +04:00
Roman Lyamin
aebea2337e [GPU] Coverity fixes (#17241) (#17281) 2023-05-01 14:35:50 +04:00
Ilya Lavrenov
29c672d6d8 Fixed Python API build for Ubuntu 22.04 with python3.11 (#17297)
* Fixed Python API build for Ubuntu 22.04 with python3.11

* Update ONNX CI docker to test python 3.11 and system pybind11
2023-04-29 03:38:01 +04:00
Maksim Doronin
1f790df33c Fix enable_plugins_xml (#17293) 2023-04-29 00:02:43 +04:00
Ilya Lavrenov
5625424b91 Fixes for OpenCL via brew package (#17273) 2023-04-28 18:10:30 +04:00
Tatiana Savina
c7d0df39b5 remove pre-release note (#17265) 2023-04-28 13:04:31 +02:00
Alina Kladieva
85b57ea2bf Bump Azure refs to 2023/0 (#17264) 2023-04-27 22:09:27 +04:00
1530 changed files with 136777 additions and 24812 deletions


@@ -141,7 +141,6 @@ jobs:
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_TESTS=ON
-DENABLE_INTEL_GPU=ON
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -105,7 +105,7 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
addToPath: true
disableDownloadFromRegistry: false
architecture: 'x64'
@@ -245,6 +245,7 @@ jobs:
-DCMAKE_CXX_COMPILER=clang++
-DCMAKE_C_COMPILER=clang
-DENABLE_SYSTEM_SNAPPY=ON
-DENABLE_SYSTEM_TBB=ON
-DCPACK_GENERATOR=$(CMAKE_CPACK_GENERATOR)
-DBUILD_nvidia_plugin=OFF
-S $(REPO_DIR)
@@ -292,7 +293,10 @@ jobs:
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Layer Tests'
- script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
- script: |
set -e
python3 -m pip install $(INSTALL_DIR)/tools/openvino-*
python3 -m pip install $(INSTALL_DIR)/tools/openvino_dev-*
displayName: 'Install python wheels'
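The hunk above replaces the PyPI `openvino-dev` install with wheels built earlier in the same pipeline. A minimal sketch of that pattern — the `INSTALL_DIR` layout here is hypothetical, and `--no-index` is used so pip can never silently fall back to PyPI (the command is echoed rather than executed):

```shell
set -e
# hypothetical install tree produced by an earlier build stage
INSTALL_DIR="$(mktemp -d)"
mkdir -p "$INSTALL_DIR/tools"
touch "$INSTALL_DIR/tools/openvino-2023.0.0-py3-none-any.whl" \
      "$INSTALL_DIR/tools/openvino_dev-2023.0.0-py3-none-any.whl"

# --no-index forbids PyPI entirely; --find-links points pip at the local wheels
CMD="python3 -m pip install --no-index --find-links=$INSTALL_DIR/tools openvino openvino-dev"
echo "$CMD"
```

Globbing `openvino-*` and `openvino_dev-*` directly, as the pipeline does, achieves the same effect; `--no-index` is simply a stricter variant of the idea.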
- script: |
@@ -305,7 +309,7 @@ jobs:
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
@@ -315,7 +319,7 @@ jobs:
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
# For python imports to import pybind_mock_frontend
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
export PYTHONPATH=$(INSTALL_TEST_DIR):$(INSTALL_DIR)/python/python3.8:$PYTHONPATH
python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
@@ -325,7 +329,7 @@ jobs:
displayName: 'Python API 2.0 Tests'
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
@@ -366,7 +370,7 @@ jobs:
displayName: 'Build cpp samples - gcc'
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -b $(BUILD_DIR)/cpp_samples_clang
env:
env:
CC: clang
CXX: clang++
displayName: 'Build cpp samples - clang'


@@ -108,17 +108,17 @@ jobs:
- checkout: self
clean: 'true'
submodules: 'true'
path: openvino
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
python3 -m pip install --upgrade pip
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons crossbuild-essential-arm64
# generic dependencies
sudo -E apt --assume-yes install cmake ccache
# Speed up build
sudo -E apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
@@ -126,25 +126,60 @@ jobs:
sudo cp -v ninja /usr/local/bin/
displayName: 'Install dependencies'
- task: CMake@1
inputs:
cmakeArgs: >
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_PYTHON=OFF
-DENABLE_TESTS=ON
-DENABLE_DATA=OFF
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC)
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
-S $(OPENVINO_REPO_DIR)
- script: |
git submodule update --init -- $(OPENVINO_REPO_DIR)/src/plugins
git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/gtest
displayName: 'Init submodules for non Conan dependencies'
- script: |
python3 -m pip install conan
# generate build profile
conan profile detect
# generate host profile for linux_arm64
echo "include(default)" > $(BUILD_OPENVINO)/linux_arm64
echo "[buildenv]" >> $(BUILD_OPENVINO)/linux_arm64
echo "CC=aarch64-linux-gnu-gcc" >> $(BUILD_OPENVINO)/linux_arm64
echo "CXX=aarch64-linux-gnu-g++" >> $(BUILD_OPENVINO)/linux_arm64
# install OpenVINO dependencies
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_C_COMPILER_LAUNCHER=ccache
conan install $(OPENVINO_REPO_DIR)/conanfile.txt \
-pr:h $(BUILD_OPENVINO)/linux_arm64 \
-s:h arch=armv8 \
-of $(BUILD_OPENVINO) \
-b missing
env:
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Install conan and dependencies'
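The conan step above builds a cross-compilation host profile by hand before installing dependencies. A small sketch of just the profile-generation part, written to a temporary directory standing in for the pipeline's `$(BUILD_OPENVINO)`:

```shell
set -e
BUILD_OPENVINO="$(mktemp -d)"   # stand-in for the pipeline variable

# host profile: inherit the detected defaults, then pin the aarch64 cross toolchain
{
  echo "include(default)"
  echo "[buildenv]"
  echo "CC=aarch64-linux-gnu-gcc"
  echo "CXX=aarch64-linux-gnu-g++"
} > "$BUILD_OPENVINO/linux_arm64"

cat "$BUILD_OPENVINO/linux_arm64"
# the real pipeline then consumes this profile with:
#   conan install conanfile.txt -pr:h "$BUILD_OPENVINO/linux_arm64" \
#     -s:h arch=armv8 -of "$BUILD_OPENVINO" -b missing
```

Keeping the build profile auto-detected (`conan profile detect`) while overriding only the host side is what lets the same machine build native tools and ARM binaries in one pass.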
- script: |
source $(BUILD_OPENVINO)/conanbuild.sh
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
-DENABLE_CPPLINT=OFF \
-DENABLE_PYTHON=OFF \
-DENABLE_TESTS=ON \
-DENABLE_DATA=OFF \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_PUGIXML=ON \
-DCMAKE_TOOLCHAIN_FILE=$(BUILD_OPENVINO)/conan_toolchain.cmake \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-S $(OPENVINO_REPO_DIR) \
-B $(BUILD_OPENVINO)
displayName: 'CMake OpenVINO ARM plugin'
source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
displayName: 'CMake configure'
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE)
env:
@@ -152,13 +187,13 @@ jobs:
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Build OpenVINO ARM plugin'
displayName: 'Build OpenVINO Runtime'
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
displayName: 'Install OpenVINO ARM plugin'
displayName: 'Install OpenVINO Runtime'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: $(Build.ArtifactStagingDirectory)
ArtifactName: 'openvino_aarch64_linux'
displayName: 'Publish OpenVINO AArch64 linux package'
displayName: 'Publish OpenVINO Runtime for ARM'


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github


@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
variables:
- group: github


@@ -42,11 +42,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
jobs:
- job: CUDAPlugin_Lin


@@ -34,7 +34,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Lin_Debian
@@ -262,9 +262,9 @@ jobs:
sudo apt-get install --no-install-recommends gnupg wget -y
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2022 focal main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2022.list
sudo apt-get install openvino -y || exit 1
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get install openvino-2023.0.1 -y || exit 1
# install our local one and make sure the conflicts are resolved
sudo apt-get install --no-install-recommends dpkg-dev -y
rm -r _CPack_Packages
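The apt hunk above moves from the 2022 repository's `focal` suite to the 2023 repository's `ubuntu20` suite and pins an exact package version. A sketch of how the source entry is composed — commands are echoed rather than run, since the real step needs root and the Intel GPG key imported first:

```shell
set -e
# the suite now tracks the Ubuntu version ("ubuntu20") rather than the codename ("focal")
SUITE=ubuntu20
ENTRY="deb https://apt.repos.intel.com/openvino/2023 $SUITE main"
echo "$ENTRY"
# pinning an exact release avoids silently picking up a newer 2023.x package:
echo "sudo apt-get install openvino-2023.0.1 -y"
```

In the pipeline this entry is written to `/etc/apt/sources.list.d/intel-openvino-2023.list` and `apt-get update` is scoped to that list only.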


@@ -4,7 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
# ref: releases/2023/0
jobs:
- job: Lin_lohika


@@ -35,13 +35,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Win


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github
@@ -116,7 +117,7 @@ jobs:
-G Ninja ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
-DENABLE_PLUGINS_XML=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_PROFILING_ITT=ON ^
@@ -153,7 +154,6 @@ jobs:
-DVERBOSE_BUILD=ON ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
-DENABLE_PROFILING_ITT=OFF ^
-DSELECTIVE_BUILD=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^


@@ -1,4 +1,4 @@
FROM ubuntu:22.04
FROM ubuntu:23.04
LABEL version=2021.03.30.1
@@ -38,6 +38,7 @@ RUN apt-get update && apt-get -y --no-install-recommends install \
python3 \
python3-pip \
python3-dev \
pybind11-dev \
python3-virtualenv \
cython3 \
tox && \
@@ -71,5 +72,5 @@ RUN ninja install
WORKDIR /openvino/src/bindings/python
ENV OpenVINO_DIR=/openvino/dist/runtime/cmake
ENV LD_LIBRARY_PATH=/openvino/dist/runtime/lib/intel64:/openvino/dist/runtime/3rdparty/tbb/lib
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.10:${PYTHONPATH}
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.11:${PYTHONPATH}
CMD tox
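The Dockerfile hunk above hard-codes the interpreter directory (`python3.10` → `python3.11`), which must be bumped on every base-image change. Assuming the layout stays `python_api/python<major.minor>` (and using `Release` as a stand-in for `${BUILD_TYPE}`), one way to avoid that is to derive the version from the interpreter itself:

```shell
set -e
# derive a "3.11"-style version string from whichever python3 is installed
PYVER="$(python3 -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]}")')"
export PYTHONPATH="/openvino/bin/intel64/Release/python_api/python${PYVER}:${PYTHONPATH:-}"
echo "$PYVER"
```

This keeps the `ENV PYTHONPATH` line stable across base-image upgrades at the cost of resolving the path at container start instead of build time.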

View File

@@ -85,8 +85,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
sudo apt --assume-yes install clang-14 libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt

.gitignore

@@ -26,6 +26,7 @@ temp/
.repo/
CMakeLists.txt.user
docs/IE_PLUGIN_DG/html/
CMakeUserPresets.json
*.project
*.cproject


@@ -40,8 +40,6 @@ endif()
# resolving dependencies for the project
message (STATUS "CMAKE_VERSION ......................... " ${CMAKE_VERSION})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "CMAKE_SOURCE_DIR ...................... " ${CMAKE_SOURCE_DIR})
message (STATUS "OpenVINO_SOURCE_DIR ................... " ${OpenVINO_SOURCE_DIR})
message (STATUS "OpenVINO_BINARY_DIR ................... " ${OpenVINO_BINARY_DIR})
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
@@ -66,7 +64,7 @@ endif()
if(CMAKE_TOOLCHAIN_FILE)
message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
endif()
if(OV_GLIBC_VERSION)
if(NOT OV_GLIBC_VERSION VERSION_EQUAL 0.0)
message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
endif()


@@ -1,55 +1,88 @@
# How to contribute to the OpenVINO repository
# Contributing to OpenVINO
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, follow good pull request practices, check your changes with our tests, and more.
## How to contribute to the OpenVINO project
OpenVINO™ is always looking for opportunities to improve, and your contributions
play a big role in this process. There are several ways you can make the
product better:
## Before you start contributing you should
### Provide Feedback
- Make sure you agree to contribute your code under the [OpenVINO™ (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- Figure out what you're going to contribute. If you don't know what to work on, navigate to the [Github "Issues" tab](https://github.com/openvinotoolkit/openvino/issues). Make sure that nobody is already working on it; if someone is, you can provide support or suggestions in the issue or in the linked pull request.
- If you are going to fix a bug, check that it still exists in the latest release. You can do this by building the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, like 2020.2 (more details about the [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
* **Report bugs / issues**
If you experience faulty behavior in OpenVINO or its components, you can
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
in the GitHub issue tracker.
* **Propose new features / improvements**
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
If your idea is already well defined, you can also create a
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
If your points are especially well aligned with the product vision, they will be included in the
[development roadmap](./ROADMAP.md).
User feedback is crucial for OpenVINO development, and even if your input is not immediately prioritized,
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
### Contribute Code Changes
* **Fix Bugs or Develop New Features**
If you want to help improve OpenVINO, choose one of the issues reported in
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
If the feature you want to develop is more complex or not well defined by the reporter,
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
with OpenVINO developers first. Before creating a new PR, check if nobody is already
working on it; if someone is, you may still help after aligning with the other developer.
Importantly, always check that the change has not already been implemented before you start
working on it! Build OpenVINO from the latest master branch and make sure that it still needs
your changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
* **Develop a New Device Plugin**
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
### Improve documentation
* **OpenVINO developer documentation** is contained entirely in this repository, under the
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
* **User documentation** is built from several sources and published at
[docs.openvino.ai](https://docs.openvino.ai), which is the recommended place for reading
these documents. Use the files maintained in this repository only for editing purposes.
* The easiest way to help with documentation is to review it and provide feedback on the
existing articles. Whether you notice a mistake, see the possibility of improving the text,
or think more information should be added, you can reach out to any of the documentation
contributors to discuss the potential changes.
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
## "Fork & Pull Request model" for code contribution
### Promote and Support OpenVINO
### The instruction in brief
* **Popularize OpenVINO**
Articles, tutorials, blog posts, demos, videos, and any other involvement
in the OpenVINO community is always a welcome contribution. If you discuss
or present OpenVINO on various social platforms, you are raising awareness
of the product among A.I. enthusiasts and enabling other people to discover
the toolkit. Feel free to reach out to OpenVINO developers if you need help
with making such community-based content.
- Register at GitHub. Create your fork of OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in a Git configuration according to GitHub account (see [https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It could be a bugfix or some new code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (with a meaningful name) from the base branch you chose.
- Modify / add the code following our [Coding Style Guide](./docs/dev/coding_style.md).
- If you want to add a new sample, please look at this [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
- If you want to contribute to the documentation and add a new guide, follow the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
- Run the test suite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`), and push your branch to your GitHub fork; then create a pull request from your branch to the base branch (see [https://help.github.com/articles/using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
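The steps above can be sketched as a command sequence; the branch name and `<your-username>` placeholder are illustrative:

```sh
# one-time setup: clone your fork and track the upstream repository
git clone https://github.com/<your-username>/openvino.git
cd openvino
git remote add upstream https://github.com/openvinotoolkit/openvino.git

# start a change from the chosen base branch
git fetch upstream
git checkout -b my-meaningful-branch-name upstream/master

# ...edit code, build, and run the test suite locally...

# keep the branch current with the base branch and publish it to your fork
git fetch upstream && git merge upstream/master
git push origin my-meaningful-branch-name
# then open a pull request from your fork on GitHub
```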
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- One PR, one issue.
- Make sure your change builds cleanly on your local system.
- Choose the right base branch (see [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Update documentation using [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) if needed.
- Cover your changes with tests.
- Add license at the top of new files [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add enough information: a meaningful title, the reason why you made the commit, and a link to the issue page if one exists.
- Remove changes unrelated to the PR.
- If the work is still in progress and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
* **Help Other Community Members**
If you are an experienced OpenVINO user and want to help, you can always
share your expertise with the community. Check GitHub Discussions and
Issues to see if you can help someone.
## Testing and merging pull requests
## License
Your pull request will be automatically tested by OpenVINO™'s precommit (testing status is automatically reported as "green" or "red" circles in the precommit steps on the PR page). If any builders have failed, you need to fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!
## Merging PR
When the reviewer accepts the pull request and the pre-commit shows a "green" status, the review status is set to "Approved", which signals to the OpenVINO™ maintainers that they can merge your pull request.
By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.

CONTRIBUTING_DOCS.md Normal file
View File

@@ -0,0 +1,111 @@
# OpenVINO Documentation Guide
## Basic article structure
OpenVINO documentation is built with Sphinx from reStructuredText sources,
which means the basic reStructuredText formatting rules apply:
### White Spaces
OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.
### Headings (including the article title)
Headings are made by "underlining" text with punctuation marks (at least as
many marks as there are characters in the heading text). We use the following convention:
```
H1
====================
H2
####################
H3
++++++++++++++++++++
H4
--------------------
H5
....................
```
### Line length
In programming, a limit of 80 characters per line is a common best-known method (BKM),
and it works fairly well for natural-language text too. For this reason, we aim at lines of around
70 to 100 characters. The limit is not a strict rule but rather a guideline to
follow in most cases. The breaks will not translate to html, and rightly so, but they will
make reading and editing documents in GitHub or an editor much easier.
### Tables
Tables may be difficult to implement well on websites. For example, longer portions
of text, like descriptions, may make them difficult to read (e.g. improper cell
widths or heights). Complex tables may also be difficult to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:
* For very big and complex data sets: use a list instead of a table or remove
the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
use the reStructuredText list table format.
* For medium and small tables, use the reStructuredText grid or simple table formats.
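For reference, a minimal reStructuredText list table, which keeps long cell text readable in source, could look like this (the content is illustrative):

```rst
.. list-table:: Table formats and when to use them
   :header-rows: 1
   :widths: 30 70

   * - Format
     - Best suited for
   * - list table
     - medium tables with long text in cells
   * - grid / simple table
     - small tables that stay readable in source
```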
## Cross-linking
There are several directives Sphinx uses for linking, each has its purpose and format.
Follow these guidelines for consistent results:
* Avoid absolute references to internal documents as much as possible (link to source, not html).
* Note that Sphinx uses the backtick character and not the apostrophe => ` vs. '
* When a file path starts at the current directory, put "./" at its beginning.
* Always add a space before the opening angle bracket ("<") for target files.
Use the following formatting for different links:
* link to an external page / file
* `` `text <url>`__ ``
* use a double underscore for consistency
* link to an internal documentation page / file
* `` :doc:`a docs page <relative file path>` ``
* Link to an rst or md file within our documentation, so that it renders properly in html
* link to a header on the same page
* `` `a header in the same article <this-is-section-header-title>`__ ``
* anchors are created automatically for all existing headers
* such anchor looks like the header, with minor adjustments:
* all letters are lower case,
* remove all special glyphs, like brackets,
* replace spaces with hyphens
* Create an anchor in an article
* `` .. _anchor-in-the-target-article: ``
* put it before the header to which you want to link
* See the rules for naming anchors / labels at the bottom of this article
* link to an anchor on a different page in our documentation
* `` :ref:`the created anchor <anchor-in-the-target-article>` ``
* link to the anchor using just its name
* anchors / labels
Sphinx uses labels to create html anchors, which can be linked to from anywhere in documentation.
Although they may be put at the top of any article to make linking to it very easy, we do not use
this approach. Every label definition starts with an underscore; the underscore is not used in links.
Most importantly, every label needs to be globally unique, so it is always good
practice to start labels with a clear identifier of the article they reside in.
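The anchor-derivation rules above (lowercase, drop special glyphs, replace spaces with hyphens) can be sketched as a small helper. This function is purely illustrative and is not part of the Sphinx toolchain:

```python
import re


def header_to_anchor(header: str) -> str:
    """Approximate the html anchor derived from a header."""
    text = header.lower()
    # drop special glyphs such as brackets; keep letters, digits, spaces, hyphens
    text = re.sub(r"[^a-z0-9\s-]", "", text)
    # collapse whitespace runs into single hyphens
    return re.sub(r"\s+", "-", text.strip())
```

For example, `header_to_anchor("Line length")` yields `"line-length"`.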

CONTRIBUTING_PR.md Normal file
View File

@@ -0,0 +1,63 @@
# How to Prepare a Good PR
OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.
## General Rules of a Good Pull Request
* Create your own fork of the repository and use it to create PRs.
Avoid creating change branches in the main repository.
* Choose a proper branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
It helps to track changes later. If your changes cover a particular component,
you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
OpenVINO, provide the information in proper articles. You can do it yourself,
or contact one of OpenVINO documentation contributors to work together on
developing the right content.
* For Work In Progress, or checking test results early, use a Draft PR.
## Ensure Change Quality
Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch will
automatically trigger the checks, so you don't need to recreate the PR; just wait
for the updated results.
Regardless of the automated tests, you should ensure the quality of your changes:
* Test your changes locally:
* Make sure to double-check your code.
* Run tests locally to identify and fix potential issues (execute test binaries
from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``)
* Before creating a PR, make sure that your branch is up to date with the latest
state of the branch you want to contribute to (e.g. ``git fetch upstream && git merge upstream/master``).
## Branching Policy
* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``.
* The final release each year is considered a Long Term Support version,
which means it remains active.
* Contributions are accepted only to active branches, which are:
* the "master" branch for future releases,
* the most recently published version for fixes,
* LTS versions (for two years from their release dates).
## Need Additional Help? Check these Articles
* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, have a look at the guide for contributing
to C++/C/Python IE samples, and add the license statement at the top of new files
(see the existing C++ and Python samples for examples).

View File

@@ -2,14 +2,14 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.3.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino/badges/version.svg)
[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino)
[![brew Status](https://img.shields.io/homebrew/v/openvino)](https://formulae.brew.sh/formula/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
[![Anaconda Downloads](https://anaconda.org/conda-forge/openvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
[![brew Downloads](https://img.shields.io/homebrew/installs/dy/openvino)](https://formulae.brew.sh/formula/openvino)
</div>
## Contents:
@@ -70,24 +70,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
@@ -105,22 +105,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -157,10 +157,10 @@ The list of OpenVINO tutorials:
## System requirements
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_raspbian.html)
- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
@@ -189,7 +189,6 @@ Report questions, issues and suggestions, using:
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
@@ -197,7 +196,7 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/nightly/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples

View File

@@ -98,10 +98,10 @@ function(ov_download_tbb)
# TODO: add target_path to be platform specific as well, to avoid following if
# build oneTBB 2021.2.1 with Visual Studio 2019 (MSVC 14.21)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win.zip"
ARCHIVE_WIN "oneapi-tbb-2021.2.2-win.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "d81591673bd7d3d9454054642f8ef799e1fdddc7b4cee810a95e6130eb7323d4"
SHA256 "103b19a8af288c6a7d83ed3f0d2239c4afd0dd189fc12aad1d34b3c9e78df94b"
USE_NEW_LOCATION TRUE)
elseif(ANDROID AND X86_64)
RESOLVE_DEPENDENCY(TBB
@@ -327,8 +327,8 @@ if(ENABLE_INTEL_GNA)
GNA_LIB_DIR
libGNA_INCLUDE_DIRS
libGNA_LIBRARIES_BASE_PATH)
set(GNA_VERSION "03.05.00.1906")
set(GNA_HASH "4a5be86d9c026b0e10afac2a57fc7c99d762b30e3d506abb3a3380fbcfe2726e")
set(GNA_VERSION "03.05.00.2116")
set(GNA_HASH "960350567702bda17276ac4c060d7524fb7ce7ced785004bd861c81ff2bfe2c5")
set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
if(WIN32)

View File

@@ -111,8 +111,8 @@ else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
# Ninja-Multi specific, see:
if(CMAKE_GENERATOR STREQUAL "Ninja Multi-Config")
# 'Ninja Multi-Config' specific, see:
# https://cmake.org/cmake/help/latest/variable/CMAKE_DEFAULT_BUILD_TYPE.html
set(CMAKE_DEFAULT_BUILD_TYPE "Release" CACHE STRING "CMake default build type")
elseif(NOT OV_GENERATOR_MULTI_CONFIG)
@@ -240,7 +240,7 @@ if(ENABLE_LTO)
LANGUAGES C CXX)
if(NOT IPO_SUPPORTED)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optmization" FORCE)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optimization" FORCE)
message(WARNING "IPO / LTO is not supported: ${OUTPUT_MESSAGE}")
endif()
endif()
@@ -250,8 +250,8 @@ endif()
macro(ov_install_static_lib target comp)
if(NOT BUILD_SHARED_LIBS)
get_target_property(target_type ${target} TYPE)
if(${target_type} STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL FALSE)
if(target_type STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})

View File

@@ -4,23 +4,28 @@
if(WIN32)
set(PROGRAMFILES_ENV "ProgramFiles(X86)")
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")
# check that PROGRAMFILES_ENV is defined, because in case of cross-compilation for Windows
# we don't have such variable
if(DEFINED ENV{PROGRAMFILES_ENV})
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()
set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")
find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")
message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()
if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")
if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
endif()
endif()
endif()

View File

@@ -4,8 +4,13 @@
macro(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160#remarks
set(FUZZING_COMPILER_FLAGS "/fsanitize=fuzzer")
elseif(OV_COMPILER_IS_CLANG)
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}")
@@ -20,6 +25,10 @@ function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
if(ENABLE_FUZZING)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# no extra flags are required
elseif(OV_COMPILER_IS_CLANG)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
endif()
endif()
endfunction(add_fuzzer)

View File

@@ -12,23 +12,17 @@ include(CheckCXXCompilerFlag)
# Defines ie_c_cxx_deprecated varaible which contains C / C++ compiler flags
#
macro(ov_disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated "/Qdiag-disable:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated "-diag-disable=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -49,24 +43,18 @@ endmacro()
# Defines ie_c_cxx_deprecated_no_errors variable which contains C / C++ compiler flags
#
macro(ov_deprecated_no_errors)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated_no_errors "/Qdiag-warning:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated_no_errors "-diag-warning=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated_no_errors)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -101,23 +89,21 @@ endmacro()
# Provides SSE4.2 compilation flags depending on an OS and a compiler
#
macro(ie_sse42_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxSSE4.2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
set(${flags} -xSSE4.2)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xSSE4.2)
else()
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
endif()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -127,20 +113,18 @@ endmacro()
# Provides AVX2 compilation flags depending on an OS and a compiler
#
macro(ie_avx2_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCORE-AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCORE-AVX2)
else()
set(${flags} -mavx2 -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx2 -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -151,24 +135,18 @@ endmacro()
# depending on an OS and a compiler
#
macro(ie_avx512_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCOMMON-AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCOMMON-AVX512)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(${flags} -mavx512f -mfma)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Clang|AppleClang)$")
set(${flags} -mavx512f -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx512f -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -265,8 +243,10 @@ endfunction()
function(ov_force_include target scope header_file)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
target_compile_options(${target} ${scope} /FI"${header_file}")
else()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${target} ${scope} -include "${header_file}")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endfunction()
@@ -318,11 +298,11 @@ set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(CMAKE_CL_64)
# Default char Type Is unsigned
# ie_add_compiler_flags(/J)
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
endif()
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# Common options / warnings enabled
#
@@ -335,16 +315,14 @@ if(WIN32)
# This option helps ensure the fewest possible hard-to-find code defects. Similar to -Wall on GNU / Clang
ie_add_compiler_flags(/W3)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
# Handle Large Addresses
@@ -361,42 +339,62 @@ if(WIN32)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /WX")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /WX")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /WX")
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
endif()
#
# Disable noisy warnings
#
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 161: unrecognized pragma
# 177: variable was declared but never referenced
# 556: not matched type of assigned function pointer
# 1744: field of class type without a DLL interface used in a class with a DLL interface
# 1879: unimplemented pragma ignored
# 2586: decorated name length exceeded, name was truncated
# 2651: attribute does not apply to any entity
# 3180: unrecognized OpenMP pragma
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
# 15335: was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,1879,2586,2651,3180,11075,15335)
endif()
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
#
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
# In order to avoid issues with ninja we are replacing default flag instead of having two of them
# and observing warning D9025 about flag override
#
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
# Warnings as errors
#
if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
#
# Disable noisy warnings
#
# 161: unrecognized pragma
ie_add_compiler_flags(/Qdiag-disable:161)
# 177: variable was declared but never referenced
ie_add_compiler_flags(/Qdiag-disable:177)
# 556: not matched type of assigned function pointer
ie_add_compiler_flags(/Qdiag-disable:556)
# 1744: field of class type without a DLL interface used in a class with a DLL interface
ie_add_compiler_flags(/Qdiag-disable:1744)
# 1879: unimplemented pragma ignored
ie_add_compiler_flags(/Qdiag-disable:1879)
# 2586: decorated name length exceeded, name was truncated
ie_add_compiler_flags(/Qdiag-disable:2586)
# 2651: attribute does not apply to any entity
ie_add_compiler_flags(/Qdiag-disable:2651)
# 3180: unrecognized OpenMP pragma
ie_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
else()
#
# Common enabled warnings


@@ -5,7 +5,9 @@
include(CheckCXXCompilerFlag)
if (ENABLE_SANITIZER)
if (WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# the flag is available since MSVC 2019 16.9
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160
check_cxx_compiler_flag("/fsanitize=address" SANITIZE_ADDRESS_SUPPORTED)
if (SANITIZE_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /fsanitize=address")
@@ -14,21 +16,23 @@ if (ENABLE_SANITIZER)
"Please, check requirements:\n"
"https://github.com/openvinotoolkit/openvino/wiki/AddressSanitizer-and-LeakSanitizer")
endif()
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=address")
check_cxx_compiler_flag("-fsanitize-recover=address" SANITIZE_RECOVER_ADDRESS_SUPPORTED)
if (SANITIZE_RECOVER_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=address")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
if (ENABLE_UB_SANITIZER)
if (WIN32)
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows")
if(ENABLE_UB_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
endif()
# TODO: Remove -fno-sanitize=null as thirdparty/ocl/clhpp_headers UBSAN compatibility resolved:
# https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
# Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
@@ -48,43 +52,50 @@ if (ENABLE_UB_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-sanitize=function")
endif()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 fix
if(CMAKE_COMPILER_IS_GNUCXX)
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 is fixed
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -Wno-maybe-uninitialized")
endif()
check_cxx_compiler_flag("-fsanitize-recover=undefined" SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if (SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if(SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=undefined")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=undefined")
endif()
if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
if(ENABLE_THREAD_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "Thread sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
# common sanitizer options
if (DEFINED SANITIZER_COMPILER_FLAGS)
if(DEFINED SANITIZER_COMPILER_FLAGS)
# ensure symbols are present
if (NOT WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -g -fno-omit-frame-pointer")
if(NOT OV_COMPILER_IS_CLANG)
if(CMAKE_COMPILER_IS_GNUCXX)
# GPU plugin tests compilation is slow with -fvar-tracking-assignments on GCC.
# Clang has no var-tracking-assignments.
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-var-tracking-assignments")
endif()
# prevent unloading libraries at runtime, so sanitizer can resolve their symbols
if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(NOT OV_COMPILER_IS_APPLECLANG)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
if(OV_COMPILER_IS_CLANG AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
endif()
else()
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")


@@ -2,61 +2,68 @@
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wformat -Wformat-security")
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG OR
(UNIX AND CMAKE_CXX_COMPILER_ID STREQUAL "Intel"))
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wformat -Wformat-security")
if (NOT ENABLE_SANITIZER)
if(EMSCRIPTEN)
# emcc does not support fortification, see:
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
# ASan does not support fortification https://github.com/google/sanitizers/issues/247
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
endif()
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_COMPILER_IS_GNUCXX)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
endif()
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
# Remove all symbol table and relocation information from the executable
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -s")
endif()
if(NOT MINGW)
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(OV_COMPILER_IS_CLANG)
if(EMSCRIPTEN)
# emcc does not support fortification
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /guard:cf")
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
if(ENABLE_QSPECTRE)
ie_add_compiler_flags(/Qspectre)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /sdl /guard:cf")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
if(ENABLE_QSPECTRE)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /Qspectre")
endif()
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
unset(OV_C_CXX_FLAGS)
unset(OV_LINKER_FLAGS)


@@ -641,7 +641,7 @@ _repository = None
# Files to exclude from linting. This is set by the --exclude flag.
_excludes = None
# Whether to supress PrintInfo messages
# Whether to suppress PrintInfo messages
_quiet = False
# The allowed line length of files.
@@ -752,7 +752,7 @@ def ParseNolintSuppressions(filename, raw_line, linenum, error):
'Unknown NOLINT error category: %s' % category)
def ProcessGlobalSuppresions(lines):
def ProcessGlobalSuppressions(lines):
"""Updates the list of global error suppressions.
Parses any lint directives in the file that have global effect.
@@ -780,7 +780,7 @@ def IsErrorSuppressedByNolint(category, linenum):
"""Returns true if the specified error category is suppressed on this line.
Consults the global error_suppressions map populated by
ParseNolintSuppressions/ProcessGlobalSuppresions/ResetNolintSuppressions.
ParseNolintSuppressions/ProcessGlobalSuppressions/ResetNolintSuppressions.
Args:
category: str, the category of the error.
@@ -6203,7 +6203,7 @@ def ProcessFileData(filename, file_extension, lines, error,
ResetNolintSuppressions()
CheckForCopyright(filename, lines, error)
ProcessGlobalSuppresions(lines)
ProcessGlobalSuppressions(lines)
RemoveMultiLineComments(filename, lines, error)
clean_lines = CleansedLines(lines)


@@ -74,7 +74,12 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG;NOT WIN32" OFF)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC" AND MSVC_VERSION GREATER_EQUAL 1930)
# Visual Studio 2022: 1930-1939 = VS 17.0 (v143 toolset)
set(_msvc_version_2022 ON)
endif()
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG OR _msvc_version_2022" OFF)
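The version gate added above relies on MSVC's numbering scheme; a small Python sketch of that check (the helper name is hypothetical):

```python
def msvc_supports_libfuzzer(msvc_version: int) -> bool:
    # The CMake check above enables fuzzing for MSVC_VERSION >= 1930,
    # i.e. Visual Studio 2022 (versions 1930-1939, v143 toolset) and newer.
    return msvc_version >= 1930

print(msvc_supports_libfuzzer(1935))  # True
```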
#
# Check features


@@ -171,7 +171,7 @@ macro(ov_add_frontend)
endforeach()
# Disable all warnings for generated code
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED TRUE)
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED ON)
# Create library
add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}
@@ -204,8 +204,7 @@ macro(ov_add_frontend)
ov_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})
target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES} PUBLIC openvino::runtime)
ov_add_library_version(${TARGET_NAME})
# WA for TF frontends which always require protobuf (not protobuf-lite)
@@ -216,23 +215,34 @@ macro(ov_add_frontend)
if(proto_files)
if(OV_FRONTEND_PROTOBUF_LITE)
if(NOT protobuf_lite_installed)
ov_install_static_lib(${Protobuf_LITE_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_lite_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LITE_LIBRARIES})
set(protobuf_target_name libprotobuf-lite)
set(protobuf_install_name "protobuf_lite_installed")
else()
if(NOT protobuf_installed)
ov_install_static_lib(${Protobuf_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LIBRARIES})
set(protobuf_target_name libprotobuf)
set(protobuf_install_name "protobuf_installed")
endif()
if(ENABLE_SYSTEM_PROTOBUF)
# use imported target name with namespace
set(protobuf_target_name "protobuf::${protobuf_target_name}")
endif()
# prptobuf generated code emits -Wsuggest-override error
link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
# protobuf generated code emits -Wsuggest-override error
if(SUGGEST_OVERRIDE_SUPPORTED)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-suggest-override)
endif()
# install protobuf if it is not installed yet
if(NOT ${protobuf_install_name})
if(ENABLE_SYSTEM_PROTOBUF)
# we have to add find_package(Protobuf) to the OpenVINOConfig.cmake for static build
# no needs to install protobuf
else()
ov_install_static_lib(${protobuf_target_name} ${OV_CPACK_COMP_CORE})
set("${protobuf_install_name}" ON CACHE INTERNAL "" FORCE)
endif()
endif()
endif()
if(flatbuffers_schema_files)


@@ -2,41 +2,6 @@
# SPDX-License-Identifier: Apache-2.0
#
include(target_flags)
# TODO: remove this function: we must not have conditions for particular OS names or versions
# cmake needs to look at /etc files only when we build for Linux on Linux
if(CMAKE_HOST_LINUX AND LINUX)
function(get_linux_name res_var)
if(EXISTS "/etc/lsb-release")
# linux version detection using cat /etc/lsb-release
file(READ "/etc/lsb-release" release_data)
set(name_regex "DISTRIB_ID=([^ \n]*)\n")
set(version_regex "DISTRIB_RELEASE=([0-9]+(\\.[0-9]+)?)")
else()
execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;
OUTPUT_VARIABLE release_data
RESULT_VARIABLE result)
string(REPLACE "Red Hat" "CentOS" release_data "${release_data}")
set(name_regex "NAME=\"([^ \"\n]*).*\"\n")
set(version_regex "VERSION=\"([0-9]+(\\.[0-9]+)?)[^\n]*\"")
endif()
string(REGEX MATCH ${name_regex} name ${release_data})
set(os_name ${CMAKE_MATCH_1})
string(REGEX MATCH ${version_regex} version ${release_data})
set(os_name "${os_name} ${CMAKE_MATCH_1}")
if(os_name)
set(${res_var} ${os_name} PARENT_SCOPE)
else ()
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
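The parsing the removed get_linux_name() used to perform can be sketched in Python; the regexes mirror the DISTRIB_ID / DISTRIB_RELEASE patterns from the deleted lines, and the /etc/lsb-release sample content is hypothetical:

```python
import re

# Hypothetical /etc/lsb-release content.
release_data = "DISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=20.04\n"

# Same capture groups as the deleted CMake regexes.
name = re.search(r"DISTRIB_ID=([^ \n]*)\n", release_data)
version = re.search(r"DISTRIB_RELEASE=([0-9]+(\.[0-9]+)?)", release_data)
os_name = f"{name.group(1)} {version.group(1)}" if name and version else "NOTFOUND"
print(os_name)  # Ubuntu 20.04
```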


@@ -99,6 +99,10 @@ function(ov_native_compile_external_project)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}")
endif()
if(DEFINED CMAKE_MAKE_PROGRAM)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}")
endif()
ExternalProject_Add(${ARG_TARGET_NAME}
# Directory Options
SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"


@@ -25,7 +25,7 @@ macro(ov_common_libraries_cpack_set_dirs)
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR licenses)
ov_get_pyversion(pyversion)
if(pyversion)


@@ -31,6 +31,7 @@ macro(ov_debian_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
# non-native stuff


@@ -29,6 +29,7 @@ macro(ov_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR runtime/cmake)
set(OV_CPACK_OPENVINO_CMAKEDIR runtime/cmake)
set(OV_CPACK_DOCDIR docs)
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_SAMPLESDIR samples)
set(OV_CPACK_WHEELSDIR tools)
set(OV_CPACK_TOOLSDIR tools)
@@ -99,10 +100,10 @@ endif()
# if <FILE> is a symlink, we resolve it, but install file with a name of symlink
#
function(ov_install_with_name file component)
if((APPLE AND file MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(file MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
get_filename_component(actual_name "${file}" NAME)
if((APPLE AND actual_name MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(actual_name MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
if(IS_SYMLINK "${file}")
get_filename_component(actual_name "${file}" NAME)
get_filename_component(file "${file}" REALPATH)
set(install_rename RENAME "${actual_name}")
endif()
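The behavioral change in ov_install_with_name — matching the versioned-library pattern against the basename instead of the whole path — can be sketched in Python (the path is illustrative, and ".so" stands in for ${CMAKE_SHARED_LIBRARY_SUFFIX}):

```python
import os
import re

# Illustrative versioned shared-library path.
path = "/opt/openvino/lib/libfoo.so.2300"

# The fix: run the pattern on the basename, so directory components
# can no longer produce a false match.
actual_name = os.path.basename(path)
is_versioned = re.match(r"^.*\.so\.[0-9]+$", actual_name) is not None
print(is_versioned)  # True
```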
@@ -162,7 +163,7 @@ elseif(CPACK_GENERATOR STREQUAL "RPM")
include(packaging/rpm/rpm)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(packaging/nsis)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(packaging/common-libraries)
endif()


@@ -22,6 +22,11 @@ macro(ov_rpm_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
# TODO:
# 1. define python installation directories for RPM packages
# 2. make sure only a single version of python API can be installed at the same time (define conflicts section)
# set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
ov_get_pyversion(pyversion)


@@ -17,20 +17,44 @@ if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(arch_flag X86_64)
set(host_arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(arch_flag X86)
set(host_arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(arch_flag AARCH64)
set(host_arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(arch_flag ARM)
set(host_arch_flag ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(arch_flag RISCV64)
set(host_arch_flag RISCV64)
endif()
set(HOST_${arch_flag} ON)
set(HOST_${host_arch_flag} ON)
macro(_ie_process_msvc_generator_platform arch_flag)
macro(_ov_detect_arch_by_processor_type)
if(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
endif()
endmacro()
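The matching order used by _ov_detect_arch_by_processor_type() above can be sketched in Python (patterns copied from the new CMake branches; note that CMake's MATCHES is an unanchored search):

```python
import re

def detect_arch(processor: str) -> str:
    # First match wins, mirroring the elseif chain above.
    checks = [
        ("X86_64",  r"amd64.*|x86_64.*|AMD64.*"),
        ("X86",     r"i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm"),
        ("AARCH64", r"^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)"),
        ("ARM",     r"^(arm.*|ARM.*)"),
        ("RISCV64", r"^riscv64$"),
    ]
    for arch, pattern in checks:
        if re.search(pattern, processor):
            return arch
    return "UNKNOWN"

print(detect_arch("aarch64"))  # AARCH64
```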
macro(_ov_process_msvc_generator_platform)
# if cmake -A <ARM|ARM64|x64|Win32> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(AARCH64 ON)
@@ -41,45 +65,30 @@ macro(_ie_process_msvc_generator_platform arch_flag)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(X86 ON)
else()
set(${arch_flag} ON)
_ov_detect_arch_by_processor_type()
endif()
endmacro()
# TODO: why OpenCV is found by cmake
if(MSVC64 OR MINGW64)
_ie_process_msvc_generator_platform(${arch_flag})
_ov_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
_ie_process_msvc_generator_platform(${arch_flag})
elseif(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
_ov_process_msvc_generator_platform()
else()
_ov_detect_arch_by_processor_type()
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
set(EMSCRIPTEN ON)
endif()
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN))
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN OR CYGWIN))
set(LINUX ON)
endif()
if(NOT DEFINED CMAKE_HOST_LINUX AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
if(CMAKE_VERSION VERSION_LESS 3.25 AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
# the variable is available since 3.25
# https://cmake.org/cmake/help/latest/variable/CMAKE_HOST_LINUX.html
set(CMAKE_HOST_LINUX ON)
endif()


@@ -40,6 +40,7 @@ function(ieTargetLinkWholeArchive targetName)
"-Wl,-noall_load"
)
else()
# non-Apple Clang and GCC / MinGW
list(APPEND libs
"-Wl,--whole-archive"
${staticLib}


@@ -22,7 +22,7 @@ else()
set(ENABLE_INTEL_GPU_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
# oneDNN doesn't support old compilers and android builds for now, so we'll
@@ -34,6 +34,10 @@ endif()
ie_dependent_option (ENABLE_ONEDNN_FOR_GPU "Enable oneDNN with GPU support" ${ENABLE_ONEDNN_FOR_GPU_DEFAULT} "ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library through INTEL_VTUNE_DIR variable." OFF)
ie_option_enum(ENABLE_PROFILING_FILTER "Enable or disable ITT counter groups.\
@@ -81,41 +85,45 @@ ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in Open
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
ie_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF)
ie_dependent_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_IR_V7_READER "Enables IR v7 reader" ${BUILD_SHARED_LIBS} "ENABLE_TESTS;ENABLE_INTEL_GNA" OFF)
ie_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON)
ie_dependent_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON "NOT MINGW64" OFF)
ie_option (ENABLE_MULTI "Enables MULTI Device Plugin" ON)
ie_option (ENABLE_AUTO "Enables AUTO Device Plugin" ON)
ie_option (ENABLE_AUTO_BATCH "Enables Auto-Batching Plugin" ON)
ie_option (ENABLE_HETERO "Enables Hetero Device Plugin" ON)
ie_option (ENABLE_TEMPLATE "Enable template plugin" ON)
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "NOT BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_dependent_option (ENABLE_BEH_TESTS "tests oriented to check OpenVINO Runtime API correctness" ON "ENABLE_TESTS" OFF)
ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)
ie_option (ENABLE_OPENCV "enables custom OpenCV download" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
set(OPENVINO_EXTRA_MODULES "" CACHE STRING "Extra paths for extra modules to include into OpenVINO build")
ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" ON)
if(CMAKE_HOST_LINUX AND LINUX)
# Debian packages are enabled on Ubuntu systems
# so, system TBB / pugixml / OpenCL can be tried for usage
@@ -131,40 +139,37 @@ else()
set(ENABLE_SYSTEM_TBB_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
endif()
if(BUILD_SHARED_LIBS)
set(ENABLE_SYSTEM_PUGIXML_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
else()
# for static libraries case libpugixml.a must be compiled with -fPIC
# but we still need an ability to compile with system PugiXML and BUILD_SHARED_LIBS
# for Conan case where everything is compiled statically
set(ENABLE_SYSTEM_PUGIXML_DEFAULT OFF)
endif()
# users wants to use his own TBB version, specific either via env vars or cmake options
if(DEFINED ENV{TBBROOT} OR DEFINED ENV{TBB_DIR} OR DEFINED TBB_DIR OR DEFINED TBBROOT)
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()
# for static libraries case libpugixml.a must be compiled with -fPIC
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Use the system version of OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" ON)
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Enables use of system protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "Enables use of system TBB" ${ENABLE_SYSTEM_TBB_DEFAULT}
"THREADING MATCHES TBB" OFF)
# TODO: turn it off by default during the work on cross-os distribution, because pugixml is not
# available out of box on all systems (like RHEL, UBI)
ie_option (ENABLE_SYSTEM_PUGIXML "Enables use of system PugiXML" ${ENABLE_SYSTEM_PUGIXML_DEFAULT})
# the option is on by default, because we use only flatc compiler and don't use any libraries
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
ie_dependent_option(ENABLE_SYSTEM_SNAPPY "Enables use of system version of snappy" OFF "ENABLE_SNAPPY_COMPRESSION;BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Enables use of system OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT}
"ENABLE_INTEL_GPU" OFF)
# the option is turned off by default, because we compile our own static version of protobuf
# with LTO and -fPIC options, while system one does not have such flags
ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND" OFF)
# the option is turned off by default, because we don't want to have a dependency on libsnappy.so
ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
"ENABLE_SNAPPY_COMPRESSION" OFF)
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)
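For readers unfamiliar with the `ie_dependent_option` helper used throughout this file: it exposes an option only while its dependency expression holds, otherwise forcing a fallback value. The sketch below is an assumption about its behavior, modeled on CMake's standard `cmake_dependent_option`, which follows the same signature:

```cmake
# Sketch (assumption): ie_dependent_option(<name> "<doc>" <default> "<depends>" <force>)
# is presumed to mirror CMake's cmake_dependent_option — the option takes
# <default> only while every condition in <depends> is true; otherwise it is
# forced to <force> and hidden from the cache UI.
include(CMakeDependentOption)
cmake_dependent_option(ENABLE_GPU_DEBUG_CAPS
    "enable GPU debug capabilities at runtime" ON
    "ENABLE_DEBUG_CAPS" OFF)
```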


@@ -10,8 +10,8 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
# filter out some components, which are not needed to be wrapped to conda-forge | brew
if(# python is not a part of conda | brew
# filter out some components, which are not needed to be wrapped to conda-forge | brew | conan
if(# python is not a part of conda | brew | conan
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# python wheels are not needed to be wrapped by conda | brew packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND


@@ -93,7 +93,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with debian packages from Intel install team
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
#


@@ -6,7 +6,7 @@ if(CPACK_GENERATOR STREQUAL "DEB")
include(cmake/packaging/debian.cmake)
elseif(CPACK_GENERATOR STREQUAL "RPM")
include(cmake/packaging/rpm.cmake)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(cmake/packaging/common-libraries.cmake)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(cmake/packaging/nsis.cmake)


@@ -79,7 +79,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with rpm packages from Intel install team
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")


@@ -142,6 +142,14 @@ if(ENABLE_SYSTEM_PUGIXML)
endif()
endif()
set(_IE_nlohmann_json_FOUND "@nlohmann_json_FOUND@")
if(_IE_nlohmann_json_FOUND)
find_dependency(nlohmann_json)
set_target_properties(nlohmann_json::nlohmann_json PROPERTIES IMPORTED_GLOBAL ON)
add_library(IE::nlohmann_json ALIAS nlohmann_json::nlohmann_json)
endif()
unset(_IE_nlohmann_json_FOUND)
# inherit OpenCV from main IE project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)


@@ -85,9 +85,9 @@
#
# `OpenVINO_VERSION_MAJOR`
# Major version component
#
#
# `OpenVINO_VERSION_MINOR`
# minor version component
# Minor version component
#
# `OpenVINO_VERSION_PATCH`
# Patch version component
@@ -138,7 +138,7 @@ endmacro()
macro(_ov_find_tbb)
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
set(enable_pkgconfig_tbb "@tbb_FOUND@")
# try tbb.pc
@@ -153,10 +153,10 @@ macro(_ov_find_tbb)
endif()
pkg_search_module(tbb
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
@@ -223,28 +223,185 @@ macro(_ov_find_tbb)
PATHS ${_tbb_bind_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
unset(_tbb_bind_dir)
endif()
unset(install_tbbbind)
endif()
endmacro()
macro(_ov_find_pugixml)
set(_OV_ENABLE_SYSTEM_PUGIXML "@ENABLE_SYSTEM_PUGIXML@")
if(_OV_ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface AND NOT ANDROID)
_ov_find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
_ov_find_dependency(PugiXML REQUIRED)
endif()
if(PugiXML_FOUND)
if(TARGET pugixml)
set(_ov_pugixml_target pugixml)
elseif(TARGET pugixml::pugixml)
set(_ov_pugixml_target pugixml::pugixml)
endif()
if(OpenVINODeveloperPackage_DIR)
set_property(TARGET ${_ov_pugixml_target} PROPERTY IMPORTED_GLOBAL ON)
# align with build tree
add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
endif()
unset(_ov_pugixml_target)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
if(OpenVINODeveloperPackage_DIR)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
endif()
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
endmacro()
macro(_ov_find_itt)
set(_ENABLE_PROFILING_ITT "@ENABLE_PROFILING_ITT@")
# whether 'ittapi' is found via find_package
set(_ENABLE_SYSTEM_ITTAPI "@ittapi_FOUND@")
if(_ENABLE_PROFILING_ITT AND _ENABLE_SYSTEM_ITTAPI)
_ov_find_dependency(ittapi)
endif()
unset(_ENABLE_PROFILING_ITT)
unset(_ENABLE_SYSTEM_ITTAPI)
endmacro()
macro(_ov_find_ade)
set(_OV_ENABLE_GAPI_PREPROCESSING "@ENABLE_GAPI_PREPROCESSING@")
# whether 'ade' is found via find_package
set(_ENABLE_SYSTEM_ADE "@ade_FOUND@")
if(_OV_ENABLE_GAPI_PREPROCESSING AND _ENABLE_SYSTEM_ADE)
_ov_find_dependency(ade 0.1.2)
endif()
unset(_OV_ENABLE_GAPI_PREPROCESSING)
unset(_ENABLE_SYSTEM_ADE)
endmacro()
macro(_ov_find_intel_cpu_dependencies)
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
if(_OV_ENABLE_CPU_ACL)
if(_ov_as_external_package)
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set(_ov_find_acl_options NO_DEFAULT_PATH)
set(_ov_find_acl_path "${CMAKE_CURRENT_LIST_DIR}")
else()
set_and_check(_ov_find_acl_path "@PACKAGE_FIND_ACL_PATH@")
endif()
_ov_find_dependency(ACL
NO_MODULE
PATHS "${_ov_find_acl_path}"
${_ov_find_acl_options})
unset(ARM_COMPUTE_LIB_DIR)
unset(_ov_find_acl_path)
unset(_ov_find_acl_options)
endif()
unset(_OV_ENABLE_CPU_ACL)
endmacro()
macro(_ov_find_intel_gpu_dependencies)
set(_OV_ENABLE_INTEL_GPU "@ENABLE_INTEL_GPU@")
set(_OV_ENABLE_SYSTEM_OPENCL "@ENABLE_SYSTEM_OPENCL@")
if(_OV_ENABLE_INTEL_GPU AND _OV_ENABLE_SYSTEM_OPENCL)
set(_OV_OpenCLICDLoader_FOUND "@OpenCLICDLoader_FOUND@")
if(_OV_OpenCLICDLoader_FOUND)
_ov_find_dependency(OpenCLICDLoader)
else()
_ov_find_dependency(OpenCL)
endif()
unset(_OV_OpenCLICDLoader_FOUND)
endif()
unset(_OV_ENABLE_INTEL_GPU)
unset(_OV_ENABLE_SYSTEM_OPENCL)
endmacro()
macro(_ov_find_intel_gna_dependencies)
set(_OV_ENABLE_INTEL_GNA "@ENABLE_INTEL_GNA@")
if(_OV_ENABLE_INTEL_GNA AND NOT libGNA_FOUND)
if(_OV_ENABLE_INTEL_GNA)
set_and_check(GNA_PATH "@PACKAGE_GNA_PATH@")
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(GNA_PATH)
endif()
unset(_OV_ENABLE_INTEL_GNA)
endmacro()
macro(_ov_find_protobuf_frontend_dependency)
set(_OV_ENABLE_SYSTEM_PROTOBUF "@ENABLE_SYSTEM_PROTOBUF@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_PROTOBUF AND NOT TARGET protobuf::libprotobuf)
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_PROTOBUF)
endmacro()
macro(_ov_find_tensorflow_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_SNAPPY "@ENABLE_SYSTEM_SNAPPY@")
set(_ov_snappy_lib "@ov_snappy_lib@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_SNAPPY AND NOT TARGET ${_ov_snappy_lib})
_ov_find_dependency(Snappy @Snappy_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_SNAPPY)
unset(_ov_snappy_lib)
set(PACKAGE_PREFIX_DIR ${_ov_package_prefix_dir})
endmacro()
macro(_ov_find_onnx_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_ONNX "@ENABLE_SYSTEM_ONNX@")
if(_OV_ENABLE_SYSTEM_ONNX)
_ov_find_dependency(ONNX @ONNX_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_ONNX)
endmacro()
function(_ov_target_no_deprecation_error)
if(NOT MSVC)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
@@ -265,13 +422,41 @@ endfunction()
# OpenVINO config
#
cmake_policy(PUSH)
# we need CMP0057 to allow IN_LIST in if() command
if(POLICY CMP0057)
cmake_policy(SET CMP0057 NEW)
else()
message(FATAL_ERROR "OpenVINO requires CMake 3.3 or newer")
endif()
# need to store current PACKAGE_PREFIX_DIR, because it's overwritten by sub-package one
set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(_OV_ENABLE_OPENVINO_BUILD_SHARED "@BUILD_SHARED_LIBS@")
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
endif()
if(NOT _OV_ENABLE_OPENVINO_BUILD_SHARED)
# common openvino dependencies
_ov_find_tbb()
_ov_find_itt()
_ov_find_pugixml()
# preprocessing dependencies
_ov_find_ade()
# frontend dependencies
_ov_find_protobuf_frontend_dependency()
_ov_find_tensorflow_frontend_dependencies()
_ov_find_onnx_frontend_dependencies()
# plugin dependencies
_ov_find_intel_cpu_dependencies()
_ov_find_intel_gpu_dependencies()
_ov_find_intel_gna_dependencies()
endif()
@@ -279,13 +464,26 @@ _ov_find_dependency(Threads)
unset(_OV_ENABLE_OPENVINO_BUILD_SHARED)
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
set(_ov_imported_libs openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow
openvino::frontend::pytorch openvino::frontend::tensorflow_lite)
if(_ov_as_external_package)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target})
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
set_property(TARGET ${target} PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO RELEASE)
endif()
unset(imported_configs)
endif()
endforeach()
# WA for cmake version < 3.16 which does not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
foreach(type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
foreach(tbb_target TBB::tbb TBB::tbbmalloc PkgConfig::tbb)
if(TARGET ${tbb_target})
@@ -326,12 +524,12 @@ endif()
# Apply common functions
#
foreach(target openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow)
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target} AND _ov_as_external_package)
_ov_target_no_deprecation_error(${target})
endif()
endforeach()
unset(_ov_imported_libs)
unset(_ov_as_external_package)
# restore PACKAGE_PREFIX_DIR
@@ -349,3 +547,7 @@ unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND)
cmake_policy(POP)


@@ -56,6 +56,7 @@ find_dependency(OpenVINO
NO_DEFAULT_PATH)
_ov_find_tbb()
_ov_find_pugixml()
foreach(component @openvino_export_components@)
# TODO: remove legacy targets from some tests
@@ -65,58 +66,6 @@ foreach(component @openvino_export_components@)
# endif()
endforeach()
if(ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface)
find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
find_dependency(PugiXML)
endif()
if(PugiXML_FOUND)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(openvino::pugixml ALIAS pugixml)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED GLOBAL)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)


@@ -0,0 +1,95 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Prerequisites:
#
# Build platform: Ubuntu
# apt-get install mingw-w64 mingw-w64-tools g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64
#
# Build platform: macOS
# brew install mingw-w64
#
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
set(PKG_CONFIG_EXECUTABLE x86_64-w64-mingw32-pkg-config CACHE PATH "Path to Windows x86_64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
SET(WIN32)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_program(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(WIN32)
SET(UNIX)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_package(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()


@@ -24,7 +24,7 @@ set(CMAKE_LINKER ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-ld)
set(CMAKE_OBJCOPY ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objcopy)
set(CMAKE_OBJDUMP ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objdump)
set(CMAKE_READELF ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-readelf)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to ARM64 pkg-config")
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to RISC-V pkg-config")
# Don't run the linker on compiler check
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)


@@ -0,0 +1,75 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR amd64)
set(CMAKE_C_COMPILER x86_64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER x86_64-linux-gnu-g++)
set(CMAKE_STRIP x86_64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to amd64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
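The `find_host_program` / `find_host_package` overrides above let a cross-compiling project look up build-time tools on the host while keeping library and header searches inside the target sysroot. A typical (hypothetical) use from a project configured with this toolchain file:

```cmake
# With the toolchain file above passed via -DCMAKE_TOOLCHAIN_FILE=...,
# host tools are found outside the sysroot while target libraries are not.
# (PYTHON_EXECUTABLE here is an illustrative cache variable name.)
find_host_program(PYTHON_EXECUTABLE NAMES python3 DOC "Host Python interpreter")
find_host_package(PythonInterp 3 QUIET)   # same pattern features.cmake uses
find_library(ZLIB_LIBRARY NAMES z)        # still searched in the target sysroot
```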

conan.lock (new file, 36 lines)

@@ -0,0 +1,36 @@
{
"version": "0.5",
"requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
"snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
"pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
"pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
"opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
"opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
"onnx/1.13.1#f11071c8aba52731a5205b028945acbb%1693130310.715",
"onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
"nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
"ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
"hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
"gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"ade/0.1.2a#b569ff943843abd004e65536e265a445%1688125447.482"
],
"build_requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
"patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
"m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
"libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
"gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
"automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
"autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
],
"python_requires": []
}
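Each entry in the `requires` arrays above follows Conan's full reference syntax, `name/version#recipe_revision%timestamp`. A small sketch (not part of any Conan tooling) that splits such a reference into its parts:

```python
def parse_conan_ref(ref: str) -> dict:
    """Split a Conan lockfile reference of the form
    name/version#recipe_revision%timestamp into its components."""
    name_version, _, rest = ref.partition("#")
    name, _, version = name_version.partition("/")
    revision, _, timestamp = rest.partition("%")
    return {
        "name": name,
        "version": version,
        "revision": revision,
        "timestamp": float(timestamp) if timestamp else None,
    }

ref = "onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235"
parsed = parse_conan_ref(ref)
print(parsed["name"], parsed["version"])  # onetbb 2021.10.0
```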

conanfile.txt (new file, 33 lines)

@@ -0,0 +1,33 @@
[requires]
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/[>=2022.09.30]
# opencl-clhpp-headers/[>=2022.09.30]
opencl-headers/[>=2022.09.30]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
onnx/1.13.1
nlohmann_json/[>=3.1.1]
pybind11/[>=2.10.1]
flatbuffers/[>=22.9.24]
[tool_requires]
cmake/[>=3.15]
patchelf/[>=0.12]
protobuf/3.21.9
flatbuffers/[>=22.9.24]
[options]
protobuf/*:lite=True
onetbb/*:tbbmalloc=True
onetbb/*:tbbproxy=True
flatbuffers/*:header_only=True
[generators]
CMakeDeps
CMakeToolchain


@@ -77,7 +77,7 @@ function(build_docs)
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()


@@ -0,0 +1,76 @@
# Datumaro {#datumaro_documentation}
@sphinxdirective
.. meta::
:description: Start working with Datumaro, which offers functionalities for basic data
import/export, validation, correction, filtration and transformations.
Datumaro provides basic data import/export (IE) for more than 35 public vision data formats,
as well as manipulation functionalities such as validation, correction, filtration, and
transformation. To enable web-scale training, it also aims to merge multiple heterogeneous
datasets through its comparator and merger. Datumaro is integrated into Geti™, OpenVINO™
Training Extensions, and CVAT to ease data preparation. Datumaro is open source and
available on `GitHub <https://github.com/openvinotoolkit/datumaro>`__.
Refer to the official `documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__ to learn more.
You can also explore the `Jupyter notebooks <https://github.com/openvinotoolkit/datumaro/tree/develop/notebooks>`__ for hands-on Datumaro practice.
Detailed Workflow
#################
.. image:: ./_static/images/datumaro.png
1. To start working with Datumaro, download public datasets or prepare your own annotated dataset.
.. note::
Datumaro provides a CLI `datum download` for downloading `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.
2. Import data into Datumaro and manipulate the dataset to improve data quality using `Validator`, `Corrector`, and `Filter`.
3. Compare two datasets and transform the label schemas (category information) before merging them.
4. Merge two datasets to a large-scale dataset.
.. note::
Several mergers are available: `ExactMerger`, `IntersectMerger`, and `UnionMerger`.
5. Split the unified dataset into subsets, e.g., `train`, `valid`, and `test` through `Splitter`.
.. note::
Data can be split into subsets with a given ratio, based on either the number of samples or
the number of annotations. See `SplitTask` for task-specific splitting.
6. Export the cleaned and unified dataset for follow-up workflows such as model training.
Go to :doc:`OpenVINO™ Training Extensions <ote_documentation>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
Datumaro Components
###################
* `Datumaro CLIs <https://openvinotoolkit.github.io/datumaro/stable/docs/command-reference/overview.html>`__
* `Datumaro APIs <https://openvinotoolkit.github.io/datumaro/stable/docs/reference/datumaro_module.html>`__
* `Datumaro data format <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/datumaro_format.html>`__
* `Supported data formats <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/formats/index.html>`__
Tutorials
#########
* `Basic skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/basic_skills/index.html>`__
* `Intermediate skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/intermediate_skills/index.html>`__
* `Advanced skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/advanced_skills/index.html>`__
Python Hands-on Examples
########################
* `Data IE <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/dataset_IO.html>`__
* `Data manipulation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/manipulate.html>`__
* `Data exploration <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/explore.html>`__
* `Data refinement <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/refine.html>`__
* `Data transformation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/transform.html>`__
* `Deep learning end-to-end use-cases <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/e2e_example.html>`__
@endsphinxdirective


@@ -1,33 +0,0 @@
# Running and Deploying Inference {#openvino_docs_deployment_guide_introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
Run and Deploy Locally <openvino_deployment_guide>
Deploy via Model Serving <ovms_what_is_openvino_model_server>
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
.. panels::
:doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
^^^^^^^^^^^^^^
Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
It utilizes resources available to the system and provides the quickest way of launching inference.
---
:doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
^^^^^^^^^^^^^^
Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
This way inference can use external resources instead of those available to the application itself.
Apart from the default deployment options, you may also :doc:`deploy your application for the TensorFlow framework with OpenVINO Integration <ovtf_integration>`.
@endsphinxdirective


@@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utili
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Explore OpenCV Graph API and other media processing frameworks
used for development of computer vision solutions.
.. toctree::
:maxdepth: 1


@@ -1,6 +1,12 @@
# Model Preparation {#openvino_docs_model_processing_introduction}
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
.. toctree::
:maxdepth: 1
:hidden:
@@ -10,22 +16,52 @@
omz_tools_downloader
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's :doc:`Open Model Zoo <model_zoo>`.
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows you to convert them to its own format, OpenVINO IR, providing a tool dedicated to this task.
Import a model using ``read_model()``
#################################################
:doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by :doc:`altering input shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`embedding preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` and :doc:`cutting training parts off <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate model conversion step, that is, a call to ``mo.convert_model``.
The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework :doc:`file formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
Conversion is not required for ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
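The dispatch behavior described above can be illustrated with a short, purely conceptual sketch. This is not OpenVINO code: the file extensions, function name, and return strings below are hypothetical stand-ins for how ``read_model()`` decides between reading an IR file "as-is" and running internal conversion.

```python
from pathlib import Path

# Hypothetical extensions standing in for the supported framework formats.
FRAMEWORK_FORMATS = {".onnx", ".pdmodel", ".pb", ".tflite"}

def read_model_dispatch(path: str) -> str:
    suffix = Path(path).suffix.lower()
    if suffix == ".xml":                 # OpenVINO IR: read without conversion
        return "read as-is"
    if suffix in FRAMEWORK_FORMATS:      # framework file: convert internally
        return "converted internally"
    raise ValueError(f"unsupported model format: {suffix}")

print(read_model_dispatch("model.xml"))   # → read as-is
print(read_model_dispatch("model.onnx"))  # → converted internally
```

Either way, the caller receives a single model object and does not have to care which path was taken.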
You can also convert a model from the original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using the ``convert_model()`` method. More details about ``convert_model()`` are provided in the :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
.. note::
``convert_model()`` also allows you to perform input/output cut, add pre-processing or add custom Python conversion extensions.
Convert a model with Python using ``mo.convert_model()``
###########################################################
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (a Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
.. image:: _static/images/model_conversion_diagram.svg
:alt: model conversion diagram
Convert a model using ``mo`` command-line tool
#################################################
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure as the ``mo.convert_model()`` method.
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the OpenVINO IR format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
* `Automate model-related tasks with Model Downloader and additional OMZ Tools <https://docs.openvino.ai/latest/omz_tools_downloader.html>`__.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
To begin with, you may want to :doc:`browse a database of models for use in your projects <model_zoo>`.
@endsphinxdirective


@@ -2,21 +2,25 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ is an ecosystem of utilities that have advanced capabilities, which help develop deep learning solutions.
.. toctree::
:maxdepth: 1
:hidden:
ote_documentation
ovtf_integration
datumaro_documentation
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
Neural Network Compression Framework (NNCF)
###########################################
**Neural Network Compression Framework (NNCF)**
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
@@ -27,8 +31,7 @@ More resources:
* `PyPI <https://pypi.org/project/nncf/>`__
OpenVINO™ Training Extensions
#############################
**OpenVINO™ Training Extensions**
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
@@ -38,71 +41,60 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__
OpenVINO™ Security Add-on
#########################
**OpenVINO™ Security Add-on**
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
More resources:
* `Documentation <https://docs.openvino.ai/latest/ovsa_get_started.html>`__
* :doc:`Documentation <ovsa_get_started>`
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__
OpenVINO™ integration with TensorFlow (OVTF)
############################################
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
More resources:
* `Documentation <https://github.com/openvinotoolkit/openvino_tensorflow>`__
* `PyPI <https://pypi.org/project/openvino-tensorflow/>`__
* `GitHub <https://github.com/openvinotoolkit/openvino_tensorflow>`__
DL Streamer
###########
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* `Documentation on GitHub <https://dlstreamer.github.io/index.html>`__
* `Installation Guide on GitHub <https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide>`__
DL Workbench
############
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
More resources:
* `Documentation <https://docs.openvino.ai/2022.3/workbench_docs_Workbench_DG_Introduction.html>`__
* `Docker Hub <https://hub.docker.com/r/openvino/workbench>`__
* `PyPI <https://pypi.org/project/openvino-workbench/>`__
Computer Vision Annotation Tool (CVAT)
######################################
An online, interactive video and image annotation tool for computer vision purposes.
More resources:
* `Documentation on GitHub <https://opencv.github.io/cvat/docs/>`__
* `Web application <https://www.cvat.ai/>`__
* `Docker Hub <https://hub.docker.com/r/openvino/cvat_server>`__
* `GitHub <https://github.com/openvinotoolkit/cvat>`__
Dataset Management Framework (Datumaro)
#######################################
**Dataset Management Framework (Datumaro)**
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* `Documentation on GitHub <https://openvinotoolkit.github.io/datumaro/docs/>`__
* :doc:`Overview <datumaro_documentation>`
* `PyPI <https://pypi.org/project/datumaro/>`__
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__
**Compile Tool**
Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
To learn which device supports the import / export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.
For more details on preprocessing steps, refer to the :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to the :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
**DL Workbench**
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
**OpenVINO™ integration with TensorFlow (OVTF)**
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.
@endsphinxdirective


@@ -1,55 +0,0 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
@sphinxdirective
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
This is all you need:
.. code-block:: python
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
* Intel® CPUs
* Intel® integrated GPUs
.. note::
For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated `GitHub repository <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs>`__.
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the `examples folder <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples>`__ in our GitHub repository.
Sample tutorials are also hosted on `Intel® DevCloud <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html>`__. The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes, compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
License
#######
**OpenVINO™ integration with TensorFlow** is licensed under `Apache License Version 2.0 <https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE>`__.
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
Support
#######
Submit your questions, feature requests and bug reports via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
How to Contribute
#################
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
* Share your proposal via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
* Submit a `pull request <https://github.com/openvinotoolkit/openvino_tensorflow/pulls>`__.
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it into the repository, provided that it meets the above-mentioned requirements and proves acceptable.
\* Other names and brands may be claimed as the property of others.
@endsphinxdirective


@@ -1,6 +1,12 @@
# OpenVINO™ Training Extensions {#ote_documentation}
@sphinxdirective
@sphinxdirective
.. meta::
:description: OpenVINO™ Training Extensions include advanced algorithms used
to create, train and convert deep learning models with OpenVINO
Toolkit for optimized inference.
OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
Deep Learning models and convert them using the `OpenVINO™
@@ -19,21 +25,22 @@ Detailed Workflow
.. note::
Prepare a separate dataset or split the dataset you have for more accurate quality evaluation.
3. Having successful evaluation results received, you have an opportunity to deploy your model or continue optimizing it, using NNCF and POT. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
3. Having successful evaluation results received, you have an opportunity to deploy your model or continue optimizing it, using NNCF. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
OpenVINO Training Extensions Components
#######################################
- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
* `OpenVINO Training Extensions API <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/api>`__
* `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/cli>`__
* `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/algorithms>`__
Tutorials
#########
`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
* `Base tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/base/index.html>`__
* `Advanced tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/advanced/index.html>`__
@endsphinxdirective


@@ -3,22 +3,46 @@
@sphinxdirective
.. meta::
:description: OpenVINO toolkit workflow usually involves preparation,
optimization, and compression of models, running inference and
deploying deep learning applications.
.. toctree::
:maxdepth: 1
:hidden:
Model Preparation <openvino_docs_model_processing_introduction>
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Running and Deploying Inference <openvino_docs_deployment_guide_introduction>
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
pytorch_2_0_torch_compile
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
| With Model Downloader and Model Optimizer guides, you will learn to download pre-trained models and convert them for use with OpenVINO™. You can use your own models or choose some from a broad selection provided in the Open Model Zoo.
| With the model conversion API guide, you will learn to convert pre-trained models for use with OpenVINO™. You can use your own models or choose some from a broad selection in online databases, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.
| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server. It describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.
| :doc:`Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
| This section describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
| :doc:`Option 1. Deployment via OpenVINO Runtime <openvino_deployment_guide>`
| Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
| It utilizes resources available to the system and provides the quickest way of launching inference.
| Deployment on a local system requires performing the steps from the running inference section.
| :doc:`Option 2. Deployment via Model Server <ovms_what_is_openvino_model_server>`
| Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
| This way inference can use external resources instead of those available to the application itself.
| Deployment on a model server can be done quickly and without performing any additional steps described in the running inference section.
@endsphinxdirective


@@ -0,0 +1,157 @@
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}
@sphinxdirective
The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
* compiled by TorchDynamo and "flattened",
* executed in eager mode as a fallback, due to unsupported Python constructs (like control-flow code).
2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.
How to Use
#################
To use ``torch.compile``, you need to add an import statement and define one of the two available backends:
| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch based tracing/scripting.
| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch Torchscript, and then converted to OpenVINO representation.
.. tab-set::
.. tab-item:: openvino
:sync: backend-openvino
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino.svg
:width: 992px
:height: 720px
:scale: 60%
:align: center
.. tab-item:: openvino_ts
:sync: backend-openvino-ts
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino_ts')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino_ts.svg
:width: 1088px
:height: 720px
:scale: 60%
:align: center
Environment Variables
+++++++++++++++++++++++++++
* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
this variable to ``GPU.0``, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
This makes them available for the following application executions, reducing the first-inference latency.
By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.
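The variables above might be configured from Python as in the following sketch. The variable names come from the list above; the values are illustrative assumptions, and the ``torch.compile`` call itself is shown only as comments so the sketch stays self-contained:

```python
import os

# Illustrative values; set before the model is compiled with torch.compile.
os.environ["OPENVINO_TORCH_BACKEND_DEVICE"] = "GPU.0"   # default device: CPU
os.environ["OPENVINO_TORCH_MODEL_CACHING"] = "True"     # default: False
os.environ["OPENVINO_TORCH_CACHE_DIR"] = "./ov_cache"   # default: ./cache

# The backend reads these back at run time, roughly like this:
device = os.environ.get("OPENVINO_TORCH_BACKEND_DEVICE", "CPU")
caching = os.environ.get("OPENVINO_TORCH_MODEL_CACHING", "False") == "True"
cache_dir = os.environ.get("OPENVINO_TORCH_CACHE_DIR", "./cache")

# import openvino.torch
# model = torch.compile(model, backend="openvino")
```

Setting the variables in the shell before launching the application works equally well; the in-process form is shown here only for brevity.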
Windows support
++++++++++++++++++++++++++
Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following the instructions below:
1. Install the PyTorch nightly wheel file `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__,
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``
3. Find the function called ``check_if_dynamo_supported()``:
.. code-block:: python

   def check_if_dynamo_supported():
       if sys.platform == "win32":
           raise RuntimeError("Windows not yet supported for torch.compile")
       if sys.version_info >= (3, 11):
           raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
4. Comment out the first two lines of this function, so that it looks like this:
.. code-block:: python

   def check_if_dynamo_supported():
       # if sys.platform == "win32":
       #     raise RuntimeError("Windows not yet supported for torch.compile")
       if sys.version_info >= (3, 11):
           raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for Stable Diffusion-based image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging OpenVINO
``torch.compile`` capability. Detailed instructions are available in
the `Stable Diffusion WebUI repository <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__.
Architecture
#################
The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:
* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
(PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
that cannot be extracted to FX graph are executed in the native Python environment).
It maintains the eager-mode capabilities using
`Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
generated graphs are valid.
* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.
When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs and the graph partitioning module traverses inlined FX graph to identify
operators supported by OpenVINO.
All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If the subgraph
fails during OpenVINO conversion, the subgraph falls back to PyTorch's default inductor backend.
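The clustering idea described above can be sketched as a toy function. This is not the actual OpenVINO partitioner: it works on a flat list of operator names rather than a real FX graph, and the operator set is hypothetical. It only shows how maximal runs of supported operators form OpenVINO submodules while everything else falls back to PyTorch:

```python
# Hypothetical set of operators the backend can handle.
SUPPORTED = {"conv2d", "relu", "add", "matmul"}

def partition(ops):
    """Cluster a linear op sequence into ('openvino', [...]) runs and
    single-op ('pytorch_fallback', [...]) entries for unsupported ops."""
    clusters, current = [], []
    for op in ops:
        if op in SUPPORTED:
            current.append(op)          # extend the current supported run
        else:
            if current:                 # close the supported run, if any
                clusters.append(("openvino", current))
                current = []
            clusters.append(("pytorch_fallback", [op]))
    if current:
        clusters.append(("openvino", current))
    return clusters

print(partition(["conv2d", "relu", "custom_op", "matmul", "add"]))
# → [('openvino', ['conv2d', 'relu']),
#    ('pytorch_fallback', ['custom_op']),
#    ('openvino', ['matmul', 'add'])]
```

The real partitioner additionally has to respect graph topology and data dependencies, but the supported/fallback clustering principle is the same.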
Additional Resources
############################
* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`__
@endsphinxdirective


@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn the details of custom kernel support for the GPU device to
enable operations not supported by OpenVINO.
To enable operations not supported by OpenVINO™ out of the box, you may need an extension for OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
@@ -13,18 +18,20 @@ There are two options for using the custom operation configuration file:
.. tab-set::
.. tab-item:: C++
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
:language: cpp
:fragment: [part0]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.py
:language: python
:fragment: [part0]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
:language: cpp
:fragment: [part0]
All OpenVINO samples, except the trivial ``hello_classification``, and most Open Model Zoo demos
feature a dedicated command-line option ``-c`` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
@@ -235,7 +242,8 @@ Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see the `Configuration File Format <#config-file-format>`__.
.. code-block:: cpp
.. code-block:: xml
:force:
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extensibility API, which allows adding
support for models with custom operations and their further implementation
in applications.
.. toctree::
:maxdepth: 1
:hidden:
@@ -9,7 +14,6 @@
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
.. toctree::
:maxdepth: 1
@@ -18,14 +22,20 @@
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
being deprecated and will be removed entirely in the future). The list of supported operations is different for each of the supported frameworks.
To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_resources_supported_operations_frontend>`.
Custom operations, which are not included in the list, are not recognized by OpenVINO out of the box. The need for a custom operation may arise in two cases:
1. A new or rarely used regular framework operation is not supported in OpenVINO yet.
2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. The OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for Model Optimizer and OpenVINO Runtime.
@@ -54,9 +64,9 @@ Mapping of custom operation is implemented differently, depending on model forma
1. If a model is represented in the ONNX (including models exported from PyTorch to ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
The existence of two simultaneous approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite, and TensorFlow) and legacy frontends (Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
@@ -85,6 +95,13 @@ Extensions can be loaded from a code with the ``:ref:`ov::Core::add_extension <
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [add_extension]
.. tab-item:: C++
:sync: cpp
@@ -92,18 +109,18 @@ Extensions can be loaded from a code with the ``:ref:`ov::Core::add_extension <
:language: cpp
:fragment: [add_extension]
The ``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation emitted by Model Optimizer. In order to load original model directly to the runtime, add a mapping extension:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [add_frontend_extension]
.. tab-item:: C++
:sync: cpp
@@ -111,16 +128,11 @@ The ``Identity`` is a custom operation class defined in :doc:`Custom Operation G
:language: cpp
:fragment: [add_frontend_extension]
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
.. _create_a_library_with_extensions:
Create a Library with Extensions
++++++++++++++++++++++++++++++++
@@ -165,13 +177,6 @@ This CMake script finds OpenVINO, using the ``find_package`` CMake command.
.. tab-set::
.. tab-item:: Python
:sync: py
@@ -179,6 +184,13 @@ This CMake script finds OpenVINO, using the ``find_package`` CMake command.
:language: python
:fragment: [add_extension_lib]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [add_extension_lib]
See Also
########
@@ -187,4 +199,4 @@ See Also
* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO™ Extension API which enables registering
custom operations to support models with operations
not supported by OpenVINO.
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out of the box. This capability requires writing code in C++, so if you are using Python to develop your application, you need to build a separate shared library implemented in C++ first and load it in Python using the ``add_extension`` API. Please refer to :ref:`Create library with extensions <create_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.
Operation Class

View File

@@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn how to use frontend extension classes to facilitate the mapping
of custom operations from the framework model representation to the OpenVINO
representation.
The goal of this chapter is to explain how to use Frontend extension classes to facilitate
mapping of custom operations from framework model representation to OpenVINO representation.
Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to
@@ -19,6 +25,11 @@ guide.
operation that is a placeholder for your real custom operation. You can review the complete code,
which is fully compilable, to see how it works.
.. note::
You can find more examples of extensions in `openvino_contrib repository <https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/custom_operations>`_.
Single Operation Mapping with OpExtension
#########################################
@@ -83,6 +94,13 @@ In this case, you can directly say that 'MyRelu' -> ``Relu`` mapping should be u
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_MyRelu]
.. tab-item:: C++
:sync: cpp
@@ -90,13 +108,6 @@ In this case, you can directly say that 'MyRelu' -> ``Relu`` mapping should be u
:language: cpp
:fragment: [frontend_extension_MyRelu]
In the resulting converted OpenVINO model, “MyRelu” operation will be replaced by the standard operation
``Relu`` from the latest available OpenVINO operation set. Notice that when standard operation is used,
@@ -108,10 +119,18 @@ as it was demonstrated with ``TemplateExtension::Identity``.
Attribute Mapping
++++++++++++++++++
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant.
Attributes in OpenVINO operators are identified by their names, so for frameworks that also have named attributes (like TensorFlow, PaddlePaddle, ONNX),
you can specify name to name mapping. For frameworks where OpenVINO operator's attributes can be mapped to one of the framework
operator inputs (like PyTorch), there's a name to input index mapping.
Named attributes mapping
^^^^^^^^^^^^^^^^^^^^^^^^
If the set of attributes in framework representation and OpenVINO representation completely match by their names and types,
no attribute mapping has to be specified in OpExtension constructor parameters. The attributes are discovered and mapped automatically
based on ``visit_attributes`` method that should be defined for any OpenVINO operation.
Imagine you have CustomOperation class implementation that has two attributes with names: ``attr1`` and ``attr2``.
@@ -119,14 +138,15 @@ Imagine you have CustomOperation class implementation that has two attributes wi
:language: cpp
:fragment: [frontend_extension_CustomOperation]
And the original model in the framework representation also has an operation named ``CustomOperation`` with the same
``attr1`` and ``attr2`` attributes. Then with the following code:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is]
Both ``attr1`` and ``attr2`` are copied from framework representation to OpenVINO representation automatically.
If, for some reason, the names of attributes differ but the values can still be copied "as-is", you can pass an attribute
name mapping in the ``OpExtension`` constructor:
@@ -144,48 +164,123 @@ to achieve that do the following:
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set]
In conclusion, each attribute of the target OpenVINO operation should be initialized in one of three ways:
1. Set automatically via name matching
2. Mapped by attribute name
3. Set to a constant value
This is achieved by specifying maps as arguments for ``OpExtension`` constructor.
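The three initialization routes can be sketched in plain Python (a toy illustration only; ``fw_attrs``, ``name_map``, and ``defaults`` are hypothetical names, not OpenVINO API):

```python
# Toy sketch (not OpenVINO API): resolving each target attribute by
# (1) same-name copy, (2) mapped name, or (3) constant default.
fw_attrs = {"attr1": 10, "fw_beta": 0.5}   # framework operation attributes
name_map = {"beta": "fw_beta"}             # (2) renamed attributes
defaults = {"mode": "linear"}              # (3) constant values

def resolve(ov_name):
    if ov_name in fw_attrs:                # (1) name matches directly
        return fw_attrs[ov_name]
    if ov_name in name_map:                # (2) mapped by name
        return fw_attrs[name_map[ov_name]]
    return defaults[ov_name]               # (3) constant default
```

Each attribute of the target operation must be resolvable by exactly one of these routes, which is why ``OpExtension`` rejects configurations that leave an attribute uncovered.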
Attribute mapping with named inputs and outputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mappings in previous examples assume that inputs and outputs of an operator in framework model representation come
with a particular order so you can directly map framework operation input ``0`` to OpenVINO operation input ``0`` and so on.
That is not always the case. For frameworks like PaddlePaddle, operation inputs and outputs are identified by their names
and may be defined in any order. To map them to OpenVINO operation inputs and outputs, you have to specify that order yourself.
This can be done by creating two vectors of strings, one for inputs and one for outputs, where the framework operation
input name at position ``i`` maps to the OpenVINO operation input at position ``i`` (and similarly for outputs).
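The positional convention can be illustrated with a minimal sketch (plain Python with hypothetical names, not OpenVINO API):

```python
# Framework inputs/outputs are named; the position in these vectors
# defines the OpenVINO input/output index each name maps to.
input_names = ["A", "B", "C"]   # framework input name at i -> OpenVINO input i
output_names = ["X", "Y"]       # same convention for outputs

def ov_input_index(framework_name):
    # Look up which OpenVINO input a named framework input maps to
    return input_names.index(framework_name)
```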
Consider the following example. As before, we would like to map ``CustomOperation`` in the original model
to OpenVINO ``CustomOperation`` as-is (so their names and attribute names match). This time, the framework operation
inputs and outputs are not strictly ordered and are identified by their names ``A``, ``B``, ``C`` for inputs
and ``X``, ``Y`` for outputs. These can be mapped to the OpenVINO operation such that inputs
``A``, ``B``, ``C`` map to the first, second, and third OpenVINO ``CustomOperation`` input, and outputs
``X`` and ``Y`` map to the first and second OpenVINO ``CustomOperation`` output, respectively.
Given that, such custom operation can be registered by the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is_paddle]
The second example shows how to map the operation with named inputs and outputs, but when the names of attributes are different:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_paddle]
The last example shows how to map an operation with named inputs and outputs when, in order to map the framework
operation to the OpenVINO operation correctly, one of the attributes has to be set to a predefined value:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set_paddle]
Mapping attributes from operation inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For models (like PyTorch models), where operations have attributes on the input list, you can specify name to input index mapping.
For example, imagine you have created a custom OpenVINO operation that implements a variant of ELU activation function
with two attributes ``alpha`` and ``beta``:
.. math::
CustomElu=\left\lbrace
\begin{array}{ll}
beta * x & \textrm{if x > 0} \newline
alpha * (exp(x) - 1) & \textrm{otherwise}
\end{array}
\right.
Below is a snippet of ``CustomElu`` class showing how to define its attributes:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu]
Let's see an example of how you can map ``CustomElu`` to PyTorch `aten::elu <https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html>`_
(note that if ``beta`` is equal to ``1``, ``CustomElu`` works the same as ``aten::elu``).
``aten::elu`` has the ``alpha`` attribute as the second item on its input list, but it does not have ``beta``.
So, in order to map it to ``CustomElu``, you can use the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu_mapping]
This will map ``alpha`` to the second input and map ``beta`` attribute to constant value ``1.0f``.
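As a sanity check, the formula above can be written as a small pure-Python reference (a sketch for illustration only; ``custom_elu`` and ``elu`` are hypothetical helpers, not part of OpenVINO):

```python
import math

def custom_elu(x, alpha=1.0, beta=1.0):
    # beta * x                if x > 0
    # alpha * (exp(x) - 1)    otherwise
    if x > 0:
        return beta * x
    return alpha * (math.exp(x) - 1.0)

def elu(x, alpha=1.0):
    # reference ELU; custom_elu with beta == 1 should match it
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```

With ``beta == 1`` the two functions coincide, which is exactly why the mapping can pin ``beta`` to the constant ``1.0f`` when converting ``aten::elu``.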
An extension created this way can be used, for example, in a dynamic library. Refer to :ref:`Create a library with extensions <create_a_library_with_extensions>`.
Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
########################################################################
.. note::
Below solution works only for ONNX and Tensorflow frontends.
``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition
and that lets you specify the mapping between this operation and a frontend operation.
Let's consider the following example. Imagine you have an ONNX model with the ``CustomOp`` operation (which has a ``mode`` attribute),
a TensorFlow model with the ``CustomOpV3`` operation (which has an ``axis`` attribute), and a PaddlePaddle model with ``CustomOp`` (with a ``mode`` attribute)
that has an input named "X" and an output named "Out". All of them can be implemented with a single OpenVINO operation ``CustomOp``, as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_headers]
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_CustomOp]
Let's take a closer look at the parameters this macro takes (note that there are two flavors: the second one is used
to map PaddlePaddle operations, where input and output names have to be specified).
.. code-block:: cpp
OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
OPENVINO_FRAMEWORK_MAP(framework, input_names, output_names, name, attributes_map, attributes_values)
- ``framework`` - framework name.
- ``name`` - the framework operation name. It's optional if the OpenVINO custom operation name
(that is the name that is passed as the first parameter to ``OPENVINO_OP`` macro) is the same
as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
- ``input_names`` - vector of strings that specify the names of inputs (needed to map PaddlePaddle to OpenVINO operations),
- ``output_names`` - vector of strings that specify the names of outputs (needed to map PaddlePaddle to OpenVINO operations),
- ``attributes_map`` - used to provide a mapping between OpenVINO operation attribute and
framework operation attribute. Contains key-value pairs, where key is an OpenVINO operation
attribute name and value is its corresponding framework operation attribute name.
@@ -196,14 +291,29 @@ Let's take a closer look at the parameters this macro takes:
operation attribute name and the value is this attribute value. This parameter cannot be provided
if ``attributes_map`` contains all of OpenVINO operation attributes or if ``attributes_map`` is not provided.
In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used three times.
First, OpenVINO ``CustomOp`` is mapped to ONNX ``CustomOp`` operation, ``m_mode`` attribute is mapped to ``mode``
attribute, while the ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO ``CustomOp`` is mapped
to the TensorFlow ``CustomOpV3`` operation, the ``m_axis`` attribute is mapped to the ``axis`` attribute, while the ``m_mode``
attribute gets the default value ``"linear"``. Thirdly, OpenVINO ``CustomOp`` is mapped to the PaddlePaddle ``CustomOp`` operation,
the ``m_mode`` attribute is mapped to the ``mode`` attribute, while the ``m_axis`` attribute gets the default value ``-1``.
This mapping also specifies the input name "X" and the output name "Out".
The last step is to register this custom operation, as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
.. important::
To map an operation on a specific framework, you have to link to a respective
frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:
.. code-block:: sh
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
Mapping to Multiple Operations with ConversionExtension
#######################################################
@@ -222,7 +332,7 @@ operations constructing dependency graph of any complexity.
operation classes. Follow chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to
learn how to use OpenVINO operation classes to build a fragment of model for replacement.
The following example illustrates using ``ConversionExtension`` for conversion of "ThresholdedRelu"
from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
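The decomposition formula can be checked with a scalar pure-Python sketch (a hypothetical helper mirroring ``Multiply(x, Convert(Greater(x, alpha), type=float))``, not OpenVINO API):

```python
def thresholded_relu(x, alpha):
    # Multiply(x, Convert(Greater(x, alpha), type=float)):
    # the comparison yields 0.0 or 1.0, which gates the input value.
    return x * float(x > alpha)
```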
.. note::
@@ -233,6 +343,13 @@ from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, C
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU_header]
.. tab-item:: C++
:sync: cpp
@@ -240,14 +357,14 @@ from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, C
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU_header]
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU]
.. tab-item:: C++
:sync: cpp
@@ -256,24 +373,47 @@ from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, C
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU]
The next example shows how to use ``ConversionExtension`` to convert PyTorch
`aten::hardtanh <https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html>`_
to demonstrate how to use ``get_values_from_const_input`` function to fetch an attribute value from input:
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_aten_hardtanh]
To access original framework operation attribute value and connect to inputs, ``node`` object of type ``NodeContext`` is used. It has three main methods:
* ``NodeContext::get_input`` to get input with a given index,
* ``NodeContext::get_attribute`` to get attribute value with a given name,
* ``NodeContext::get_values_from_const_input`` to get an attribute with a given input index.
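The role of these accessors can be sketched with a toy stand-in class (illustration only; this is not the real ``NodeContext``):

```python
class ToyNodeContext:
    """Toy stand-in for NodeContext (illustration only, not OpenVINO API)."""

    def __init__(self, inputs, attrs):
        self._inputs = inputs   # ordered operation inputs
        self._attrs = attrs     # named framework attributes

    def get_input(self, index):
        # Return the input connected at the given index
        return self._inputs[index]

    def get_attribute(self, name):
        # Return the framework attribute value by name
        return self._attrs[name]

    def get_values_from_const_input(self, index):
        # Assumes the input at `index` is a constant holding the value,
        # as with PyTorch operations that pass attributes as inputs
        return self._inputs[index]
```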
The conversion function should return a vector of node outputs that are mapped to
corresponding outputs of the original framework operation in the same order.
Some frameworks require output names of the operation to be provided during conversion.
For PaddlePaddle operations, it is generally necessary to provide names for all outputs using the ``NamedOutputs`` container.
Usually those names can be found in source code of the individual operation in PaddlePaddle code.
The next example shows such conversion for the ``top_k_v2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_paddle_TopK]
For TensorFlow framework, if an operation has more than one output, it is recommended to assign names to
those outputs using the ``NamedOutputVector`` structure which allows both indexed and named output access.
For a description of TensorFlow operations, including the names of their outputs, refer to the
`tf.raw_ops <https://www.tensorflow.org/api_docs/python/tf/raw_ops/>`__ documentation page.
The next example shows such conversion for the ``TopKV2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_tf_TopK]
@endsphinxdirective

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Get to know how Graph Rewrite handles running multiple matcher passes on
ov::Model in a single graph traversal.
``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` is used to run multiple matcher passes on ``:ref:`ov::Model <doxid-classov_1_1_model>``` in a single graph traversal.
Example:

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to create a pattern, implement a callback, register
the pattern and Matcher to execute MatcherPass transformation
on a model.
``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>``` is used for pattern-based transformations.
Template for MatcherPass transformation class

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to use Model Pass transformation class to take entire
ov::Model as input and process it.
``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>``` is used for transformations that take entire ``:ref:`ov::Model <doxid-classov_1_1_model>``` as an input and process it.
Template for ModelPass transformation class

View File

@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn how to apply additional model optimizations or transform
unsupported subgraphs and operations, using OpenVINO™ Transformations API.
.. toctree::
:maxdepth: 1
:hidden:

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7c8ab4f15874d235968471bcf876c89c795d601e69891208107b8b72aa58eb1
size 70014

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d5ccf51fe1babb93d96d042494695a6a6e055d1f8ebf7eef5083d54d8987a23
size 58789

View File

@@ -1,40 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex:transformation]
from openvino.tools.mo.front.common.replacement import FrontReplacementSubgraph
from openvino.tools.mo.graph.graph import Graph
class Complex(FrontReplacementSubgraph):
enabled = True
def pattern(self):
return dict(
nodes=[
('strided_slice_real', dict(op='StridedSlice')),
('strided_slice_imag', dict(op='StridedSlice')),
('complex', dict(op='Complex')),
],
edges=[
('strided_slice_real', 'complex', {'in': 0}),
('strided_slice_imag', 'complex', {'in': 1}),
])
@staticmethod
def replace_sub_graph(graph: Graph, match: dict):
strided_slice_real = match['strided_slice_real']
strided_slice_imag = match['strided_slice_imag']
complex_node = match['complex']
# make sure that both strided slice operations get the same data as input
assert strided_slice_real.in_port(0).get_source() == strided_slice_imag.in_port(0).get_source()
# identify the output port of the operation producing data for the strided slice nodes
input_node_output_port = strided_slice_real.in_port(0).get_source()
input_node_output_port.disconnect()
# change the connection so now all consumers of "complex_node" get data from input node of strided slice nodes
complex_node.out_port(0).get_connection().set_source(input_node_output_port)
#! [complex:transformation]

View File

@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex_abs:transformation]
import numpy as np
from openvino.tools.mo.ops.elementwise import Pow
from openvino.tools.mo.ops.ReduceOps import ReduceSum
from openvino.tools.mo.front.common.replacement import FrontReplacementOp
from openvino.tools.mo.graph.graph import Graph, Node
from openvino.tools.mo.ops.const import Const
class ComplexAbs(FrontReplacementOp):
op = "ComplexAbs"
enabled = True
def replace_op(self, graph: Graph, node: Node):
pow_2 = Const(graph, {'value': np.float32(2.0)}).create_node()
reduce_axis = Const(graph, {'value': np.int32(-1)}).create_node()
pow_0_5 = Const(graph, {'value': np.float32(0.5)}).create_node()
sq = Pow(graph, dict(name=node.in_node(0).name + '/sq', power=2.0)).create_node([node.in_node(0), pow_2])
sum = ReduceSum(graph, dict(name=sq.name + '/sum')).create_node([sq, reduce_axis])
sqrt = Pow(graph, dict(name=sum.name + '/sqrt', power=0.5)).create_node([sum, pow_0_5])
return [sqrt.id]
#! [complex_abs:transformation]

View File

@@ -1,33 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# ! [fft_ext:extractor]
from ...ops.FFT import FFT
from openvino.tools.mo.front.extractor import FrontExtractorOp
class FFT2DFrontExtractor(FrontExtractorOp):
op = 'FFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 0
}
FFT.update_node_stat(node, attrs)
return cls.enabled
class IFFT2DFrontExtractor(FrontExtractorOp):
op = 'IFFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 1
}
FFT.update_node_stat(node, attrs)
return cls.enabled
# ! [fft_ext:extractor]

View File

@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [fft:operation]
from openvino.tools.mo.front.common.partial_infer.elemental import copy_shape_infer
from openvino.tools.mo.graph.graph import Graph
from openvino.tools.mo.ops.op import Op
class FFT(Op):
op = 'FFT'
enabled = False
def __init__(self, graph: Graph, attrs: dict):
super().__init__(graph, {
'type': self.op,
'op': self.op,
'version': 'custom_opset',
'inverse': None,
'in_ports_count': 1,
'out_ports_count': 1,
'infer': copy_shape_infer
}, attrs)
def backend_attrs(self):
return ['inverse']
#! [fft:operation]

View File

@@ -1,106 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [mri_demo:demo]
import numpy as np
import cv2 as cv
import argparse
import time
from openvino.inference_engine import IECore
def kspace_to_image(kspace):
    assert len(kspace.shape) == 3 and kspace.shape[-1] == 2
    fft = cv.idft(kspace, flags=cv.DFT_SCALE)
    img = cv.magnitude(fft[:, :, 0], fft[:, :, 1])
    return cv.normalize(img, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='MRI reconstruction demo for network from https://github.com/rmsouza01/Hybrid-CS-Model-MRI (https://arxiv.org/abs/1810.12473)')
    parser.add_argument('-i', '--input', dest='input', help='Path to input .npy file with MRI scan data.')
    parser.add_argument('-p', '--pattern', dest='pattern', help='Path to sampling mask in .npy format.')
    parser.add_argument('-m', '--model', dest='model', help='Path to .xml file of OpenVINO IR.')
    parser.add_argument('-l', '--cpu_extension', dest='cpu_extension', help='Path to extensions library with FFT implementation.')
    parser.add_argument('-d', '--device', dest='device', default='CPU',
                        help='Optional. Specify the target device to infer on; CPU, '
                             'GPU, GNA is acceptable. For non-CPU targets, '
                             'HETERO plugin is used with CPU fallbacks to FFT implementation. '
                             'Default value is CPU')
    args = parser.parse_args()

    xml_path = args.model
    assert xml_path.endswith('.xml')
    bin_path = xml_path[:xml_path.rfind('.xml')] + '.bin'

    ie = IECore()
    ie.add_extension(args.cpu_extension, "CPU")
    net = ie.read_network(xml_path, bin_path)

    device = 'CPU' if args.device == 'CPU' else ('HETERO:' + args.device + ',CPU')
    exec_net = ie.load_network(net, device)

    # Hybrid-CS-Model-MRI/Data/stats_fs_unet_norm_20.npy
    stats = np.array([2.20295299e-01, 1.11048916e+03, 4.16997984e+00, 4.71741395e+00], dtype=np.float32)
    # Hybrid-CS-Model-MRI/Data/sampling_mask_20perc.npy
    var_sampling_mask = np.load(args.pattern)  # TODO: can we generate it in runtime?
    print('Sampling ratio:', 1.0 - var_sampling_mask.sum() / var_sampling_mask.size)

    data = np.load(args.input)
    num_slices, height, width = data.shape[0], data.shape[1], data.shape[2]
    pred = np.zeros((num_slices, height, width), dtype=np.uint8)
    data /= np.sqrt(height * width)

    print('Compute...')
    start = time.time()
    for slice_id, kspace in enumerate(data):
        kspace = kspace.copy()

        # Apply sampling
        kspace[var_sampling_mask] = 0
        kspace = (kspace - stats[0]) / stats[1]

        # Forward through network
        input = np.expand_dims(kspace.transpose(2, 0, 1), axis=0)
        outputs = exec_net.infer(inputs={'input_1': input})
        output = next(iter(outputs.values()))
        output = output.reshape(height, width)

        # Save predictions
        pred[slice_id] = cv.normalize(output, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)

    print('Elapsed time: %.1f seconds' % (time.time() - start))

    WIN_NAME = 'MRI reconstruction with OpenVINO'
    slice_id = 0

    def callback(pos):
        global slice_id
        slice_id = pos

        kspace = data[slice_id]
        img = kspace_to_image(kspace)

        kspace[var_sampling_mask] = 0
        masked = kspace_to_image(kspace)

        rec = pred[slice_id]

        # Add a header
        border_size = 20
        render = cv.hconcat((img, masked, rec))
        render = cv.copyMakeBorder(render, border_size, 0, 0, 0, cv.BORDER_CONSTANT, value=255)
        cv.putText(render, 'Original', (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
        cv.putText(render, 'Sampled (PSNR %.1f)' % cv.PSNR(img, masked), (width, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
        cv.putText(render, 'Reconstructed (PSNR %.1f)' % cv.PSNR(img, rec), (width * 2, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)

        cv.imshow(WIN_NAME, render)
        cv.waitKey(1)

    cv.namedWindow(WIN_NAME, cv.WINDOW_NORMAL)
    print(num_slices)
    cv.createTrackbar('Slice', WIN_NAME, num_slices // 2, num_slices - 1, callback)
    callback(num_slices // 2)  # Trigger initial visualization
    cv.waitKey()


@@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Use the base ov::IAsyncInferRequest class to implement a custom asynchronous inference request in OpenVINO.
Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors depending on a device pipeline structure.
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class:


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn how to build a plugin using CMake and OpenVINO Developer Package.
OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.
OpenVINO Developer Package
@@ -9,7 +13,7 @@ OpenVINO Developer Package
To automatically generate the OpenVINO Developer Package, run the ``cmake`` tool during an OpenVINO build:
-.. code-block:: bash
+.. code-block:: sh
$ mkdir openvino-release-build
$ cd openvino-release-build
@@ -48,7 +52,7 @@ Build Plugin using OpenVINO Developer Package
To build a plugin source tree using the OpenVINO Developer Package, run the commands below:
-.. code-block:: bash
+.. code-block:: sh
$ mkdir template-plugin-release-build
$ cd template-plugin-release-build
@@ -72,7 +76,7 @@ To build a plugin and its tests, run the following CMake scripts:
The default values of the ``ENABLE_TESTS``, ``ENABLE_FUNCTIONAL_TESTS`` options are shared via the OpenVINO Developer Package and they are the same as for the main OpenVINO build tree. You can override them during plugin build using the command below:
-.. code-block:: bash
+.. code-block:: sh
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::CompiledModel class as the base class for a compiled
model and to create an arbitrary number of ov::InferRequest objects.
ov::CompiledModel class functionality:
* Compile an ov::Model instance to a backend specific graph representation


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::ISyncInferRequest interface as the base class to implement a synchronous inference request in OpenVINO.
``InferRequest`` class functionality:
* Allocate input and output tensors needed for a backend-dependent network inference.


@@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Develop and implement independent inference solutions for
different devices with the components of plugin architecture
of OpenVINO.
.. toctree::
:maxdepth: 1
:caption: Converting and Preparing Models
@@ -87,7 +93,7 @@ Detailed Guides
API References
##############
-* `OpenVINO Plugin API <https://docs.openvino.ai/nightly/groupov_dev_api.html>`__
-* `OpenVINO Transformation API <https://docs.openvino.ai/2022.3/groupie_transformation_api.html>`__
+* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
+* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Explore OpenVINO Plugin API, which includes functions and
helper classes that simplify the development of new plugins.
OpenVINO Plugin usually represents a wrapper around a backend. Backends can be:
* OpenCL-like backend (e.g. clDNN library) for GPU devices.


@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Use the openvino::funcSharedTests library, which includes
a predefined set of functional tests and utilities to verify a plugin.
OpenVINO tests infrastructure provides a predefined set of functional tests and utilities. They are used to verify a plugin using the OpenVINO public API.
All the tests are written in the `Google Test C++ framework <https://github.com/google/googletest>`__.


@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Use the ov::Property class to define access rights and
specific properties of an OpenVINO plugin.
A plugin can provide its own device-specific properties.
Property Class


@@ -3,6 +3,11 @@
@sphinxdirective
.. meta::
:description: Learn about the support for quantized models with different
precisions and the FakeQuantize operation used to express
quantization rules.
One of the features of OpenVINO is the support of quantized models with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized models which can be expressed in IR have a unified representation by means of *FakeQuantize* operation.
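The semantics of *FakeQuantize* can be sketched in a few lines of NumPy: clamp the input to the input range, snap it to one of ``levels`` evenly spaced grid points, then rescale into the output range. This is a simplified per-tensor sketch (the real operation also supports broadcastable per-channel ranges, and the helper name `fake_quantize_ref` is illustrative):

```python
import numpy as np

def fake_quantize_ref(x, in_low, in_high, out_low, out_high, levels):
    # Clamp to the input range, as the operation's spec prescribes
    x = np.clip(x, in_low, in_high)
    steps = levels - 1
    # Snap to one of 'levels' evenly spaced quantization grid points
    q = np.round((x - in_low) / (in_high - in_low) * steps)
    # Rescale the integer grid index into the output range
    return q / steps * (out_high - out_low) + out_low
```

With ``levels=256`` and identical input/output ranges this reproduces the usual INT8 quantize-dequantize pair, which is exactly the form a plugin later decomposes into real low-precision operations.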
@@ -53,8 +58,8 @@ Thus we can define:
Quantization specifics and restrictions
#######################################
-In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
-is considered the default way to get optimized models. Since the POT supports HW-aware quantization it means that specific rules can be implemented in it for
+In general, OpenVINO can represent and execute quantized models from different sources. However, the Neural Network Compression Framework (NNCF)
+is considered the default way to get optimized models. Since the NNCF supports HW-aware quantization it means that specific rules can be implemented in it for
the particular HW. However, it is reasonable to have compatibility with general-purpose HW such as CPU and GPU and support their quantization schemes.
Below we define these rules as follows:


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::RemoteContext class as the base class for a plugin-specific remote context.
ov::RemoteContext class functionality:
* Represents device-specific inference context.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Use the ov::IRemoteTensor interface as a base class for device-specific remote tensors.
ov::RemoteTensor class functionality:
* Provides an interface to work with device-specific memory.


@@ -2,6 +2,11 @@
@sphinxdirective
.. meta::
:description: Learn more about plugin development and specific features in
OpenVINO: precision transformations and support for quantized
models with different precisions.
.. toctree::
:maxdepth: 1
:hidden:


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about extra API references required for the development of
plugins in OpenVINO.
.. toctree::
:maxdepth: 1
:hidden:
@@ -9,9 +13,9 @@
../groupov_dev_api
../groupie_transformation_api
@endsphinxdirective
The guides below provide extra API references needed for OpenVINO plugin development:
-* [OpenVINO Plugin API](@ref ov_dev_api)
-* [OpenVINO Transformation API](@ref ie_transformation_api)
+* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
+* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Learn about AvgPoolPrecisionPreserved attribute used only during AvgPool operation.
:ref:`ngraph::AvgPoolPrecisionPreservedAttribute <doxid-classngraph_1_1_avg_pool_precision_preserved_attribute>` class represents the ``AvgPoolPrecisionPreserved`` attribute.
It is a utility attribute, used only during the ``AvgPool`` operation to define the precision preserved property.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about IntervalsAlignment attribute, which describes a subgraph with the same quantization intervals alignment.
:ref:`ngraph::IntervalsAlignmentAttribute <doxid-classngraph_1_1_intervals_alignment_attribute>` class represents the ``IntervalsAlignment`` attribute.
The attribute defines a subgraph with the same quantization intervals alignment. ``FakeQuantize`` operations are included. The attribute is used by quantization operations.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about PrecisionPreserved attribute, which describes a precision preserved operation.
:ref:`ngraph::PrecisionPreservedAttribute <doxid-classngraph_1_1_precision_preserved_attribute>` class represents the ``PrecisionPreserved`` attribute.
The attribute defines a precision preserved operation. If the attribute is absent, then an operation is not precision preserved.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about Precisions attribute, which describes the precision required for an input/output port or an operation.
:ref:`ngraph::PrecisionsAttribute <doxid-classngraph_1_1_precisions_attribute>` class represents the ``Precisions`` attribute.
The attribute defines the precision required for an input/output port or an operation.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about QuantizationAlignment attribute, which describes a subgraph with the same quantization alignment.
:ref:`ngraph::QuantizationAlignmentAttribute <doxid-classngraph_1_1_quantization_alignment_attribute>` class represents the ``QuantizationAlignment`` attribute.
The attribute defines a subgraph with the same quantization alignment. ``FakeQuantize`` operations are not included. The attribute is used by quantization operations.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about QuantizationGranularity attribute, which describes quantization granularity of operation inputs.
ngraph::QuantizationAttribute class represents the ``QuantizationGranularity`` attribute.
The attribute defines quantization granularity of operation inputs.


@@ -2,6 +2,9 @@
@sphinxdirective
.. meta::
:description: Learn about low precision transformations used to infer a quantized model in low precision with the maximum performance on Intel CPU, GPU, and ARM platforms.
.. toctree::
:maxdepth: 1
:caption: Low Precision Transformations
@@ -308,13 +311,13 @@ This step is optional. It modifies the nGraph function to a device-specific oper
Result model overview
#####################
-Let's explore quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model. Use `Model Downloader <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool to download the ``fp16`` model from `OpenVINO™ Toolkit - Open Model Zoo repository <https://github.com/openvinotoolkit/open_model_zoo>`__:
+Let's explore quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model. Use :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``fp16`` model from `OpenVINO™ Toolkit - Open Model Zoo repository <https://github.com/openvinotoolkit/open_model_zoo>`__:
.. code-block:: sh
omz_downloader --name resnet-50-tf --precisions FP16-INT8
-After that you should quantize model by the `Model Quantizer <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool.
+After that you should quantize model by the :doc:`Model Quantizer <omz_tools_downloader>` tool.
.. code-block:: sh
@@ -337,7 +340,7 @@ Results analysis
Result model depends on different factors:
-* The original model quantization possibility and quantization quality. For some models, some operations are not possible to be quantized by POT and NNCF tools. In this case ``FakeQuantize`` operations are absent before these operations and they will be inferred in original precision.
+* The original model quantization possibility and quantization quality. For some models, some operations are not possible to be quantized by NNCF tool. In this case ``FakeQuantize`` operations are absent before these operations and they will be inferred in original precision.
* LPT customization and plugin supported operations. If the plugin does not support INT8 inference for some operation, the corresponding LPT transformation should be disabled and the operation will be inferred in the original precision.


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Check the lists of attributes created or used by model transformations.
.. toctree::
:maxdepth: 1
:caption: Attributes


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about optional Prerequisites transformations, that
prepare a model before applying other low precision transformations.
Prerequisites transformations are optional. The transformations prepare a model before running other low precision transformations. The transformations do not operate with dequantization operations or update precisions. Prerequisites transformations include:
* :doc:`PullReshapeThroughDequantization <openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization>`


@@ -2,6 +2,10 @@
@sphinxdirective
.. meta::
:description: Learn about markup transformations, which are used to create
attributes for input and output ports and operations during runtime.
This step defines the optimal ``FakeQuantize`` decomposition precisions for the best inference performance via operations markup with runtime attribute instances. Attributes are created for input and output ports and operations. Transformations do not change the operation output port precisions. A model markup low precision logic is decomposed and implemented into the following common markup transformations. The order of transformations is important:
1. :doc:`MarkupBias <openvino_docs_OV_UG_lpt_MarkupBias>`


@@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: Learn about main transformations, which are mostly low
precision transformations that handle decomposition and
dequantization operations.
Main transformations are the majority of low precision transformations. Transformations operate with dequantization operations. Main transformations include:
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`

Some files were not shown because too many files have changed in this diff.