Compare commits

233 Commits

Author SHA1 Message Date
Karol Blaszczak
98a33ba770 [DOCS] 23.0 selector tool remove (#21315) 2023-11-27 15:17:44 +01:00
Sebastian Golebiewski
d0647322de Direct Github link to a specific notebook (#20358) 2023-10-10 15:45:18 +02:00
Alina Kladieva
9f2f5fc59f Bump product version to 2023.0.3 (#20147) 2023-09-29 12:43:45 +02:00
Yuan Hu
f80f99f5c5 Revert [Core] fix Memory Leak caused by create/inference request con… (#20050)
* Revert "[Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)"

This reverts commit b0394cc3e4.

* Install local wheel packages instead of PYPI ones (#19031)

* Try to use --no-index when install python packages

* Apply suggestions from code review

* Update .ci/azure/linux.yml

* Try to use conan.lock file (#19709)

* Fixed NCC style check (#20121)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
2023-09-28 19:39:23 +02:00
Sebastian Golebiewski
5127753183 Fix Issue 20097 - providing an easy to read Cmake command (#20130)
Porting: https://github.com/openvinotoolkit/openvino/pull/20126
2023-09-28 18:15:41 +04:00
bstankix
f4a3f0a223 [DOCS] Bugfix coveo sa-search url (#20056) 2023-09-26 15:15:29 +02:00
Ilya Lavrenov
319711fca5 Fixed issue 19784 (#19788)
* Fixed issue 19784

* Update .ci/azure/linux_debian.yml

replaced 'focal' with 'ubuntu20'
2023-09-13 14:59:55 +04:00
bstankix
8c31771e6c [DOCS] Port coveo search engine (#19751) 2023-09-11 12:40:01 +00:00
Maciej Smyk
29d01a5cbf img-fix (#19701) 2023-09-08 15:46:56 +02:00
Maciej Smyk
002537729b fix (#19697) 2023-09-08 13:33:44 +02:00
Ilya Lavrenov
088fe50dd9 Update versions in apt, yum installation docs (#19688) 2023-09-08 11:06:15 +02:00
Maciej Smyk
be021afc0b Update supported_model_formats.md (#19643) 2023-09-07 11:02:43 +02:00
Maciej Smyk
74ee34e925 Extend sphinx_sitemap to add custom metadata (#19641) 2023-09-07 09:17:07 +02:00
Maciej Smyk
c516a7279e update (#19621) 2023-09-07 08:38:50 +02:00
Maciej Smyk
8252d74662 [DOCS] contributing guidelines (#19623) 2023-09-06 16:18:46 +02:00
Alexander Suvorov
a3e746401e [DOCS] Update Selector Tool for 2023.0.2 2023-09-05 21:26:49 +02:00
Maciej Smyk
68db3844d3 [DOCS] Fixing Optimize Preprocessing in notebooks 120 and 230 for 23.0 2023-09-05 18:11:21 +02:00
Maciej Smyk
4eb71aa5e0 [DOCS] Link fix for 23.0 (#19592)
* 2023.0 link fix

* Update README.md
2023-09-05 10:06:13 +02:00
Karol Blaszczak
256a6d2572 [DOCS] 23.0.2 adjustment (#19604) 2023-09-05 09:47:01 +02:00
Ilya Lavrenov
6fbcb94e20 Fixed build with static protobuf for brew publishing (#19590) 2023-09-04 19:35:31 +04:00
Przemyslaw Wysocki
2ac63aea24 Backport Robust detection of Cython version (#19537) (#19547)
* Robust detection of Cython version (#19537)

* Aligned protobuf version in conanfile.txt with onnx recipe (#19525)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-04 17:03:14 +04:00
Sebastian Golebiewski
e8d44d4502 Adding Quantizing with Accuracy Control using NNCF notebook (#19588) 2023-09-04 14:56:47 +02:00
Maciej Smyk
816c2a24de [DOCS] Fix for Install from Docker Image for 23.0 (#19581)
* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-09-04 11:16:22 +02:00
Maciej Smyk
177aa10040 [DOCS] Torch.compile() documentation for 23.0 (#19542)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-04 08:38:52 +02:00
Sebastian Golebiewski
68dfd60057 add-253 (#19502) 2023-08-30 13:46:40 +02:00
Sebastian Golebiewski
fd519c711a improve-snippets (#19498)
Porting: https://github.com/openvinotoolkit/openvino/pull/19479
2023-08-30 13:11:03 +02:00
Przemyslaw Wysocki
3d17c656d1 Comment cmake check (#19491) 2023-08-30 12:56:18 +02:00
Maciej Smyk
baab44c4f4 [DOCS] Docker Guide Update for 23.0 (#19448)
* docker-update

* id fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-08-29 08:45:20 +02:00
Sebastian Golebiewski
d972830c9a update-notebooks (#19454)
Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
2023-08-28 15:03:53 +02:00
Karol Blaszczak
f322762818 [DOCS] speech sample deprecation port 23.0 2023-08-25 12:41:30 +02:00
Karol Blaszczak
c2da07a8e7 [DOCS] adjustment to supported devices port 23.0 2023-08-25 12:10:44 +02:00
Sebastian Golebiewski
c08f68f1e5 [DOCS] Updating MO documentation for 23.0 (#19373)
* restructure-mo-docs

* apply-commits-18214

Applying commits from:

https://github.com/openvinotoolkit/openvino/pull/18214

* update

* Apply suggestions from code review

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Apply suggestions from code review

* Update model_introduction.md

* Update docs/resources/tensorflow_frontend.md

* Create MO_Python_API.md

* Update Deep_Learning_Model_Optimizer_DevGuide.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
2023-08-23 19:24:15 +02:00
Sebastian Golebiewski
433d2f2750 CVS-113150 (#19370)
Porting:

https://github.com/openvinotoolkit/openvino/pull/18495
2023-08-23 18:28:04 +02:00
Sebastian Golebiewski
6b3863632d update-notebooks (#19339) 2023-08-22 15:37:51 +02:00
Sebastian Golebiewski
5509db87af link-to-frontend (#19333) 2023-08-22 12:54:27 +02:00
Artyom Anokhov
e662b1a330 Bump OV version to 2023.0.2 (#19329) 2023-08-22 11:02:36 +02:00
Sebastian Golebiewski
0aa5a8f704 port-19307 (#19310)
Porting: https://github.com/openvinotoolkit/openvino/pull/19307
Updating tutorials: adding table of contents and new notebooks.
2023-08-21 16:47:28 +02:00
Marcin Kusmierski
54f6f11186 [GNA] Switch GNA library to version 03.05.00.2116 (#19296)
Co-authored-by: Szymon Irzabek <szymon.jakub.irzabek@intel.com>
2023-08-21 14:50:15 +02:00
hyunback kim
ea482d8391 [GPU] Do not select onednn format for asymmetric weight (#19140) (#19265) 2023-08-21 11:30:33 +04:00
Karol Blaszczak
a93f320a48 Update prerelease_information.md (#19283) 2023-08-18 20:00:49 +02:00
Karol Blaszczak
26e9c69440 [DOCS] pre-releasenotes 23.1 Aug (#19271) 2023-08-18 17:43:11 +02:00
Marcin Kusmierski
4727efdb3c [GNA] Fix memory leak in GNA plugin. (#19257)
* Disabled transformation introducing memory leak.
2023-08-18 13:39:22 +02:00
Sergey Shlyapnikov
b7415f5c3b [GPU] Prevent Conv's input data type changing at reorder_inputs pass (#19042) (#19245) 2023-08-17 17:57:14 +04:00
Sebastian Golebiewski
0262662050 add-slash (#19243) 2023-08-17 11:23:37 +04:00
Maciej Smyk
576b99fee9 [DOCS] Removal of redundant files for 23.0 2023-08-16 13:22:43 +02:00
bstankix
4e790d7b46 [DOCS] Fix parameter name in design-tabs (#19212) 2023-08-16 07:40:39 +00:00
Yuan Hu
b0394cc3e4 [Core] fix Memory Leak caused by create/inference request consequently in separate thread (#18868) (#19191)
Signed-off-by: HU Yuan2 <yuan2.hu@intel.com>
2023-08-15 10:55:36 +04:00
bstankix
18cb7c94c1 [DOCS] Add state retention to design-tabs (#19180) 2023-08-14 13:46:50 +00:00
Wanglei Shen
064364eb5e Support Win7 in cpu information parser (#19110) 2023-08-11 09:54:51 +04:00
Shuangji Yang
5ded6fb699 fix bug on conversion of gather to squeeze (#19094) 2023-08-10 22:25:59 +04:00
Maciej Smyk
eabf199c3a Adds Python wheel requirements info to docs (#19125) 2023-08-10 21:50:10 +04:00
Sebastian Golebiewski
0e0d166746 add-numpy (#19128) 2023-08-10 21:39:44 +04:00
Stefania Hergane
a6351294e7 [EISW-89820] [releases/2023/0] Rename VPUX to NPU (#19002)
* Change `VPUX` occurrences to `NPU`

* Change library for `NPU` device in `api_conformance_helpers.hpp`

* Rename `MYRIAD plugin`

* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU

* Rename DEVICE_KEEMBAY to DEVICE_NPU

* Rename VPUX_DEVICE_NAME to NPU_DEVICE_NAME

* Rename vpu_patterns to npu_patterns

* Change VPUX occurrences to NPU after review

* Remove VPUX device comment

* Change VPUX/vpu to NPU in tests/time_tests

* Rename VPU to NPU in docs after review

* Rename VPU to NPU in tools/pot after review

* Renamed vpu.json to npu.json in tools/pot after review

* Restore CommonTestUtils::DEVICE_KEEMBAY

---------

Co-authored-by: MirceaDan99 <mircea-aurelian.dan@intel.com>
2023-08-09 00:19:25 +04:00
Maciej Smyk
cac7e2e1c4 [DOCS] Change sample structure for 23.0 (#19058) 2023-08-08 14:18:48 +00:00
Karol Blaszczak
13e674b1f8 Docs installation guide restructuring port (#19054) 2023-08-08 16:11:51 +02:00
Maciej Smyk
a55d1c21ee [DOCS] Basic quantization flow additions for 23.0 (#19059) 2023-08-08 15:59:47 +02:00
Marcin Kacprzak
91a4f73971 * [GNA] Fix for GeminiLake detection (#18653) (#18994)
* [GNA] Added HWGeneration::GNA_1_0_E enumerator
* [GNA] Extended a few tests with GNA1.0

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
2023-08-08 00:01:13 +04:00
Przemyslaw Wysocki
84a3aab115 [PyOV] Backport of wheel building fix (#19013)
* Add upper bound

* backport flake fix

* Support of protobuf >= 21 (#18351)

* Corrected typo

* Ability to compile with newer protobuf versions

* Limit numpy (#18406)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)" (#18681)

* Revert "[PyOV] Pin version of Cython for API 1.0 (#18604)"

This reverts commit 787796d88f.

* Suppressed clang warning

* Restrict scipy module version for POT (#18237)

* Restrict scipy module version for POT

Latest release https://pypi.org/project/scipy/1.11.0 causes dependency conflicts

* Bump OMZ to include scipy restriction

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-07 18:23:05 +04:00
Tatiana Savina
4ddeecc031 delete gna content (#19000) 2023-08-07 10:03:32 +02:00
Maciej Smyk
9c10e33fc7 Update model_optimization_guide.md (#18954) 2023-08-04 14:49:52 +02:00
Karol Blaszczak
c32b9a0cd5 [DOCS] fix ext directive 23.0 (#18989) 2023-08-04 12:23:34 +02:00
Karol Blaszczak
c32eef361b [DOCS] update prereleasenotes (#18958) 2023-08-03 13:13:01 +02:00
Sebastian Golebiewski
8d54bdd4d5 [DOCS] Compile CPU plugin for ARM platforms - for 23.0 (#18765)
* Update build_raspbian.md

* update-instructions

* remove-cross-compilation

* Update build_raspbian.md
2023-08-01 11:19:30 +02:00
Karol Blaszczak
64395f0d5e [DOCS] new benchmark data port 23.0 (#18873)
[DOCS] new benchmark data (#18532)
2023-08-01 08:36:26 +02:00
bstankix
9562161f76 Bugfix newsletter and footer scripts (#18854) 2023-07-28 16:12:37 +02:00
Maciej Smyk
cb59f057a0 [DOCS] Link fix for Get Started for 23.0 (#18799)
* Update get_started.md

* Update get_started.md
2023-07-26 12:25:55 +02:00
bstankix
28948502a9 [DOCS] Port newsletter and carousel changes from nightly (#18780) 2023-07-25 12:24:40 +00:00
Maciej Smyk
34748ae3b5 background fix for images (#18758) 2023-07-25 07:59:57 +02:00
bstankix
06eb4afd41 Port changes from nightly (#18743) 2023-07-24 12:00:22 +00:00
Karol Blaszczak
967d74ade6 [DOCS] conformance update port (#18735)
port: https://github.com/openvinotoolkit/openvino/pull/18732

conformance table
fix Add in ONNX layers
2023-07-24 11:44:23 +02:00
Sebastian Golebiewski
5ae4e2bb2d update (#18623) 2023-07-20 14:31:42 +02:00
Karol Blaszczak
22f6a3bcc0 [DOCS] minor MO fixes (#18606) 2023-07-18 16:22:04 +02:00
Sebastian Golebiewski
e842453865 realignment (#18621) 2023-07-18 16:03:42 +02:00
Maciej Smyk
2abbec386f Update configurations-for-intel-gpu.md (#18610) 2023-07-18 12:35:43 +02:00
Sebastian Golebiewski
afb2ebcdd4 [DOCS] Updating Interactive Tutorials for 23.0 (#18556)
* update-notebooks

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

* Update docs/nbdoc/nbdoc.py

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>

---------

Co-authored-by: bstankix <bartoszx.stankiewicz@intel.com>
2023-07-14 14:07:40 +02:00
Karol Blaszczak
83e45c5ff3 [DOCS] GNA disclaimer port (#18507)
port: https://github.com/openvinotoolkit/openvino/pull/18431
2023-07-12 12:40:28 +02:00
Maciej Smyk
bdb6a44942 [DOCS] Code block update for 23.0 (#18451)
* code-block-1

* Update Convert_Model_From_Paddle.md

* code-block force

* fix

* fix-2

* Update troubleshooting-steps.md

* code-block-2

* Update README.md
2023-07-11 10:50:03 +02:00
Maciej Smyk
17cd26077a Update installing-openvino-docker-linux.md (#18459) 2023-07-11 08:34:12 +02:00
Maciej Smyk
247eb8a9b9 [DOCS] Tab reorder for 23.0 (#18389)
* tabs-1

* Update configure_devices.md

* tab-2

* tab-order

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-linux.md

* win-linux-fix

* Update GPU_Extensibility.md
2023-07-07 14:31:05 +02:00
bstankix
68b8748c9f [DOCS] Add global footer
port: https://github.com/openvinotoolkit/openvino/pull/18374
2023-07-06 08:25:46 +02:00
Sebastian Golebiewski
852efa2269 [DOCS] Fix references in installation guide for 23.0 (#18384) 2023-07-06 08:04:42 +02:00
Karol Blaszczak
303fb7a121 [DOCS] menu bug fix 23.0 (#18353) 2023-07-04 07:59:17 +00:00
Tatiana Savina
7f1c6c8ce1 update links to rn (#18338) 2023-07-03 19:03:51 +02:00
Sebastian Golebiewski
55530b47c0 [DOCS] Adding metadata to articles for 2023.0 (#18332)
* adding-metadata

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-07-03 15:00:28 +02:00
Karol Blaszczak
69a6097a30 [DOCS] update for install 23.0.1 (#18335) 2023-07-03 14:57:57 +02:00
Karol Blaszczak
1f759456d6 [DOCS] Update Selector Tool 2023.0.1 (#18336)
authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-07-03 14:56:59 +02:00
Karol Blaszczak
b05a7f2ed6 [DOCS] adjustments for ST and cookie policy (#18316) 2023-07-03 08:47:00 +02:00
Tatiana Savina
f4709ffe8b [DOCS] Port docs to release branch (#18317)
* [DOCS] Local distribution page improvements  (#18049)

* add slider with os specific libs

* doc review

* local distrib doc changes

* [DOCS] Added local distribution libraries path (#18191)

* add relative path to the table

* add another column

* new table format

* fix build issue

* fix tab name

* remove old table

* format fixes

* change font

* change path windows

* change tabset name

* add arm and 86_64 tables

* remove list dots

* [DOCS] Add FrontEnd API note (#18154)

* add note

* fix typo

* add advance cases note

* tf doc note

* wording change
2023-06-30 15:34:22 +02:00
Karol Blaszczak
bb1e353e58 [DOCS] supported models page update (#18298) 2023-06-29 15:23:18 +02:00
Karol Blaszczak
99c7bbc25e [DOCS] port selector tool to 23.0 (#18295)
port:
https://github.com/openvinotoolkit/openvino/pull/17799
https://github.com/openvinotoolkit/openvino/pull/18286

authored-by: Alexander Suvorov <alexander.suvorov@intel.com>
2023-06-29 14:36:59 +02:00
Maciej Smyk
33cfcb26fb [DOCS] WSL2 Docker update for 23.0 (#18293)
* windows-fix

* Update installing-openvino-docker-linux.md

* docker fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-06-29 13:26:25 +02:00
Artyom Anokhov
39c84e03f7 Updated 3lvl domain for RPMs from fedoraproject.org (#18289) 2023-06-29 11:47:46 +02:00
Karol Blaszczak
f59126dde0 [DOCS] reset the pre-release notes pages port 23 (#18276)
port: https://github.com/openvinotoolkit/openvino/pull/18177
2023-06-29 08:17:02 +02:00
Karol Blaszczak
209d506341 [DOCS] top bar fixes port 23.0
port: #18261

FAQ for pot gets drop-downs
Homepage css improvement
2023-06-28 14:13:38 +02:00
bstankix
a710adf81a Add sitemap configuration (#18271) 2023-06-28 13:50:42 +02:00
Artyom Anokhov
fa1c41994f Bump version to 2023.0.1. Updated conflicted version for APT/YUM (#18268) 2023-06-28 13:36:31 +02:00
bstankix
caae459f54 Change html_baseurl to canonical (#18253) 2023-06-27 10:25:37 +02:00
Karol Blaszczak
7ef5cbff30 [DOCS] benchmark update for OVMS 23.0 (#18010) (#18250) 2023-06-27 09:16:16 +02:00
Maciej Smyk
85956dfa4d [DOCS] Debugging Auto-Device Plugin rst shift + Notebooks installation id align for 23.0 (#18241)
* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* notebooks id fix

* fixes

* Update AutoPlugin_Debugging.md
2023-06-27 08:15:51 +02:00
Maciej Smyk
2d98cbed74 [DOCS] Table directive update + Get Started fix for 23.0 (#18217)
* Update notebooks-installation.md

* Update notebooks-installation.md

* Update performance_benchmarks.md

* Update openvino_ecosystem.md

* Update get_started_demos.md

* Update installing-model-dev-tools.md

* Update installing-model-dev-tools.md

* Update installing-openvino-brew.md

* Update installing-openvino-conda.md

* fix

* Update installing-openvino-apt.md

* Update installing-openvino-apt.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-windows.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-windows.md

* tabs

* fixes

* fixes2

* Update GPU_RemoteTensor_API.md

* fixes

* fixes

* Get started fix
2023-06-26 10:27:12 +02:00
Sebastian Golebiewski
5d47cedcc9 updating-tutorials (#18213) 2023-06-26 09:42:21 +02:00
Ilya Lavrenov
9ab5a8f5d9 Added cmake_policy call to allow IN_LIST in if() (#18226) 2023-06-24 22:51:54 +04:00
Maciej Smyk
ad84dc6205 [DOCS] Docker and GPU update for 23.0 (#17851)
* docker gpu update

* ref fix

* Update installing-openvino-docker-linux.md

* fixes

* Update DeviceDriverVersion.svg

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* fixes from review

* Update configurations-for-intel-gpu.md

* Update configurations-for-intel-gpu.md

* Update deployment_migration.md

---------

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>
2023-06-22 15:47:59 +02:00
bstankix
bd3e4347dd [DOCS] gsearch 2023-06-22 13:08:00 +02:00
Tatiana Savina
0adf0e27ee [DOCS] Port docs fixes (#18155)
* change classification notebook (#18037)

* add python block (#18085)
2023-06-21 11:47:37 +02:00
Tatiana Savina
cb7cab1886 [DOCS] shift to rst - opsets F,G (#17253) (#18152)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-06-20 14:22:13 +02:00
Georgy Krivoruchko
fd48b0bbdc [TF FE] Workaround for Broadcast/Concat issue with empty tensors (#18140)
* Added transformation for Concat
* Added test
* CI fix
* Fixed behavior of the "empty tensor list" test
2023-06-20 13:13:55 +04:00
Mateusz Bencer
691630b68c [PORT TO 23.0][ONNX FE] Allow to mix new and legacy extensions (#18116)
* [ONNX FE] Allow to mix new and legacy extensions

* added unit test

* Update op_extension.cpp

Fixed compilation with Conan

Ported https://github.com/openvinotoolkit/openvino/pull/18126

* Update op_extension.cpp

Fixed code style

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-17 13:17:37 +00:00
Nikita Malinin
205feb9421 ENABLE_MMAP property pos (#17896) (#18106)
(cherry picked from commit 29f06692d6)
2023-06-17 12:48:47 +04:00
Georgy Krivoruchko
5ef750d5b3 Fixed Windows behavior if folder path on input (#18113) 2023-06-16 16:39:33 +00:00
Zlobin Vladimir
80fddfe1c2 Update open_model_zoo submodule (#18110)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3790
2023-06-16 18:19:07 +04:00
Maciej Smyk
7eb59527a0 [DOCS] CMake options description in build guide for 23.0 2023-06-16 10:37:53 +02:00
Sebastian Golebiewski
21fdda5609 [DOCS] Restyling tabs for 23.0
Porting: #18054

Introducing changes in css style for tabs from sphinx-design extension.
2023-06-14 11:43:10 +00:00
Tatiana Savina
9983f74dc7 fix link (#18050) 2023-06-14 08:54:11 +00:00
Maciej Smyk
ef0b8161c9 Update build_linux.md (#18046) 2023-06-14 10:45:16 +02:00
Sebastian Golebiewski
9e2dacbc53 [DOCS] Restyling elements on home page - for 23.0 2023-06-13 08:50:20 +02:00
Sebastian Golebiewski
d299be4202 [DOCS] Fixing formatting issues in articles - for 23.0 (#18004)
* fixing-formatting
2023-06-13 07:59:16 +02:00
Tatiana Savina
99fe2e9bdc add tabs (#18007) 2023-06-12 16:45:34 +02:00
Karol Blaszczak
6668ec39d7 [DOCS] Adding Datumaro document into OV Ecosystems (#17944) (#17968)
* add Datumaro document
* add datumaro into toctree

authored-by: Wonju Lee <wonju.lee@intel.com>
2023-06-09 13:22:43 +02:00
Maciej Smyk
1e5dced9d4 Update build_linux.md (#17967) 2023-06-09 15:03:45 +04:00
Zlobin Vladimir
7d73bae243 Update open_model_zoo submodule (#17902)
* Update open_model_zoo submodule

Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3779

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-09 13:52:38 +04:00
Sebastian Golebiewski
d8d4fb9c94 [DOCS] Fixing broken code blocks for 23.0 (#17960)
* code-blocks-fixes
2023-06-09 09:08:27 +02:00
Ilya Lavrenov
11cde296b7 Updated refs to dependency repositories (#17953) 2023-06-08 20:14:48 +04:00
Ilya Lavrenov
44f8dac403 Align tabs in install archives Linux (#17947) (#17950) 2023-06-08 14:49:28 +02:00
Tatiana Savina
41b4fd1057 add enter dir (#17897) 2023-06-08 13:08:41 +02:00
Sebastian Golebiewski
0f89782489 update-deployment-manager (#17904) 2023-06-08 10:31:50 +04:00
Tatiana Savina
d894716fad [DOCS] Add sudo to uninstall (#17929)
* add sudo to uninstall

* Update uninstalling-openvino.md
2023-06-07 18:18:12 +02:00
Tatiana Savina
f6fd84d2e1 fix archive link (#17918) 2023-06-07 09:19:05 +00:00
Tatiana Savina
648b2ad308 [DOCS] Model optimization paragraph fix (#17907)
* fix mo guide paragraph

* fix format

* fix paragraph

* remove extra line
2023-06-07 10:45:01 +02:00
Tatiana Savina
ea5c1b04e5 [DOCS] Fix list and links to POT (#17887)
* change link to POT

* change header label

* fix typo
2023-06-06 10:59:05 +02:00
Karol Blaszczak
f3d88cbf99 DOCS post-release adjustments (#17876) 2023-06-05 15:43:45 +02:00
Tatiana Savina
e824e482b1 fix apt and yum links (#17877) 2023-06-05 13:11:21 +03:00
Sebastian Golebiewski
e4d0021e2c update-diagram (#17872) 2023-06-05 08:17:26 +02:00
Artyom Anokhov
e74cb4084d [docs] Conda update (#17861)
* Adding installing OV via Conda-Forge for MacOS

* Adding section Compiling with OpenVINO™ Runtime from Conda-Forge

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* installing-openvino-conda: Fixed title

* installing-openvino-macos-header: Fixed order for links

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-06-02 21:54:11 +04:00
Tatiana Savina
e843e357cd change notebooks links (#17857) 2023-06-02 13:14:26 +02:00
Tatiana Savina
ecc502733d [DOCS] Change downloads directory link (#17846)
* installation link

* fix path
2023-06-01 19:04:02 +04:00
bstankix
d1de793552 Port default platform type selection from nightly (#17845) 2023-06-01 15:46:22 +02:00
Karol Blaszczak
ebaf6a2fcb DOCS homepage update (#17843) 2023-06-01 15:16:20 +02:00
Anton Voronov
88b006bce9 [DOC] cpu documentation fixes (#17816)
* [DOC] cpu documentation fixes

* fixed typos
2023-06-01 10:17:48 +02:00
Ilya Lavrenov
4aae068125 Update archive names for 2023.0 release (#17831) 2023-06-01 12:07:06 +04:00
Ilya Lavrenov
41c37c8af9 Updated badges for 2023.0 (#17832) 2023-06-01 12:03:20 +04:00
Sebastian Golebiewski
f40f0fa58b [DOCS] convert_model() as a default conversion path - for 23.0 (#17751)
Porting: https://github.com/openvinotoolkit/openvino/pull/17454

Updating MO documentation to make convert_model() a default conversion path.
2023-05-31 19:22:54 +02:00
Tatiana Savina
20dc436b6f DOCS Fix build links (#17821)
* change doc vers

* fix links
2023-05-31 17:45:57 +02:00
Sebastian Golebiewski
b2b7a57a4c update-tutorials (#17812) 2023-05-31 15:48:08 +02:00
Tatiana Savina
4481bfa17e [DOCS] Review release docs (#17793)
* review docs

* fix link to notebook

* fix build

* fix links

* remove bracket
2023-05-31 15:46:53 +02:00
Sebastian Golebiewski
366a5467d1 [DOCS] benchmark 23.0 update - port from master (#17806)
Porting: #17789

new benchmarking data
2023-05-31 12:58:49 +02:00
Karol Blaszczak
4be1dddb21 DOCS operation support articles update (#17449) (#17809)
port: #17449

conformance table added
ARM merged with CPU
precision support and layout tables removed from the overview device article (info available in device articles)
2023-05-31 10:56:25 +00:00
Ilya Lavrenov
3fd9b8c3b7 Updated install docs for 2023.0 (#17764) 2023-05-31 13:37:30 +04:00
Maciej Smyk
66528622a8 [DOCS] Link adjustment for dev docs + fix to build.md CPU link for 23.0 (#17747)
Port from #17744

JIRA Ticket: 110042

Update of hardcoded links to switch references from latest, nightly and 2022.3 (and earlier) to 2023.0.

JIRA Ticket: 111393

Fix for the Mac (Intel CPU) link name (it should be Intel CPU instead of Intel GPU).
2023-05-31 11:34:22 +02:00
Tatiana Savina
4fb2cebf28 [DOCS] Compile tool docs port (#17753)
* [DOCS] Compile tool docs change (#17460)

* add compile tool description

* change refs

* remove page to build docs

* doc reference fix

* review comments

* fix comment

* snippet comment

* Update docs/snippets/compile_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* change snippet name

* create ov object

* code block fix

* cpp code block

* include change

* code test

* change snippet

* Update docs/snippets/export_compiled_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fixed compile_tool install (#17666)

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-31 13:27:01 +04:00
Wanglei Shen
c18a24c05b [DOC] Add multi threading for 2023.0 release in CPU plugin document (#17788) 2023-05-31 12:54:42 +04:00
Anton Voronov
95f0005793 [DOC][CPU] Documentation update (#17786) 2023-05-31 10:37:14 +04:00
bstankix
9ac239de75 Port ability to build notebooks from local files from nightly (#17798) 2023-05-30 16:27:11 +02:00
Tatiana Savina
ad5c0808a6 add pad1 (#17760) 2023-05-30 10:39:05 +02:00
Tatiana Savina
66c6e125cf [DOCS] Port workflow docs (#17761)
* [DOCS]  Deploy and run documentation sections (#17708)

* first draft

* change name

* restructure

* workflow headers change

* change note

* remove deployment guide

* change deployment description

* fix conflicts

* clean up conflict fixes
2023-05-30 10:38:47 +02:00
Maciej Smyk
53bfc41a74 [DOCS] Configuring devices article update for 2023.0 (#17757)
* Update configure_devices.md
2023-05-29 09:02:25 +02:00
Karol Blaszczak
9b72c33039 [DOCS] install-guide fix port to 23.0 (#17672) 2023-05-25 18:39:03 +02:00
Zlobin Vladimir
c0e9e1b1a1 Update open_model_zoo submodule (#17733)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3770

Ticket: 110042
2023-05-25 15:56:45 +00:00
Daria Mityagina
720e283ff1 Update comments and help text (#17710) 2023-05-24 22:12:27 +02:00
Tatyana Raguzova
0e87a28791 [build_samples] Using make instead of cmake (#17560) 2023-05-24 22:43:42 +04:00
Ilya Lavrenov
6d17bbb7e9 Conan port (#17625) 2023-05-24 22:07:50 +04:00
Maxim Vafin
cebbfe65ac [DOCS] Add examples of using named outputs in extensions (#17622)
* [DOCS]Add examples of using named outputs in extensions

* Fix opset

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Extensibility_UG/frontend_extensions.md

* Add reference to external docs

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-05-24 14:15:01 +02:00
Aleksandr Voron
c4c6567182 [DOCS][CPU] Update ARM CPU plugin documentation (#17700) 2023-05-24 15:42:32 +04:00
Karol Blaszczak
1a9ce16dd6 [DOCS] framework deprecation notice (#17484) (#17537)
Port: #17484
A new PR will be created with more changes, as suggested by jane-intel and slyalin. The "deprecated" label for articles and  additional content on converting models to ONNX will be covered then.
2023-05-24 11:53:56 +02:00
Maciej Smyk
4e8d5f3798 [DOCS] link fix (#17658) 2023-05-23 07:31:19 +02:00
Przemyslaw Wysocki
7351859ec2 limit linter (#17624) 2023-05-22 23:58:06 +04:00
Ilya Lavrenov
405c5ea03a Install libtbb12 on U22 (#17653) 2023-05-22 17:52:59 +04:00
Sebastian Golebiewski
183253e834 [DOCS] Update Interactive Tutorials - for 23.0 (#17600)
port: https://github.com/openvinotoolkit/openvino/pull/17598/
2023-05-22 14:46:14 +02:00
Maciej Smyk
cfea37b139 [DOCS] RST fixes for 23.0 (#17606)
* fixes
2023-05-22 10:33:32 +02:00
Tatiana Savina
34f00bd173 DOCS Update optimization docs with NNCF PTQ changes and deprecation of POT (#17398) (#17633)
* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update home.rst

* Update ptq_introduction.md

* Update Introduction.md

* Update Introduction.md

* Update Introduction.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update ptq_introduction.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update Introduction.md

* Update FrequentlyAskedQuestions.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* added code snippet (#1)

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Delete ptq_introduction.md

* Update FrequentlyAskedQuestions.md

* Update Introduction.md

* Update quantization_w_accuracy_control.md

* Update introduction.md

* Update basic_quantization_flow.md code blocks

* Update quantization_w_accuracy_control.md code snippets

* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update model_optimization_guide.md

* Optimization docs proofreading  (#2)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update images (#3)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* update images

* resolve conflicts

* resolve conflicts

* change images to original ones

* resolve conflicts

* update images

* fix conflicts

* Update model_optimization_guide.md

* Update docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_onnx.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_aa_openvino.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_openvino.py



* table format fix

* Update headers

* Update qat.md code blocks

---------

Co-authored-by: Maksim Proshin <maksim.proshin@intel.com>
Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>
2023-05-19 15:37:41 +00:00
Tatiana Savina
17326abb72 [MO][TF FE] Document freezing as essential step for pruning SM format (#17595) (#17632)
* [MO][TF FE] Document freezing as essential step for pruning SM format



* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md



---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-19 15:32:57 +00:00
Ilya Lavrenov
8601042bea Added python 3.11 for deployment tool (#17627) 2023-05-19 18:08:49 +04:00
Artyom Anokhov
39958e0dc1 Updated APT/YUM instructions with actual version. Added instructions for Ubuntu22. Updated subfolders naming for APT. (#17561) 2023-05-19 12:46:40 +02:00
Maciej Smyk
6fc9840e32 [DOCS] Link adjustment for 23.0 (#17604) 2023-05-18 15:10:13 +02:00
Ekaterina Aidova
b4452d5630 update OMZ submodule to fix bug (#17570) 2023-05-17 05:51:59 -07:00
Evgenya Stepyreva
4c69552656 Normalize_L2 relax constant input restriction (#17567)
* Normalize_L2 relax constant input restriction

* Fix warning treated as error during windows build
2023-05-17 12:37:02 +00:00
Maciej Smyk
6d8b3405ca [DOCS] Precision Control article for 23.0 (#17573)
Port from: https://github.com/openvinotoolkit/openvino/pull/17413

Added separate article on Precision Control (ov::hint::execution_mode and ov::inference_precision properties)
2023-05-17 11:21:05 +00:00
Evgenya Stepyreva
4c2096ad9c Strided Slice fix constant creation (#17557)
* Strided Slice fix constant creation

* Apply suggestions from code review

* Final touches
2023-05-16 13:53:57 +00:00
Aleksandr Voron
0c67b90f47 [CPU][ARM] Dynamic shapes support in ARM transformations (#17517) 2023-05-16 13:10:34 +04:00
Jan Iwaszkiewicz
83f51e0d00 [PyOV][Backport] Remove numpy strides from Tensor creation (#17535)
* [PyOV] Remove numpy strides from Tensor creation

* [PyOV] Add test for stride calculation

* [PyOV] Fix flake issue
2023-05-16 09:04:56 +04:00
Dmitry Kurtaev
8bb2a2a789 [CMake] Add CMAKE_MAKE_PROGRAM arg (#17340)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-15 17:22:27 +04:00
Pawel Raasz
c9cfd6755c [Core] StridedSlice improvements of bound evaluation and constant folding (#17536)
* StridedSlice improvements:
-Bound evaluation for begin, end partial values when ignore mask set.
- Custom constant fold implementation.

* Improve const folding when all begin or end values
are ignored
2023-05-15 12:24:36 +00:00
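The bound evaluation described in this commit relies on StridedSlice mask semantics: when a begin/end mask bit is set, the corresponding begin/end value is ignored and the axis boundary is used instead, so the bounds can be resolved even when those inputs are only partially known. A minimal pure-Python sketch of that resolution for one axis (the function name and positive-stride-only scope are illustrative, not the actual OpenVINO core code):

```python
def resolve_slice_bounds(dim, begin, end, begin_mask, end_mask, stride=1):
    """Resolve the effective [begin, end) range for one StridedSlice axis.
    A set mask bit means "ignore the provided value and take the axis
    boundary". Illustrative sketch; handles positive strides only."""
    # Normalize negative indices, then clamp to the axis extent.
    b = 0 if begin_mask else max(0, begin + dim if begin < 0 else begin)
    e = dim if end_mask else min(dim, end + dim if end < 0 else end)
    return b, e


print(resolve_slice_bounds(10, begin=3, end=7, begin_mask=True, end_mask=False))  # (0, 7)
print(resolve_slice_bounds(10, begin=3, end=7, begin_mask=False, end_mask=True))  # (3, 10)
```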
Karol Blaszczak
c0060aefa7 Prepare "memory_optimization_guide.md" (#17022) (#17498)
---------

Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
2023-05-15 10:48:32 +02:00
Gorokhov Dmitriy
8a97d3c0e1 [CPU] Restore OneDNN InnerProduct primitive os_block computation behavior (#17462) 2023-05-12 15:50:10 +04:00
Wanglei Shen
c5fd3300a2 HOT FIX: disable set_cpu_used in 2023.0 release (#17456)
* disable set_cpu_used in 2023.0 release

* fix code style issue
2023-05-12 14:16:42 +08:00
Mateusz Mikolajczyk
a7f6f5292e Add missing check for special zero (#17479) 2023-05-12 09:30:55 +04:00
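For context on the "special zero" being checked: Reshape's `special_zero` attribute makes a 0 in the target shape copy the corresponding input dimension, while -1 marks one dimension to be inferred. A hedged shape-inference sketch of those rules (illustrative helper, not the actual shape-inference code):

```python
def infer_reshape_shape(input_shape, target_shape, special_zero=True):
    """Resolve 0 (copy input dim) and -1 (inferred dim) entries of a
    Reshape target shape. Illustrative sketch of the operator semantics."""
    out = []
    for i, d in enumerate(target_shape):
        if d == 0 and special_zero:
            if i >= len(input_shape):
                raise ValueError("special zero has no matching input dimension")
            out.append(input_shape[i])  # copy the input dimension
        else:
            out.append(d)
    if -1 in out:
        # The -1 dimension absorbs whatever element count remains.
        known = 1
        for d in out:
            if d != -1:
                known *= d
        total = 1
        for d in input_shape:
            total *= d
        out[out.index(-1)] = total // known
    return out


print(infer_reshape_shape([2, 3, 4], [0, -1]))  # -> [2, 12]
```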
Maxim Vafin
804df84f7d Add transformation to convert adaptive pool to reduce (#17478)
* Add transformation to convert adaptive pool to reduce

* Update src/common/transformations/src/transformations/common_optimizations/moc_transformations.cpp

* Add tests and apply feedback

* Simplify if branches
2023-05-11 15:51:26 +00:00
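The equivalence such a transformation can exploit: adaptive average pooling with an output size of 1 is exactly a mean reduction over the spatial axis. A naive 1D sketch under that assumption (function names are illustrative, not the transformation's API):

```python
def adaptive_avg_pool_1d(values, output_size):
    """Naive adaptive average pooling over one spatial axis: each output
    element averages its (possibly uneven) input window."""
    n = len(values)
    out = []
    for i in range(output_size):
        start = (i * n) // output_size
        end = ((i + 1) * n + output_size - 1) // output_size  # ceil division
        window = values[start:end]
        out.append(sum(window) / len(window))
    return out


def reduce_mean(values):
    return sum(values) / len(values)


data = [1.0, 2.0, 3.0, 4.0]
# With output_size=1 the adaptive pool collapses to a plain mean reduction.
print(adaptive_avg_pool_1d(data, 1))  # [2.5]
print(adaptive_avg_pool_1d(data, 2))  # [1.5, 3.5]
```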
Evgenya Stepyreva
1e49a594f7 [Shape inference] Pooling: Dimension div fix (#17197) (#17471)
* Dimension div fix

* codestyle fixes

* Convolution labels propagation test instances corrected

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-05-11 14:36:17 +04:00
Tatiana Savina
d5ac1c2e5c [DOCS] Port update docs for TF FE (#17464)
* [TF FE] Update docs for TF FE (#17453)

* Update tensorflow_frontend.md

* Update docs/resources/tensorflow_frontend.md

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-11 01:03:47 +04:00
Karol Blaszczak
afb2ae6b7a [DOCS] Update GPU.md (#17400) 2023-05-10 17:30:22 +02:00
Maxim Vafin
c5623b71cf Remove possibility to export to onnx (#17423)
* Remove possibility to export to onnx

* Apply suggestions from code review

* Fix tests and docs

* Workaround function inputs

* Fix code style
2023-05-10 16:35:54 +04:00
Maxim Vafin
152b11e77f Remove section about --use_legacy_frontend for PyTorch models (#17441) 2023-05-09 20:30:27 +04:00
Mateusz Tabaka
5adf3b5ca8 [TF frontend] use InterpolateMode::LINEAR_ONNX if input rank is 4 (#17406)
This change mimics the LinearToLinearONNXReplacer transformation in the
legacy frontend, where the linear interpolate mode is replaced with
linear_onnx for performance reasons.

Ticket: CVS-108343
2023-05-09 14:52:35 +02:00
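The substitution this commit describes can be sketched as a small decision helper: for rank-4 inputs, plain linear interpolation is swapped for the faster linear_onnx variant. The function below is a hypothetical illustration of that heuristic (the real transformation operates on graph nodes, not strings):

```python
def choose_interpolate_mode(input_rank: int, requested_mode: str = "linear") -> str:
    """Mimic the legacy LinearToLinearONNXReplacer heuristic: for 4D
    (e.g. NCHW) inputs, 'linear' is replaced with 'linear_onnx', which
    has a faster specialized implementation. Illustrative sketch only."""
    if requested_mode == "linear" and input_rank == 4:
        return "linear_onnx"
    return requested_mode


print(choose_interpolate_mode(4))  # linear_onnx
print(choose_interpolate_mode(5))  # linear
```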
Fang Xu
a2ccbdf86e Update oneTBB2021.2.2 for 2023.0 (#17367)
* update oneTBB2021.2.2 for windows

* update SHA256

* update SHA256

oneTBB https://github.com/oneapi-src/oneTBB/releases/tag/v2021.2.2 (a25ebdf)

* add print for hwloc which is not found

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-05-09 05:49:33 -07:00
Roman Kazantsev
1440b9950f [TF FE] Handle incorrect models (empty, fake) by TF FE (#17408) (#17432)
* [TF FE] Handle incorrect models (empty, fake) by TF FE



* Apply suggestions from code review

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-09 16:30:51 +04:00
Mateusz Tabaka
d88d4d22e8 Update docs for frontend extensions (#17428) 2023-05-09 13:27:40 +02:00
Tatiana Savina
ea79006a0a DOCS Port shift to rst - Model representation doc (#17385)
* DOCS shift to rst - Model representation doc (#17320)

* model representation to rst

* fix indentation

* fix code tabs

* fix indentation

* change doc

* fix snippets

* fix snippet

* port changes

* dev docs port
2023-05-09 10:33:12 +02:00
Pavel Esir
a4ff3318ea renumber FAQ (#17376) 2023-05-09 11:35:55 +04:00
Anastasiia Pnevskaia
44e7a003e7 Removed checks of unsatisfied dependencies in MO (#16991) (#17419)
* Fixed dependencies check, made unsatisfied dependencies show only in case of error.

* Small fix.

* Test correction.

* Small test correction.

* Temporarily added debug print.

* Debug output.

* Debug output.

* Debug output.

* Test fix.

* Removed debug output.

* Small fix.

* Moved tests to check_info_messages_test.py

* Remove dependency checks from MO.

* Small corrections.
2023-05-09 11:32:14 +04:00
Przemyslaw Wysocki
fa4112593d [Backport] OMZ submodule bump for Python 3.11 (#17325)
* backport

* Update sha
2023-05-08 18:49:28 +02:00
Surya Siddharth Pemmaraju
45e378f189 Added Torchscript backend (#17328)
* Added Torchscript backend

* Added some torchscript backend tests to ci

* Removed tests from CI as torch.compile doesn't support 3.11 currently

* Fixed linter issues

* Addressed PR comments and linter issues
2023-05-08 03:44:10 -07:00
Maciej Smyk
9320cbaa8c [DOCS] Recreation of BDTI PRs - 23.0 (#17383)
Porting: https://github.com/openvinotoolkit/openvino/pull/16913

Recreation of BDTI PRs for master.

Recreated PRs:

Docs: Update Dynamic Shapes documentation #15216
Docs: Edits to Performance Hints and Cumulative Throughput documentation #14793
Docs: Update Devices pages to state improved INT8 performance with 11th & 12th gen devices #12067
2023-05-08 10:36:56 +00:00
Maciej Smyk
718b194ad6 [DOCS] Legacy MO Extensibility update for 23.0
porting: https://github.com/openvinotoolkit/openvino/pull/15931

Divided MO Extensibility article into separate smaller articles,
Applied the suggestion from [DOCS] Better statement about MO extensions as internal API [Recreating #14062] #15679
Recreated images in svg format
Fixed directives
2023-05-08 12:25:37 +02:00
Maxim Vafin
8241540609 [PT FE] Improve exception when decoder cannot trace or script the model (#17338) (#17347)
* [PT FE] Improve exception when decoder cannot trace or script the model

* Add exception in convert_model

* Add test
2023-05-08 09:22:47 +04:00
Maxim Vafin
10d87b7332 [PT FE] Support default strides for avg and max pooling (#17337) (#17348)
* Support default strides for avg and max pooling

* Fix code style

* Remove changes from other ticket
2023-05-08 09:21:53 +04:00
Karol Blaszczak
386d773b33 [DOCS] fix typos in install guides (#17388) 2023-05-08 07:12:38 +02:00
Sun Xiaoxia
a5312f70db fix binding wrong core with latency mode in i9-13900 (#17364) 2023-05-06 17:17:11 +08:00
Roman Kazantsev
8f113ef24e [TF FE] Provide single tensor names for inputs and outputs in SavedModel (#17373)
* [TF FE] Provide single tensor names for inputs and outputs in SavedModel

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

* Xfail some cases due to internal problems in TF

* Xfail other layer test

* Extend documentation for function to adjust tensor names

* Use old path of tf2 layer testing for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-05 19:06:59 +02:00
Sebastian Golebiewski
c651bc5f87 [DOCS] Fix links - port (#17356) 2023-05-05 18:14:38 +02:00
Karol Blaszczak
12aab024d1 [GPU] Update dynamic shape document (#17274) (#17384)
porting: https://github.com/openvinotoolkit/openvino/pull/17384

* Update dynamic shape document for GPU
* Applied review comments

authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2023-05-05 17:58:19 +02:00
Ivan Tikhonov
3978511c5c Fix the names copying in TransposeSinking backward transformations (#17283) (#17344)
* Fix tensor names copying in TS transformations

* added a check that sinking is available for all consumers in TS backward transformations

* codestyle

* Apply review comments, add result sorting by tensor names in graph comparator

* delete debug code

* fix RemoveConsumers method implementation

* fix snippet tests

* use reference instead of raw pointer

* add new transformation tests

* fix transformation tests

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-05-05 17:09:44 +02:00
Chen Xu
0de0efd751 [CPU] Fix kernel precision mismatch in Reduce node (#17372)
* [CPU] Fix kernel precision mismatch in Reduce node

* Apply review comments
2023-05-05 14:39:30 +02:00
Sebastian Golebiewski
53e2997909 DOCS shift to rst (#17377) 2023-05-05 10:55:03 +02:00
Maciej Smyk
7779fea76f DOCS shift to rst - Opsets E for 23.0 (#17365) 2023-05-05 10:17:05 +02:00
Sebastian Golebiewski
c785551b57 DOCS shift to rst (#17346) 2023-05-04 13:29:16 +02:00
Roman Kazantsev
8c95c90e45 [TF FE] Use original input types for SavedModel (#17295) (#17335)
Also, refactor TF FE unit-tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 16:26:33 +04:00
Evgenya Stepyreva
bf829eead4 NMS-5 calculate upper-bound (#17332)
* NMS-5 calculate upper-bound

* Test
2023-05-03 15:22:08 +04:00
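The upper bound this commit computes follows from the NMS-5 output semantics: each class in each batch can select at most min(num_boxes, max_output_boxes_per_class) boxes. A hedged sketch of that arithmetic (hypothetical helper name, not the shape-inference code itself):

```python
def nms_selected_upper_bound(num_batches, num_classes, num_boxes,
                             max_output_boxes_per_class):
    """Upper bound on rows of the NonMaxSuppression 'selected_indices'
    output: per batch and class, at most
    min(num_boxes, max_output_boxes_per_class) boxes survive."""
    per_class = min(num_boxes, max_output_boxes_per_class)
    return num_batches * num_classes * per_class


print(nms_selected_upper_bound(1, 80, 1000, 100))  # 8000
```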
Roman Kazantsev
1141e90435 [MO][TF FE] Handle constant with undefined value (#17311) (#17327)
Since TF 2.10, native model freezing can produce constants with an undefined value,
i.e. the tensor shape can be arbitrary while the value is []. In this case the tensor is simply
filled with the type's default value (0 for numerics, "" for strings)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 12:09:57 +04:00
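The fallback described above can be sketched as a small pure-Python helper: a constant that arrives with no value is materialized as a tensor filled with the type's default. Both the function name and the flat-list representation are illustrative assumptions, not the actual TF FE implementation:

```python
def materialize_constant(shape, dtype, value=None):
    """Fill an undefined constant with the type's default value:
    0 for numeric dtypes, "" for strings. Hypothetical sketch that
    returns a flat list instead of a real tensor."""
    if value is not None:
        return value
    default = "" if dtype == "string" else 0
    count = 1
    for dim in shape:
        count *= dim
    return [default] * count


print(materialize_constant((2, 2), "float32"))  # [0, 0, 0, 0]
print(materialize_constant((2,), "string"))     # ['', '']
```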
Roman Kazantsev
15b62d77cc [TF FE] Added additional pruned inputs for MetaGraph support (#17237) (#17326)
* Added handling of additional pruned inputs
Added possible topology of RestoreV2 -> AssignVariableOp
Added additional checks

* Extended tests coverage

Co-authored-by: Georgy Krivoruchko <georgy.krivoruchko@intel.com>
2023-05-03 12:09:20 +04:00
Maxim Vafin
e6347544e2 Fix issue with Pow when both inputs are scalars (#17305) (#17321)
* Fix issue with Pow when both inputs are scalars

* Fix code style
2023-05-03 11:32:13 +04:00
Anton Voronov
fcf261a048 [DOC] small fix for sparse weights decompression feature documentation (#17316) 2023-05-02 15:50:48 +02:00
Tatiana Savina
bba9f3094b [DOCS] Port docs: opsets, import keyword, deprecated options (#17289)
* Added missing import keyword (#17271)

* [DOCS] shift to rst - opsets N (#17267)

* opset to rst

* change list indentations

* fix formula

* add n operations

* add negative and nonzero

* fix link

* specs to rst

* fix matrixnms path

* change path to if

* fix list

* fix format

* DOCS remove deprecated options (#17167)

* DOCS remove deprecated options

* removed a couple more not actual questions

* remove the whole lines completely

* remove a couple of more deprecations

---------

Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
2023-05-02 14:05:03 +02:00
Sergey Shlyapnikov
aa13ab63f5 [GPU] Use BFS processing order for out_of_order queue (#17304) 2023-05-02 15:25:21 +04:00
Tatiana Savina
8f978d2c60 update OTE and Datumaro links (#17269) (#17310) 2023-05-02 13:14:21 +02:00
Sebastian Golebiewski
a349ba7295 DOCS shift to rst - Opsets H & I - for 23.0 (#17307)
* update

* update

* cpp
2023-05-02 11:16:21 +02:00
Vladimir Paramuzov
73442bbc82 [GPU] Don't throw exception if no devices are found (#17302)
* [GPU] Don't throw exception if no devices are found

* Fix CAPI test
2023-05-01 23:18:51 +04:00
Tatiana Savina
76c237da8b [DOCS] Document Model Optimizer Python API port (#17287)
* [DOCS] Document Model Optimizer Python API (#14380)

* Added MO convert_model() documentation.

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Updated Convert_Model pages for PyTorch and TF with Python API info. Unified TF2 and TF format lists.

* Added info on flag params, example_input formats list, small corrections.

* Moved MO python API to separate doc. Small text corrections.

* Added TF types conversion description.

* Removed duplicating info.

* Added description of InputCutInfo types and default onnx opset.

* Small correction.

* Changed type table to bullets, added blank lines.

* Added quote marks.

* Removed redundant bullets.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review.

* Added new line.

* Apply comments from review.

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added description of lists of parameters.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* Added details about input_shape, example_input.

* Updated PyTorch page.

* Corrected input_signature description.

* Format correction.

* Format correction.

* Format correction.

* Format correction.

* Small correction.

* Small correction.

* Removed input_signature param description.

* Updated text.

* Small correction.

* Small correction.

* Removed not needed examples.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added new line.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Added titles of examples.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix first paragraph

* Update MO_Python_API.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-05-01 15:19:11 +04:00
Roman Lyamin
aebea2337e [GPU] Coverity fixes (#17241) (#17281) 2023-05-01 14:35:50 +04:00
Ilya Lavrenov
29c672d6d8 Fixed Python API build for Ubuntu 22.04 with python3.11 (#17297)
* Fixed Python API build for Ubuntu 22.04 with python3.11

* Update ONNX CI docker to test python 3.11 and system pybind11
2023-04-29 03:38:01 +04:00
Maksim Doronin
1f790df33c Fix enable_plugins_xml (#17293) 2023-04-29 00:02:43 +04:00
Ilya Lavrenov
5625424b91 Fixes for OpenCL via brew package (#17273) 2023-04-28 18:10:30 +04:00
Tatiana Savina
c7d0df39b5 remove pre-release note (#17265) 2023-04-28 13:04:31 +02:00
Alina Kladieva
85b57ea2bf Bump Azure refs to 2023/0 (#17264) 2023-04-27 22:09:27 +04:00
6832 changed files with 128397 additions and 219952 deletions

@@ -31,13 +31,6 @@ pr:
- 'tools/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: vcpkg
type: github
endpoint: openvinotoolkit
name: microsoft/vcpkg
variables:
- group: github
@@ -53,13 +46,11 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Debug
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
VCPKG_ROOT: $(OPENVINO_REPO_DIR)/../vcpkg
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
ANDROID_TOOLS: $(WORK_DIR)/android_tools
ANDROID_NDK_HOME: $(WORK_DIR)/android_tools/ndk-bundle
ANDROID_SDK_VERSION: 29
ANDROID_ABI_CONFIG: arm64-v8a
TMP_DIR: /mnt/tmp
@@ -117,25 +108,21 @@ jobs:
displayName: 'Make dir'
- checkout: self
clean: 'true'
submodules: 'true'
clean: 'true'
path: openvino
- checkout: vcpkg
clean: 'true'
path: vcpkg
- script: |
set -e
# generic dependencies
sudo -E apt --assume-yes install ccache scons default-jdk python3-pip ninja-build
# vcpkg requires cmake 3.19 or later
python3 -m pip install -U pip cmake
# vcpkg's tool dependencies
sudo -E apt --assume-yes install curl zip unzip tar
# vcpkg tree of dependencies require extra packages
sudo -E apt --assume-yes install pkg-config linux-libc-dev
# Install Android SDK, NDK and Tools
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
# Move into contrib install_build_dependencies.sh
sudo apt --assume-yes install scons crossbuild-essential-arm64 default-jdk
# Speed up build
sudo apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
# Install Android SDK, NDK and TOOLS
sudo apt -y --no-install-recommends install unzip
wget https://dl.google.com/android/repository/commandlinetools-linux-7583922_latest.zip
unzip commandlinetools-linux-7583922_latest.zip
@@ -143,34 +130,19 @@ jobs:
./cmdline-tools/bin/sdkmanager --sdk_root=$(ANDROID_TOOLS) --install "ndk-bundle" "platform-tools" "platforms;android-$(ANDROID_SDK_VERSION)"
displayName: 'Install dependencies'
- script: |
$(VCPKG_ROOT)/bootstrap-vcpkg.sh --disableMetrics
# patch vcpkg default (community) toolchain to build only Release configuration
echo "set(VCPKG_BUILD_TYPE release)" >> $(VCPKG_ROOT)/triplets/community/arm64-android.cmake
displayName: 'Build vcpkg'
- task: CMake@1
inputs:
cmakeArgs: >
-G Ninja
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DVCPKG_TARGET_TRIPLET=arm64-android
-DVCPKG_HOST_TRIPLET=x64-linux-release
-DCMAKE_TOOLCHAIN_FILE=$(VCPKG_ROOT)/scripts/buildsystems/vcpkg.cmake
-DVCPKG_CHAINLOAD_TOOLCHAIN_FILE=$(ANDROID_NDK_HOME)/build/cmake/android.toolchain.cmake
-DCMAKE_TOOLCHAIN_FILE=$(ANDROID_TOOLS)/ndk-bundle/build/cmake/android.toolchain.cmake
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_PYTHON=OFF
-DENABLE_SYSTEM_OPENCL=ON
-DENABLE_SYSTEM_PROTOBUF=ON
-DENABLE_SYSTEM_PUGIXML=ON
-DENABLE_SYSTEM_SNAPPY=ON
-DENABLE_SYSTEM_TBB=ON
-DENABLE_SYSTEM_FLATBUFFERS=ON
-DENABLE_INTEL_GPU=ON
-DENABLE_TESTS=ON
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-S $(OPENVINO_REPO_DIR)

@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -293,7 +293,10 @@ jobs:
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Layer Tests'
- script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
- script: |
set -e
python3 -m pip install $(INSTALL_DIR)/tools/openvino-*
python3 -m pip install $(INSTALL_DIR)/tools/openvino_dev-*
displayName: 'Install python wheels'
- script: |
@@ -304,6 +307,32 @@ jobs:
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install test files'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
displayName: 'nGraph and IE Python Bindings Tests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
# For python imports to import pybind_mock_frontend
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
export PYTHONPATH=$(INSTALL_TEST_DIR):$(INSTALL_DIR)/python/python3.8:$PYTHONPATH
python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py
displayName: 'Python API 2.0 Tests'
- script: |
export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
- script: |
sudo apt-get install libtbb-dev libpugixml-dev -y
cmake --build $(BUILD_DIR) --target package --parallel
@@ -354,7 +383,7 @@ jobs:
- script: rm -fr $(BUILD_DIR)
displayName: 'Clean build dir'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVCoreUT.xml
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-NGraphUT.xml
displayName: 'OV Core UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_inference_functional_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceFunc.xml
@@ -363,12 +392,6 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_inference_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceUnit.xml
displayName: 'Inference Unit Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_conditional_compilation_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ConditionalCompilation.xml
displayName: 'Conditional Compilation Tests'
@@ -401,7 +424,7 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_legacy_transformations_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-LegacyTransformations.xml
displayName: 'Legacy Transformations Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_util_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-CommonUtilTests.xml
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/commonUtilsTests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-CommonUtilTests.xml
displayName: 'Common Utils Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineUnitTests.xml
@@ -414,8 +437,8 @@ jobs:
displayName: 'GNA UT'
enabled: 'false' # TODO: fix
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_auto_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_unit_tests.xml
displayName: 'AUTO UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_batch_unit_tests.xml
displayName: 'AutoBatch UT'
@@ -423,6 +446,10 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_template_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_cpu_func_tests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
displayName: 'IE CAPITests'
@@ -431,38 +458,6 @@ jobs:
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
displayName: 'OV CAPITests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_auto_batch_func_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_batch_func_tests.xml
displayName: 'AutoBatch FuncTests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
displayName: 'nGraph and IE Python Bindings Tests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py
displayName: 'Python API 2.0 Tests'
- script: |
python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
- script: |
python3 -m pytest -s $(REPO_DIR)/tools/ovc/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-OpenVinoConversion.xml
displayName: 'OpenVino Conversion UT'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_cpu_func_tests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- task: CMake@1
inputs:
cmakeArgs: >

@@ -46,7 +46,8 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
NUM_PROC: 2
OPENVINO_ARCH: 'aarch64'
NUM_PROC: 1
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
BUILD_OPENVINO: $(WORK_DIR)/build
@@ -56,8 +57,7 @@ jobs:
TMP_DIR: /mnt/tmp
OPENVINO_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION_MAJOR_MINOR: 3.11
OV_PYTHON_VERSION: $(OV_PYTHON_VERSION_MAJOR_MINOR).2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -115,61 +115,40 @@ jobs:
python3 -m pip install --upgrade pip
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons gcc-10-aarch64-linux-gnu g++-10-aarch64-linux-gnu
sudo -E apt --assume-yes install scons crossbuild-essential-arm64
# generic dependencies
sudo -E apt --assume-yes install cmake ccache ninja-build unzip fdupes
displayName: 'Install build dependencies'
- script: |
set -e
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal main restricted > arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates main restricted >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal universe >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates universe >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal multiverse >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates multiverse >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security main restricted >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security universe >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security multiverse >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal main >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal universe >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-updates main >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-security main >> arm64-sources.list
sudo mv arm64-sources.list /etc/apt/sources.list.d/
sudo -E dpkg --add-architecture arm64
sudo -E apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/arm64-sources.list
sudo -E apt-get install -y --no-install-recommends libpython3-dev:arm64
displayName: 'Install arm64 libraries'
sudo -E apt --assume-yes install cmake ccache
# Speed up build
sudo -E apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
displayName: 'Install dependencies'
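The removed 'Install arm64 libraries' step above builds a multi-arch sources list one `echo` append at a time. The same file can be sketched more readably with a heredoc; a minimal sketch, with mirror and suite names copied from the pipeline above and the output path chosen for illustration:

```shell
# Sketch: write the arm64 ports entries in one heredoc instead of
# repeated echo appends. Mirror/suite names mirror the pipeline above;
# the file name is illustrative, not a verified setup.
cat > arm64-sources.list <<'EOF'
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal main
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal universe
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-updates main
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-security main
EOF
# Count the generated entries (4 arm64 lines, matching the pipeline)
grep -c '^deb \[arch=arm64\]' arm64-sources.list
```

In the real step the file is then moved under /etc/apt/sources.list.d/ and `dpkg --add-architecture arm64` is run before `apt-get update`.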
- script: |
git submodule update --init -- $(OPENVINO_REPO_DIR)/src/plugins
git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/gtest
git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/open_model_zoo
displayName: 'Init submodules for non Conan dependencies'
- script: |
python3 -m pip install conan
# install build profile compilers
sudo -E apt --assume-yes install gcc g++
# generate build profile
conan profile detect
# generate host profile for linux_arm64
echo "include(default)" > $(BUILD_OPENVINO)/linux_arm64
echo "[buildenv]" >> $(BUILD_OPENVINO)/linux_arm64
echo "CC=aarch64-linux-gnu-gcc-10" >> $(BUILD_OPENVINO)/linux_arm64
echo "CXX=aarch64-linux-gnu-g++-10" >> $(BUILD_OPENVINO)/linux_arm64
echo "CC=aarch64-linux-gnu-gcc" >> $(BUILD_OPENVINO)/linux_arm64
echo "CXX=aarch64-linux-gnu-g++" >> $(BUILD_OPENVINO)/linux_arm64
# install OpenVINO dependencies
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_C_COMPILER_LAUNCHER=ccache
conan install $(OPENVINO_REPO_DIR)/conanfile.txt \
-pr:h $(BUILD_OPENVINO)/linux_arm64 \
-s:h arch=armv8 \
-of $(BUILD_OPENVINO) \
-b missing
env:
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
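The conan host profile in the step above is also assembled line by line with `echo`. A heredoc sketch of the same profile, assuming the plain `aarch64-linux-gnu-gcc`/`g++` cross compilers used by the newer lines of the diff (the file path here is illustrative):

```shell
# Sketch: generate the linux_arm64 conan host profile in one heredoc.
# Compiler names are assumptions taken from the pipeline above.
profile=./linux_arm64
cat > "$profile" <<'EOF'
include(default)
[buildenv]
CC=aarch64-linux-gnu-gcc
CXX=aarch64-linux-gnu-g++
EOF
cat "$profile"
```

The profile is then passed to `conan install` via `-pr:h`, together with `-s:h arch=armv8`, so conan builds host dependencies with the cross toolchain while keeping the default build profile for native tools.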
@@ -178,20 +157,14 @@ jobs:
- script: |
source $(BUILD_OPENVINO)/conanbuild.sh
# TODO: return tests building once GPU plugin migrates to Plugin API 2.0
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=OFF \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
-DENABLE_CPPLINT=ON \
-DENABLE_INTEL_GPU=ON \
-DENABLE_PYTHON=ON \
-DENABLE_WHEEL=ON \
-DPYBIND11_PYTHONLIBS_OVERWRITE=OFF \
-DPYTHON_MODULE_EXTENSION=$(aarch64-linux-gnu-python3-config --extension-suffix) \
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libc-2.31.so \
-DPYTHON_INCLUDE_DIR=$(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/include/python$(OV_PYTHON_VERSION_MAJOR_MINOR) \
-DENABLE_CPPLINT=OFF \
-DENABLE_PYTHON=OFF \
-DENABLE_TESTS=ON \
-DENABLE_DATA=OFF \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
@@ -203,7 +176,6 @@ jobs:
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-DENABLE_PYTHON_PACKAGING=ON \
-S $(OPENVINO_REPO_DIR) \
-B $(BUILD_OPENVINO)
source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
@@ -220,10 +192,8 @@ jobs:
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
displayName: 'Install OpenVINO Runtime'
- script: |
source $(BUILD_OPENVINO)/conanbuild.sh
$(INSTALL_OPENVINO)/samples/cpp/build_samples.sh
source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
env:
CMAKE_TOOLCHAIN_FILE: $(BUILD_OPENVINO)/conan_toolchain.cmake
displayName: 'Build OpenVINO C++ samples'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: $(Build.ArtifactStagingDirectory)
ArtifactName: 'openvino_aarch64_linux'
displayName: 'Publish OpenVINO Runtime for ARM'


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github


@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
variables:
- group: github


@@ -42,11 +42,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
jobs:
- job: CUDAPlugin_Lin
@@ -141,7 +143,7 @@ jobs:
-DENABLE_TESTS=ON \
-S /root/repos/openvino \
-B /root/w/build &&
cmake --build /root/w/build --parallel --config Release --verbose -- ov_nvidia_func_tests ov_nvidia_unit_tests"
cmake --build /root/w/build --parallel --config Release --verbose -- CudaFuncTests CudaUnitTests"
displayName: 'Docker build Lin'
- script: ls -alR $(OPENVINO_REPO_DIR)/bin/


@@ -34,7 +34,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Lin_Debian
@@ -62,8 +62,6 @@ jobs:
SAMPLES_INSTALL_DIR: /usr/share/openvino/samples
PYTHON_SAMPLES_INSTALL_DIR: $(INSTALL_DIR)/share/openvino/samples/python
PYTHON_WHEEL_INSTALL_DIR: $HOME/.local/lib/python3.8/site-packages
BUILD_VENV: $(WORK_DIR)/build_venv
TEST_VENV: $(WORK_DIR)/test_venv
TMP_DIR: /mnt/tmp
SHARE_DIR: /mount/cinfsshare/onnxtestdata
CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux
@@ -113,32 +111,25 @@ jobs:
# 'clang' is used as a default compiler
sudo apt --assume-yes install clang
sudo apt --assume-yes install --no-install-recommends libopencv-imgproc-dev libopencv-imgcodecs-dev
# install build dependencies
(cd $(WORK_DIR) && python3 -m venv build_venv)
$(BUILD_VENV)/bin/python3 -m pip install -U pip
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/bindings/python/requirements.txt
# For opencv-python: python3-setuptools and pip upgrade
python3 -m pip install --upgrade pip
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/requirements.txt
# For running Python API tests
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running Paddle frontend unit tests
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# For running ONNX frontend unit tests
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/frontends/onnx/tests/requirements.txt
python3 -m pip install -r $(REPO_DIR)/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
$(BUILD_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/frontends/tensorflow/tests/requirements.txt
python3 -m pip install -r $(REPO_DIR)/src/frontends/tensorflow/tests/requirements.txt
# For MO unit tests
(cd $(WORK_DIR) && python3 -m venv test_venv)
$(TEST_VENV)/bin/python3 -m pip install -U pip
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_mxnet.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_caffe.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_kaldi.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_onnx.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_tf2.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_dev.txt
$(TEST_VENV)/bin/python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# for Python API tests
/usr/bin/python3 -m pip install -r $(REPO_DIR)/src/bindings/python/requirements_test.txt
/usr/bin/python3 -m pip uninstall -y numpy # apt-get install python3-numpy will be used
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_mxnet.txt
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_caffe.txt
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_kaldi.txt
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_onnx.txt
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_tf2.txt
python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_dev.txt
# Speed up build
sudo apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
@@ -146,7 +137,7 @@ jobs:
sudo cp -v ninja /usr/local/bin/
# Speed up tests
git clone https://github.com/google/gtest-parallel.git
displayName: 'Install build dependencies'
displayName: 'Install dependencies'
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
@@ -164,7 +155,7 @@ jobs:
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DENABLE_PYTHON=ON
-DENABLE_INTEL_GNA=OFF
-DPYTHON_EXECUTABLE=$(BUILD_VENV)/bin/python3
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_TESTS=ON
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
@@ -173,7 +164,6 @@ jobs:
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DENABLE_PYTHON_PACKAGING=ON
-DCPACK_GENERATOR=DEB
-S $(REPO_DIR)
-B $(BUILD_DIR)
@@ -186,7 +176,7 @@ jobs:
displayName: 'Clean ccache stats'
- script: cmake --build $(BUILD_DIR) --parallel --config $(BUILD_TYPE)
env:
env:
CCACHE_DIR: $(CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
@@ -223,12 +213,44 @@ jobs:
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Layer Tests'
- script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
displayName: 'Install python wheels'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P $(BUILD_DIR)/cmake_install.cmake
displayName: 'Install tests'
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install test files'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'nGraph and IE Python Bindings Tests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
# Required by python imports to load requires libraries
# - tests install dir for mock_py
# - OpenVINO wheel installation dir for others frontend required by mock_py (is not part of wheel pkg)
export LD_LIBRARY_PATH=$(PYTHON_WHEEL_INSTALL_DIR)/openvino/libs:$(INSTALL_TEST_DIR):$LD_LIBRARY_PATH
# For python imports to import pybind_mock_frontend
export PYTHONPATH=$(INSTALL_TEST_DIR):$PYTHONPATH
python3 -m pytest -s $(INSTALL_TEST_DIR)/pyopenvino \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py -v
displayName: 'Python API 2.0 Tests'
- script: |
python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
- script: |
sudo apt-get install libtbb-dev libpugixml-dev -y
cmake --build $(BUILD_DIR) --config $(BUILD_TYPE) --target package --parallel
@@ -242,13 +264,13 @@ jobs:
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get install openvino -y || exit 1
sudo apt-get install openvino-2023.0.1 -y || exit 1
# install our local one and make sure the conflicts are resolved
sudo apt-get install --no-install-recommends dpkg-dev -y
rm -r _CPack_Packages
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo "deb [trusted=yes] file:$(BUILD_DIR) ./" | sudo tee /etc/apt/sources.list.d/openvino-local.list
sudo apt-get update
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/openvino-local.list
sudo apt-get install openvino -y
workingDirectory: $(BUILD_DIR)
displayName: 'Install Debian packages'
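The 'Install Debian packages' step above turns the build directory into a throwaway local apt repository so the freshly built packages can be installed over the published ones. A hedged sketch of that pattern (the directory name is illustrative, and `dpkg-scanpackages` needs the dpkg-dev package, so the indexing call is shown commented out):

```shell
# Sketch of the local-apt-repo trick used above. REPO_DIR is illustrative.
REPO_DIR=$(pwd)/local-debs
mkdir -p "$REPO_DIR"
# Index any .deb files in the directory into Packages.gz
# (requires dpkg-dev; commented out in this sketch):
#   dpkg-scanpackages "$REPO_DIR" /dev/null | gzip -9c > "$REPO_DIR/Packages.gz"
# Point apt at the directory via a trusted file: source:
echo "deb [trusted=yes] file:$REPO_DIR ./" > openvino-local.list
cat openvino-local.list
```

In the pipeline the resulting list file is placed under /etc/apt/sources.list.d/ and `apt-get update` is scoped to it with `-o Dir::Etc::sourcelist=...` before installing `openvino` from the local index.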
@@ -271,24 +293,14 @@ jobs:
- script: $(SAMPLES_INSTALL_DIR)/c/build_samples.sh -i $(INSTALL_DIR)
displayName: 'Build c samples'
- script: $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-NGraphUT.xml
- script: |
$(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-NGraphUT.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Core UT'
- script: |
$(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Proxy Tests'
- script: |
$(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Hetero Func Tests'
- script: $(INSTALL_TEST_DIR)/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ONNXFrontend.xml
$(INSTALL_TEST_DIR)/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ONNXFrontend.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend Tests'
@@ -305,12 +317,14 @@ jobs:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TensorFlow Frontend Unit Tests'
- script: $(INSTALL_TEST_DIR)/ov_tensorflow_common_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowCommon.xml
- script: |
$(INSTALL_TEST_DIR)/ov_tensorflow_common_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowCommon.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TensorFlow Common Unit Tests'
- script: $(INSTALL_TEST_DIR)/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowLite.xml
- script: |
$(INSTALL_TEST_DIR)/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowLite.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TensorFlow Lite Frontend Unit Tests'
@@ -318,15 +332,21 @@ jobs:
- script: $(INSTALL_TEST_DIR)/ov_cpu_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_unit_tests.xml
displayName: 'Intel CPU Unit Tests'
- script: $(INSTALL_TEST_DIR)/ov_auto_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_unit_tests.xml
displayName: 'AUTO UT'
- script: $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
- script: $(INSTALL_TEST_DIR)/ov_template_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-templateFuncTests.xml
- script: |
$(INSTALL_TEST_DIR)/ov_template_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-templateFuncTests.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TEMPLATE FuncTests'
- script: $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
# run not all smoke filter to save time in post-commit
- script: $(INSTALL_TEST_DIR)/ov_cpu_func_tests --gtest_filter=*OVCLass*:*CoreThreadingTests* --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
- script: |
$(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
@@ -338,43 +358,6 @@ jobs:
MODELS_PATH: $(MODELS_PATH)
displayName: 'OV CAPITests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
/usr/bin/python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'nGraph and IE Python Bindings Tests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
/usr/bin/python3 -m pytest -s $(INSTALL_TEST_DIR)/pyopenvino \
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py -v
env:
# Required by python imports to load requires libraries
# - tests install dir for mock_py
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
# For python imports to import pybind_mock_frontend
PYTHONPATH: $(INSTALL_TEST_DIR)
displayName: 'Python API 2.0 Tests'
- script: |
# TODO: fix 'No mock frontend API available'
$(TEST_VENV)/bin/python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
$(TEST_VENV)/bin/python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
env:
PYTHONPATH: $(REPO_DIR)/tools/ovc/
displayName: 'Model Optimizer UT'
# run not all smoke filter to save time in post-commit
- script: $(INSTALL_TEST_DIR)/ov_cpu_func_tests --gtest_filter=*OVCLass*:*CoreThreadingTests* --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
- task: CMake@1
inputs:
cmakeArgs: >
@@ -386,10 +369,13 @@ jobs:
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_SAMPLES_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Samples Tests'
- script: python3 -m pip install -r $(INSTALL_TEST_DIR)/smoke_tests/requirements.txt
displayName: 'Install dependencies for samples smoke tests'
- script: |
/usr/bin/python3 -m pip install -r $(INSTALL_TEST_DIR)/smoke_tests/requirements.txt
export PATH=$HOME/.local/bin:$PATH
# GNA isn't a part of Debian package, so filter out that tests
/usr/bin/python3 -m pytest $(INSTALL_TEST_DIR)/smoke_tests/ -k "not GNA" --env_conf $(INSTALL_TEST_DIR)/smoke_tests/env_config.yml -s --junitxml=$(INSTALL_TEST_DIR)/TEST-SamplesSmokeTests.xml
python3 -m pytest $(INSTALL_TEST_DIR)/smoke_tests/ -k "not GNA" --env_conf $(INSTALL_TEST_DIR)/smoke_tests/env_config.yml -s --junitxml=$(INSTALL_TEST_DIR)/TEST-SamplesSmokeTests.xml
env:
IE_APP_PATH: $(INSTALL_DIR)/samples_bin
LD_LIBRARY_PATH: $(INSTALL_DIR)/samples_bin
@@ -399,18 +385,16 @@ jobs:
displayName: 'Samples Smoke Tests'
- script: |
$(TEST_VENV)/bin/python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
$(TEST_VENV)/bin/python3 -m pytest $(LAYER_TESTS_DIR)/tensorflow_tests/test_tf_Roll.py --ir_version=10 --junitxml=$(INSTALL_TEST_DIR)/TEST-tf_Roll.xmlTEST
env:
PYTHONPATH: $(REPO_DIR)/tools/ovc/:$(LAYER_TESTS_DIR)
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
python3 -m pytest $(LAYER_TESTS_DIR)/tensorflow_tests/test_tf_Roll.py --ir_version=10 --junitxml=$(INSTALL_TEST_DIR)/TEST-tf_Roll.xmlTEST
displayName: 'TensorFlow 1 Layer Tests - Legacy FE'
- script: |
$(TEST_VENV)/bin/python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
$(RUN_PREFIX) $(TEST_VENV)/bin/python3 -m pytest $(LAYER_TESTS_DIR)/tensorflow_lite_tests/ --junitxml=$(INSTALL_TEST_DIR)/TEST-tfl_fe.xmlTEST
env:
PYTHONPATH: $(REPO_DIR)/tools/ovc/:$(REPO_DIR)/tools/mo/:$(LAYER_TESTS_DIR)
TEST_DEVICE: CPU
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(REPO_DIR)/tools/mo/:$(LAYER_TESTS_DIR):$PYTHONPATH
export TEST_DEVICE=CPU
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/tensorflow_lite_tests/ --junitxml=$(INSTALL_TEST_DIR)/TEST-tfl_fe.xmlTEST
displayName: 'TensorFlow Lite Layer Tests - TFL FE'
- task: PublishTestResults@2


@@ -4,7 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
# ref: releases/2023/0
jobs:
- job: Lin_lohika
@@ -72,7 +72,7 @@ jobs:
libopenvino_gapi_preproc.so
openvino_intel_cpu_plugin
openvino_intel_gpu_plugin
ov_gpu_unit_tests
clDNN_unit_tests64
gpuFuncTests
displayName: Build Lin


@@ -131,6 +131,7 @@ jobs:
-DENABLE_CPPLINT=OFF
-DENABLE_PROFILING_ITT=OFF
-DENABLE_SAMPLES=OFF
-DENABLE_COMPILE_TOOL=OFF
-DENABLE_OV_TF_FRONTEND=OFF
-DENABLE_OV_PADDLE_FRONTEND=OFF
-DENABLE_OV_PYTORCH_FRONTEND=OFF


@@ -35,13 +35,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
variables:
- group: github
@@ -180,18 +180,10 @@ jobs:
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install files'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVCoreUT.xml
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-NGraphUT.xml
displayName: 'OV Core UT'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_ir_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-IRFrontend.xml
displayName: 'IR Frontend Tests'
enabled: 'false'
@@ -204,8 +196,8 @@ jobs:
displayName: 'Intel CPU Unit Tests'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_auto_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_unit_tests.xml
displayName: 'AUTO UT'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_cpu_func_tests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_func_tests.xml


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/0
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/0
jobs:
- job: Win
@@ -222,14 +222,10 @@ jobs:
$(CMAKE_CMD) -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_SAMPLES_TESTS_DIR)\cmake_install.cmake
displayName: 'Install Samples Tests'
- script: |
$(INSTALL_DIR)\samples\cpp\build_samples_msvc.bat -i $(INSTALL_DIR)
if not exist %USERPROFILE%\Documents\Intel\OpenVINO\openvino_cpp_samples_build\ exit 1
- script: $(INSTALL_DIR)\samples\cpp\build_samples_msvc.bat -i $(INSTALL_DIR)
displayName: 'Build cpp samples'
- script: |
$(INSTALL_DIR)\samples\c\build_samples_msvc.bat -i $(INSTALL_DIR)
if not exist %USERPROFILE%\Documents\Intel\OpenVINO\openvino_c_samples_build\ exit 1
- script: $(INSTALL_DIR)\samples\c\build_samples_msvc.bat -i $(INSTALL_DIR)
displayName: 'Build c samples'
- script: python -m pip install -r $(INSTALL_TEST_DIR)\smoke_tests\requirements.txt
@@ -263,12 +259,6 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_inference_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceUnit.xml
displayName: 'Inference Unit Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_conditional_compilation_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ConditionalCompilation.xml
displayName: 'Conditional Compilation Tests'
@@ -296,11 +286,11 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_transformations_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\Transformations.xml
displayName: 'Transformations Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_legacy_transformations_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\LegacyTransformations.xml
displayName: 'Legacy Transformations Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_util_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\CommonUtilTests.xml
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\commonUtilsTests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\CommonUtilTests.xml
displayName: 'Common Utils Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceEngineUnitTests.xml
@@ -312,8 +302,8 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_gna_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_gna_unit_tests.xml
displayName: 'GNA UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_auto_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_auto_unit_tests.xml
displayName: 'AUTO UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_auto_batch_unit_tests.xml
displayName: 'AutoBatch UT'
@@ -321,8 +311,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_template_func_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_auto_batch_func_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_auto_batch_func_tests.xml
displayName: 'AutoBatch FuncTests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_cpu_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceEngineCAPITests.xml
@@ -332,10 +323,6 @@ jobs:
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_capi_test.xml
displayName: 'OV CAPITests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_cpu_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_cpu_func_tests.xml
displayName: 'CPU FuncTests'
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- task: PublishTestResults@2
condition: always()
inputs:


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/0
variables:
- group: github


@@ -72,5 +72,5 @@ RUN ninja install
WORKDIR /openvino/src/bindings/python
ENV OpenVINO_DIR=/openvino/dist/runtime/cmake
ENV LD_LIBRARY_PATH=/openvino/dist/runtime/lib/intel64:/openvino/dist/runtime/3rdparty/tbb/lib
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python:${PYTHONPATH}
ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.11:${PYTHONPATH}
CMD tox

.github/CODEOWNERS

@@ -99,7 +99,6 @@
/tools/legacy/ @openvinotoolkit/openvino-samples-maintainers
/tools/openvino_dev/ @openvinotoolkit/openvino-tools-maintainers @openvinotoolkit/openvino-ie-python-api-maintainers
/tools/mo/ @openvinotoolkit/openvino-mo-maintainers
/tools/ovc/ @openvinotoolkit/openvino-mo-maintainers
/tools/pot/ @openvinotoolkit/openvino-pot-maintainers
/thirdparty/open_model_zoo/ @openvinotoolkit/omz-maintainers @openvinotoolkit/openvino-pot-maintainers

.github/labeler.yml

@@ -41,7 +41,6 @@
'category: dependency_changes':
- '**/requirement*.txt'
- '**/constraints*.txt'
- 'scripts/**/*'
- '.gitmodules'
- '**/setup.py'
@@ -87,7 +86,6 @@
'category: MO':
- 'tools/mo/**/*'
- 'tools/ovc/**/*'
'category: ONNX FE':
- 'src/frontends/onnx/**/*'


@@ -85,8 +85,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
sudo apt --assume-yes install clang-14 libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt


@@ -1,149 +0,0 @@
name: Code coverage
on: workflow_dispatch
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
Coverage:
runs-on: ${{ matrix.config.os }}
strategy:
fail-fast: false
matrix:
config:
- { name: "Ubuntu gcc", os: ubuntu-latest-16-cores, cc: "gcc", cxx: "g++" }
steps:
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: '3.10.10'
architecture: 'x64'
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: 50G
- name: Clone OpenVINO
uses: actions/checkout@v3
with:
submodules: recursive
- name: Install dependencies
run: |
sudo apt --assume-yes update
sudo -E ${{ github.workspace }}/install_build_dependencies.sh
sudo apt --assume-yes install lcov
python3 -m pip install --upgrade pip
python3 -m pip install -r ${{ github.workspace }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ github.workspace }}/src/bindings/python/requirements.txt
# For running Python API tests
python3 -m pip install -r ${{ github.workspace }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ github.workspace }}/src/frontends/paddle/tests/requirements.txt
# For running ONNX frontend unit tests
python3 -m pip install -r ${{ github.workspace }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ github.workspace }}/src/frontends/tensorflow/tests/requirements.txt
# For MO unit tests
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_dev.txt
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
id: cpu-cores
- name: Build OpenVINO with CMake
uses: ashutoshvarma/action-cmake-build@master
with:
build-dir: ${{ github.workspace }}/build
cc: ${{ matrix.config.cc }}
cxx: ${{ matrix.config.cxx }}
configure-options: >
-GNinja
-DCMAKE_VERBOSE_MAKEFILE=ON
-DENABLE_PYTHON=ON
-DENABLE_ONEDNN_FOR_GPU=ON
-DBUILD_SHARED_LIBS=ON
-DENABLE_TESTS=ON
-DENABLE_OV_ONNX_FRONTEND=ON
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_COVERAGE=ON
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DENABLE_SYSTEM_SNAPPY=ON
build-type: Release
parallel: ${{ steps.cpu-cores.outputs.count }}
- name: Install wheel packages
run: cmake -DCOMPONENT=python_wheels -DCMAKE_INSTALL_PREFIX=${{ github.workspace }}/install_pkg -P '${{ github.workspace }}/build/cmake_install.cmake'
- name: Install python wheels
run: python3 -m pip install openvino-dev --find-links=${{ github.workspace }}/install_pkg/tools
- name: List binaries
run: ls -la ${{ github.workspace }}/bin/intel64/Release
- name: Install OpenVINO
run: cmake -DCMAKE_INSTALL_PREFIX=${{ github.workspace }}/install_pkg -P '${{ github.workspace }}/build/cmake_install.cmake'
- name: Run OV core unit tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_core_unit_tests
- name: Run OV Proxy plugin tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_proxy_plugin_tests
- name: Run OV Hetero Func tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_hetero_func_tests
- name: Run IR frontend tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_ir_frontend_tests # --gtest_print_time=1 --gtest_output=xml:${{ github.workspace }}/testdata/TEST-IRFrontend.xml
- name: Run ONNX frontend tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_onnx_frontend_tests --gtest_filter=-*IE_GPU*
#- name: Run Paddle frontend unit tests
# run: ${{ github.workspace }}/bin/intel64/Release/paddle_tests --gtest_filter=-*IE_GPU*
- name: Run TensorFlow frontend unit tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_tensorflow_frontend_tests --gtest_filter=-*IE_GPU*
- name: Build coverage with CMake
uses: ashutoshvarma/action-cmake-build@master
with:
build-dir: ${{ github.workspace }}/coverage
cc: ${{ matrix.config.cc }}
cxx: ${{ matrix.config.cxx }}
target: ov_coverage
configure-options: >
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
parallel: ${{ steps.cpu-cores.outputs.count }}
- name: Print info
run: |
ls -laR
pwd
- name: Generate report
run: |
lcov --capture --directory ${{ github.workspace }}/. --output-file coverage.info
genhtml coverage.info --output-directory coverage-report
- name: Collect coverage
uses: codecov/codecov-action@v3
with:
verbose: true

@@ -1,748 +0,0 @@
name: Tests on Linux (Ubuntu 22.04, Python 3.11)
on:
workflow_dispatch:
pull_request:
paths-ignore:
- '**/docs/**'
- 'docs/**'
- '**/**.md'
- '**.md'
- '**/layer_tests_summary/**'
- '**/conformance/**'
push:
paths-ignore:
- '**/docs/**'
- 'docs/**'
- '**/**.md'
- '**.md'
- '**/layer_tests_summary/**'
- '**/conformance/**'
branches:
- master
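The `paths-ignore` filters above skip the workflow when a change touches only documentation, markdown, or conformance files. A hedged sketch of the decision, again approximating Actions' path-filter globs with `fnmatch` (the patterns are copied from the trigger block; `ci_should_run` is a hypothetical helper):

```python
from fnmatch import fnmatch

# Patterns copied from the workflow's paths-ignore lists.
IGNORED = ["**/docs/**", "docs/**", "**/**.md", "**.md",
           "**/layer_tests_summary/**", "**/conformance/**"]

def ci_should_run(changed_paths):
    """The workflow runs unless *every* changed path hits an ignore pattern."""
    return any(
        not any(fnmatch(path, pattern) for pattern in IGNORED)
        for path in changed_paths
    )
```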
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux
cancel-in-progress: true
jobs:
Build:
defaults:
run:
shell: bash
runs-on: ubuntu-latest-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
OPENVINO_REPO: ${{ github.workspace }}/openvino
OPENVINO_CONTRIB_REPO: ${{ github.workspace }}/openvino_contrib
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
SAMPLES_INSTALL_DIR: ${{ github.workspace }}/install/samples
LAYER_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/layer_tests
BUILD_DIR: ${{ github.workspace }}/build
DATA_PATH: ${{ github.workspace }}/testdata
MODELS_PATH: ${{ github.workspace }}/testdata
OV_TEMP: ${{ github.workspace }}/openvino_temp
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
with:
path: 'openvino'
submodules: 'recursive'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- name: Clone testdata for C API tests
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
submodules: 'recursive'
lfs: 'true'
#
# Dependencies
#
- name: Install build dependencies
run: |
sudo -E ${{ env.OPENVINO_REPO }}/install_build_dependencies.sh
sudo -E apt update
sudo -E apt --assume-yes install openjdk-11-jdk libbz2-dev clang unzip libpugixml-dev libtbb-dev intel-opencl-icd ocl-icd-opencl-dev opencl-headers
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install python dependencies
run: |
# For Python API
python3 -m pip install --upgrade pip
python3 -m pip install Scons
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements_test.txt
# For running Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running ONNX frontend unit tests
python3 -m pip install --force-reinstall -r ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/tensorflow/tests/requirements.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
- name: Install MO dependencies
run: |
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
#
# Build
#
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux
restore-keys: |
${{ github.job }}-linux
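The `save:` expression in the ccache step uses the `cond && 'true' || 'false'` idiom, because GitHub expressions have no ternary operator. A small sketch of how that idiom evaluates (the Python helper is illustrative, not part of any workflow):

```python
def gh_ternary(cond, if_true, if_false):
    """GitHub expressions lack a ternary operator, so workflows write
    `cond && a || b`. This picks `a` when cond holds and `b` otherwise,
    provided `a` itself is truthy (the literal string 'true' always is)."""
    return cond and if_true or if_false

# Mirrors: save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
def should_save_cache(ref_name):
    return gh_ternary(ref_name == "master", "true", "false")
```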
- name: Get tools versions
run: |
ninja --version || exit 1
ccache --version || exit 1
python3 --version || exit 1
cmake --version || exit 1
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
id: cpu-cores
- name: CMake configure
run: |
cmake \
-GNinja \
-DENABLE_CPPLINT=OFF \
-DENABLE_NCC_STYLE=OFF \
-DENABLE_TESTS=ON \
-DENABLE_PYTHON=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=ON \
-DENABLE_ONEDNN_FOR_GPU=OFF \
-DENABLE_OV_ONNX_FRONTEND=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF \
-DENABLE_STRICT_DEPENDENCIES=OFF \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_LINKER_LAUNCHER=ccache \
-DCMAKE_C_LINKER_LAUNCHER=ccache \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_TBB=ON \
-DBUILD_nvidia_plugin=OFF \
-DENABLE_DEBUG_CAPS=ON \
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" \
-DOPENVINO_EXTRA_MODULES=${{ env.OPENVINO_CONTRIB_REPO }}/modules \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config Release
- name: Show ccache stats
run: ccache --show-stats
- name: Cmake Layer Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/layer_tests -B ${{ env.BUILD_DIR }}/layer_tests
- name: Build Layer Tests
run: cmake --build ${{ env.BUILD_DIR }}/layer_tests --parallel --config Release
- name: Install wheel packages
run: cmake -DCOMPONENT=python_wheels -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install Layer Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/layer_tests/cmake_install.cmake
- name: Install python wheels
run: python3 -m pip install openvino-dev --find-links=${{ env.INSTALL_DIR }}/tools
- name: Install tests
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -DCOMPONENT=tests -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install OpenVINO
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: CMake Samples Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/samples_tests -B ${{ env.BUILD_DIR }}/samples_tests
- name: Build Samples Tests
run: cmake --build ${{ env.BUILD_DIR }}/samples_tests --config Release
- name: Install Samples Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/samples_tests/cmake_install.cmake
- name: Pack Artifacts
run: |
pushd ${{ env.INSTALL_DIR }}
tar -czvf ${{ env.BUILD_DIR }}/openvino_package.tar.gz --exclude=tests *
popd
pushd ${{ env.INSTALL_DIR }}
tar -czvf ${{ env.BUILD_DIR }}/openvino_tests.tar.gz tests/
popd
- name: Build cpp samples
run: ${{ env.SAMPLES_INSTALL_DIR }}/cpp/build_samples.sh -i ${{ env.INSTALL_DIR }} -b ${{ env.BUILD_DIR }}/cpp_samples
- name: Build c samples
run: ${{ env.SAMPLES_INSTALL_DIR }}/c/build_samples.sh -i ${{ env.INSTALL_DIR }} -b ${{ env.BUILD_DIR }}/c_samples
#
# Tests
#
- name: Samples tests
run: |
python3 -m pip install --ignore-installed PyYAML -r ${{ env.INSTALL_TEST_DIR }}/smoke_tests/requirements.txt
export LD_LIBRARY_PATH=${{ env.IE_APP_PATH }}:$LD_LIBRARY_PATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/smoke_tests \
--env_conf ${{ env.INSTALL_TEST_DIR }}/smoke_tests/env_config.yml \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-SamplesSmokeTests.xml
env:
IE_APP_PATH: ${{ env.INSTALL_DIR }}/samples_bin
IE_APP_PYTHON_PATH: ${{ env.INSTALL_DIR }}/samples/python
SHARE: ${{ env.INSTALL_TEST_DIR }}/smoke_tests/samples_smoke_tests_data
WORKSPACE: ${{ env.INSTALL_DIR }}
# Present in the "Build" job due to the fact that these tests require build directory
- name: ONNX frontend tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU*:*FrontEndLoadFromTest.testLoadFromTwoStreams*:*FrontEndLoadFromTest.testLoadFromTwoFiles* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ONNXFrontend.xml
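The `--gtest_filter` value above starts with `-`, meaning "run everything except tests matching these patterns", with `:` separating the patterns. A hedged sketch of that selection logic (`fnmatch` is close to, but not identical to, googletest's own matcher):

```python
from fnmatch import fnmatch

def gtest_selected(test_name, gtest_filter):
    """Approximate --gtest_filter semantics: 'pos1:pos2-neg1:neg2'.
    A filter beginning with '-' keeps everything except the negatives."""
    positive, _, negative = gtest_filter.partition("-")
    pos_pats = positive.split(":") if positive else ["*"]
    neg_pats = negative.split(":") if negative else []
    return (any(fnmatch(test_name, p) for p in pos_pats)
            and not any(fnmatch(test_name, n) for n in neg_pats))
```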
#
# Upload build artifacts
#
- name: Upload openvino package
if: ${{ always() }}
uses: actions/upload-artifact@v3
with:
name: openvino_package
path: ${{ env.BUILD_DIR }}/openvino_package.tar.gz
if-no-files-found: 'error'
- name: Upload openvino tests package
if: ${{ always() }}
uses: actions/upload-artifact@v3
with:
name: openvino_tests
path: ${{ env.BUILD_DIR }}/openvino_tests.tar.gz
if-no-files-found: 'error'
CXX_Unit_Tests:
needs: Build
defaults:
run:
shell: bash
runs-on: ubuntu-22.04
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
steps:
- name: Create Directories
run: |
mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
#
# Dependencies
#
- name: Install dependencies
run: |
sudo -E apt update
sudo -E apt --assume-yes install openjdk-11-jdk libbz2-dev clang unzip libpugixml-dev libtbb-dev intel-opencl-icd ocl-icd-opencl-dev opencl-headers
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
tar -xzf openvino_package.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_package.tar.gz || exit 1
popd
pushd ${{ env.INSTALL_TEST_DIR }}
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
#
# Tests
#
- name: OpenVINO Core Unit Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVCoreUT.xml
- name: OpenVINO Inference Functional Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_inference_functional_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceFunc.xml
- name: OpenVINO Inference Unit Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_inference_unit_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceUnit.xml
- name: Low Precision Transformations Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_lp_transformations_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-LpTransformations.xml
- name: OpenVINO Conditional compilation tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_conditional_compilation_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ConditionalCompilation.xml
- name: IR frontend tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_ir_frontend_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-IRFrontend.xml
# Disabled in Azure: https://github.com/openvinotoolkit/openvino/blob/master/.ci/azure/linux.yml#L403
# - name: PaddlePaddle frontend tests
# run: |
# source ${{ env.INSTALL_DIR }}/setupvars.sh
# ${{ env.INSTALL_TEST_DIR }}/paddle_tests --gtest_print_time=1 --gtest_filter=*smoke* \
# --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-PaddleTests.xml
# Present in the "Build" job as these tests require build directory
# - name: ONNX frontend tests
# run: |
# source ${{ env.INSTALL_DIR }}/setupvars.sh
# ${{ env.INSTALL_TEST_DIR }}/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU*:*FrontEndLoadFromTest.testLoadFromTwoStreams*:*FrontEndLoadFromTest.testLoadFromTwoFiles* \
# --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ONNXFrontend.xml
- name: TensorFlow Common tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_common_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowCommonFrontend.xml
- name: TensorFlow frontend tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_frontend_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowFrontend.xml
- name: TensorFlow Lite frontend tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowLiteFrontend.xml
- name: Transformations Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_transformations_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-Transformations.xml
- name: Common test utils tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_util_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-CommonUtilTests.xml
- name: CPU plugin unit tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_cpu_unit_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-CPUUnitTests.xml
# Disabled in Azure: https://github.com/openvinotoolkit/openvino/blob/master/.ci/azure/linux.yml#L409
# - name: GNA plugin unit tests
# run: |
# source ${{ env.INSTALL_DIR }}/setupvars.sh
# ${{ env.INSTALL_TEST_DIR }}/ov_gna_unit_tests --gtest_print_time=1 \
# --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-GNAUnitTests.xml
- name: AUTO UT
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_auto_unit_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_unit_tests.xml
- name: Template plugin tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_template_func_tests --gtest_print_time=1 \
--gtest_filter=*smoke* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TemplateFuncTests.xml
- name: Inference Engine C API tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/InferenceEngineCAPITests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceEngineCAPITests.xml
- name: OpenVINO C API tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_capi_test --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OpenVINOCAPITests.xml
- name: AutoBatch FuncTests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_auto_batch_func_tests --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_batch_func_tests.xml
- name: Proxy Plugin Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Func Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroFuncTests.xml
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-cpp
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
Python_Unit_Tests:
needs: Build
defaults:
run:
shell: bash
runs-on: ubuntu-22.04
env:
OPENVINO_REPO: ${{ github.workspace }}/openvino
OPENVINO_CONTRIB_REPO: ${{ github.workspace }}/openvino_contrib
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
SAMPLES_INSTALL_DIR: ${{ github.workspace }}/install/samples
LAYER_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/layer_tests
BUILD_DIR: ${{ github.workspace }}/build
DATA_PATH: ${{ github.workspace }}/testdata
MODELS_PATH: ${{ github.workspace }}/testdata
OV_TEMP: ${{ github.workspace }}/openvino_temp
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
- name: Create Directories
run: |
mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
- name: Clone OpenVINO
uses: actions/checkout@v3
with:
path: 'openvino'
submodules: 'recursive'
#
# Dependencies
#
- name: Install dependencies
run: |
sudo -E apt update
sudo -E apt --assume-yes install openjdk-11-jdk libbz2-dev clang unzip libpugixml-dev libtbb-dev intel-opencl-icd ocl-icd-opencl-dev opencl-headers
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install python dependencies
run: |
# For Python API
python3 -m pip install --upgrade pip
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
# For running Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running ONNX frontend unit tests
python3 -m pip install --force-reinstall -r ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/tensorflow/tests/requirements.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
- name: Install MO dependencies
run: |
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
tar -xzf openvino_package.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_package.tar.gz || exit 1
popd
pushd ${{ env.INSTALL_TEST_DIR }}
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
- name: Install Python wheels
run: |
python3 -m pip install openvino-dev --find-links=${{ env.INSTALL_DIR }}/tools
- name: nGraph and IE Python Bindings Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/pyngraph ${{ env.PYTHON_STATIC_ARGS }} \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyngraph.xml \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_backend.py
- name: Python API 2.0 Tests
run: |
# For python imports to import pybind_mock_frontend
export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/pyopenvino ${{ env.PYTHON_STATIC_ARGS }} \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyngraph.xml \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_utils/test_utils.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_backend.py
- name: Model Optimizer UT
run: |
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:${{ env.INSTALL_TEST_DIR }}:${{ env.INSTALL_DIR }}/python/python3.11:$PYTHONPATH
# TODO: figure out why they need to be reinstalled
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/mo/unit_tests \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-ModelOptimizer.xml
- name: PyTorch Layer Tests
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/pytorch_tests -m precommit --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-pytorch.xml
env:
TEST_DEVICE: CPU
- name: TensorFlow 1 Layer Tests - TF FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_fe.xml
env:
TEST_DEVICE: CPU
- name: TensorFlow 2 Layer Tests - TF FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow2_keras_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_fe.xml
env:
TEST_DEVICE: CPU
- name: TensorFlow 1 Layer Tests - Legacy FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/test_tf_Roll.py --ir_version=10 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_Roll.xml
- name: TensorFlow 2 Layer Tests - Legacy FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow2_keras_tests/test_tf2_keras_activation.py \
--ir_version=11 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_Activation.xml -k "sigmoid"
env:
TEST_DEVICE: CPU
- name: TensorFlow Lite Layer Tests - TFL FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_lite_tests/ --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tfl_fe.xml
env:
TEST_DEVICE: CPU
- name: MO Python API Tests
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/mo_python_api_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-test_mo_convert.xml
env:
TEST_DEVICE: CPU
- name: Python Frontend tests
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/py_frontend_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-test_py_fontend.xml
- name: Conversion UT
run: |
# For python imports to import pybind_mock_frontend
export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:$PYTHONPATH
source ${{ env.INSTALL_DIR }}/setupvars.sh
python3 -m pytest -s ${{ env.OPENVINO_REPO }}/tools/ovc/unit_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-OpenVinoConversion.xml
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-python
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
CPU_Functional_Tests:
needs: Build
defaults:
run:
shell: bash
runs-on: ubuntu-22.04
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
steps:
- name: Create Directories
run: mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
- name: Install dependencies
run: |
sudo -E apt update
sudo -E apt --assume-yes install openjdk-11-jdk libbz2-dev clang unzip libpugixml-dev libtbb-dev intel-opencl-icd ocl-icd-opencl-dev opencl-headers
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
tar -xzf openvino_package.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_package.tar.gz || exit 1
popd
pushd ${{ env.INSTALL_TEST_DIR }}
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
- name: Intel CPU plugin func tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_cpu_func_tests --gtest_print_time=1 --gtest_filter=*smoke* --gtest_output=xml:"${{ env.INSTALL_TEST_DIR }}/TEST-CPUFuncTests.xml"
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-functional-cpu
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'

@@ -52,10 +52,6 @@ jobs:
pip install -r requirements_dev.txt
working-directory: tools/mo
- - name: Pylint-MO
+ - name: Pylint
run: pylint -d C,R,W openvino/tools/mo
working-directory: tools/mo
- - name: Pylint-OVC
- run: pylint -d C,R,W openvino/tools/ovc
- working-directory: tools/ovc

@@ -151,16 +151,3 @@ jobs:
- name: Run Bandit
run: python -m bandit -r ./ -f screen
working-directory: src/bindings/python/src/compatibility/openvino
- # layer_tests Flake code-style
- - name: Run flake8 on python tests in openvino/tests/layer_tests
- run: |
- modified_files=$(git diff --name-only)
- for file in $modified_files; do
- if [[ $file == "openvino/tests/layer_tests/"* ]]; then
- if [[ -f "$file" ]]; then
- python -m flake8 "$file" --config=./setup.cfg
- fi
- fi
- done

@@ -1,25 +0,0 @@
name: 'Close stale issues and PRs'
on:
workflow_dispatch:
schedule:
- cron: '0 0 * * *'
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
with:
stale-issue-message: 'This issue will be closed in a week because of 9 months of no activity.'
stale-pr-message: 'This PR will be closed in a week because of 2 weeks of no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 9 months with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 2 weeks with no activity.'
days-before-pr-stale: 14
days-before-issue-stale: 274
days-before-close: 7
ascending: true
exempt-pr-labels: 'no_stale'
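The stale-bot timings above can be read as: an issue goes stale after 274 days (roughly 9 months), a PR after 14 days, and either closes 7 days later if still untouched. A small sketch of that arithmetic (the helper is illustrative):

```python
from datetime import date, timedelta

# Timings taken from the stale workflow above.
DAYS_BEFORE_ISSUE_STALE = 274  # roughly 9 months
DAYS_BEFORE_PR_STALE = 14
DAYS_BEFORE_CLOSE = 7

def close_date(last_activity, is_pr=False):
    """Earliest date the bot would close an item left untouched."""
    stale_after = DAYS_BEFORE_PR_STALE if is_pr else DAYS_BEFORE_ISSUE_STALE
    return last_activity + timedelta(days=stale_after + DAYS_BEFORE_CLOSE)
```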

@@ -1,710 +0,0 @@
name: Tests on Windows (VS 2022, Python 3.11)
on:
workflow_dispatch:
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-windows
cancel-in-progress: true
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: sccache
CMAKE_C_COMPILER_LAUNCHER: sccache
OPENVINO_REPO: "${{ github.workspace }}\\openvino"
OPENVINO_CONTRIB_REPO: "${{ github.workspace }}\\openvino_contrib"
INSTALL_DIR: "${{ github.workspace }}\\install"
INSTALL_TEST_DIR: "${{ github.workspace }}\\install\\tests"
SAMPLES_INSTALL_DIR: "${{ github.workspace }}\\install\\samples"
LAYER_TESTS_INSTALL_DIR: "${{ github.workspace }}\\install\\tests\\layer_tests"
BUILD_DIR: "${{ github.workspace }}\\build"
DATA_PATH: "${{ github.workspace }}\\testdata"
MODELS_PATH: "${{ github.workspace }}\\testdata"
OV_TEMP: "${{ github.workspace }}\\openvino_temp"
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
VCVARSPATH: "C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Auxiliary\\Build\\vcvarsall.bat"
jobs:
Build:
defaults:
run:
shell: pwsh
runs-on: windows-latest-8-cores
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
with:
path: 'openvino'
submodules: 'recursive'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- name: Clone testdata for C API tests
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
submodules: 'recursive'
lfs: 'true'
#
# Dependencies
#
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install python dependencies
run: |
# For Python API
python3 -m pip install Scons
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements_test.txt
# For running Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running ONNX frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/tensorflow/tests/requirements.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
- name: Install MO dependencies
run: |
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
- name: Install build dependencies
run: |
choco install --no-progress ninja
choco install --no-progress shellcheck
- name: Get tools versions
run: |
python3 --version
cmake --version
#
# Build
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
id: cpu-cores
- uses: ilammy/msvc-dev-cmd@v1
- name: Setup sccache
uses: hendrikmuhs/ccache-action@v1.2
with:
variant: sccache
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
key: ${{ github.job }}-windows
restore-keys: |
${{ github.job }}-windows
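The `save:` line above relies on the short-circuit ternary idiom of GitHub Actions expressions (`condition && 'true' || 'false'`). A shell equivalent, with a hypothetical value standing in for `github.ref_name`, is:

```shell
# Shell equivalent of:
#   save: ${ github.ref_name == 'master' && 'true' || 'false' }
ref_name="master"   # hypothetical value of github.ref_name
if [ "$ref_name" = "master" ]; then save=true; else save=false; fi
echo "save=$save"
```

The effect is that the sccache cache is written back only on pushes to master, while PR runs may still restore it via `restore-keys`.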
- name: CMake configure
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake -G "Ninja Multi-Config" `
-DENABLE_CPPLINT=OFF `
-DENABLE_ONEDNN_FOR_GPU=OFF `
-DBUILD_SHARED_LIBS=OFF `
-DENABLE_TESTS=ON `
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF `
-DENABLE_STRICT_DEPENDENCIES=OFF `
-DENABLE_PYTHON=ON `
-DBUILD_nvidia_plugin=OFF `
-DCMAKE_DISABLE_FIND_PACKAGE_PkgConfig=ON `
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" `
-DOPENVINO_EXTRA_MODULES=${{ env.OPENVINO_CONTRIB_REPO }}\modules `
-DCMAKE_BUILD_TYPE=Release `
-S ${{ env.OPENVINO_REPO }} `
-B ${{ env.BUILD_DIR }}
- name: Build
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config Release
- name: Install
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install Wheels
run: python3 -m pip install openvino-dev --find-links=${{ env.INSTALL_DIR }}\tools
- name: CMake Samples Tests
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake -S ${{ env.OPENVINO_REPO }}/tests/samples_tests -B ${{ env.BUILD_DIR }}/samples_tests
- name: Install Samples Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/samples_tests/cmake_install.cmake
- name: Install Tests
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -DCOMPONENT=tests -P ${{ env.BUILD_DIR }}\cmake_install.cmake
- name: Cmake Layer Tests
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake -S ${{ env.OPENVINO_REPO }}/tests/layer_tests -B ${{ env.BUILD_DIR }}/layer_tests
- name: Build Layer Tests
run: cmake --build ${{ env.BUILD_DIR }}/layer_tests --parallel --config Release
- name: Install Layer Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/layer_tests/cmake_install.cmake
- name: Pack Artifacts
run: |
$file=Get-ChildItem -Path "${{ env.INSTALL_DIR }}" -Exclude "tests"
$compress = @{
Path = $file
CompressionLevel = "Optimal"
DestinationPath = "${{ env.BUILD_DIR }}/openvino_package.zip"
}
Compress-Archive @compress
$file=Get-ChildItem -Path "${{ env.INSTALL_DIR }}\tests"
$compress = @{
Path = $file
CompressionLevel = "Optimal"
DestinationPath = "${{ env.BUILD_DIR }}/openvino_tests.zip"
}
Compress-Archive @compress
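The packaging step above splits the install tree into two archives: a main package that excludes `tests`, and a separate test package. The same split can be sketched portably with `tar` (all paths and file names below are hypothetical):

```shell
# Sketch of the two-archive packaging split, using tar instead of Compress-Archive
workdir=$(mktemp -d)
mkdir -p "$workdir/install/tools" "$workdir/install/tests"
touch "$workdir/install/tools/openvino.whl" "$workdir/install/tests/unit_test.exe"
# Main package: everything under install/ except the tests subtree
tar -C "$workdir" --exclude='install/tests' -cf "$workdir/openvino_package.tar" install
# Test package: only the tests subtree
tar -C "$workdir/install" -cf "$workdir/openvino_tests.tar" tests
tar -tf "$workdir/openvino_package.tar"
tar -tf "$workdir/openvino_tests.tar"
```

Splitting the archives this way lets the downstream test jobs download only what they need.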
- name: Build cpp samples
run: |
& "${{ env.VCVARSPATH }}" x64
& ${{ env.SAMPLES_INSTALL_DIR }}/cpp/build_samples_msvc.bat -i ${{ env.INSTALL_DIR }}
- name: Build c samples
run: |
& "${{ env.VCVARSPATH }}" x64
& ${{ env.SAMPLES_INSTALL_DIR }}/c/build_samples_msvc.bat -i ${{ env.INSTALL_DIR }}
- name: Samples tests
shell: cmd
run: |
python3 -m pip install --ignore-installed PyYAML -r ${{ env.INSTALL_TEST_DIR }}/smoke_tests/requirements.txt
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/smoke_tests --env_conf ${{ env.INSTALL_TEST_DIR }}/smoke_tests/env_config.yml --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-SamplesSmokeTests.xml
env:
IE_APP_PATH: ${{ env.INSTALL_DIR }}/samples_bin
IE_APP_PYTHON_PATH: ${{ env.INSTALL_DIR }}/samples/python
SHARE: ${{ env.INSTALL_TEST_DIR }}/smoke_tests/samples_smoke_tests_data
WORKSPACE: ${{ env.INSTALL_DIR }}
# Present in the "Build" job because these tests require the build directory
- name: ONNX frontend tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ONNXFrontend.xml
- name: List installed files
if: ${{ always() }}
run: |
Get-ChildItem -Recurse -Directory ${{ env.INSTALL_DIR }}
- name: Upload openvino package
uses: actions/upload-artifact@v3
with:
name: openvino_package
path: ${{ env.BUILD_DIR }}/openvino_package.zip
if-no-files-found: 'error'
- name: Upload openvino tests package
uses: actions/upload-artifact@v3
with:
name: openvino_tests
path: ${{ env.BUILD_DIR }}/openvino_tests.zip
if-no-files-found: 'error'
Python_Unit_Tests:
needs: Build
defaults:
run:
shell: pwsh
runs-on: windows-latest
env:
OPENVINO_REPO: "${{ github.workspace }}\\openvino"
OPENVINO_CONTRIB_REPO: "${{ github.workspace }}\\openvino_contrib"
INSTALL_DIR: "${{ github.workspace }}\\install"
INSTALL_TEST_DIR: "${{ github.workspace }}\\install\\tests"
SAMPLES_INSTALL_DIR: "${{ github.workspace }}\\install\\samples"
LAYER_TESTS_INSTALL_DIR: "${{ github.workspace }}\\install\\tests\\layer_tests"
BUILD_DIR: "${{ github.workspace }}\\build"
DATA_PATH: "${{ github.workspace }}\\testdata"
MODELS_PATH: "${{ github.workspace }}\\testdata"
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
- name: Create Directories
run: |
mkdir ${{ env.INSTALL_DIR }}
mkdir ${{ env.INSTALL_TEST_DIR }}
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
Expand-Archive openvino_package.zip -DestinationPath "${{ env.INSTALL_DIR }}"
popd
pushd ${{ env.INSTALL_TEST_DIR }}
Expand-Archive openvino_tests.zip -DestinationPath "${{ env.INSTALL_TEST_DIR }}"
popd
- name: Check extraction
run: |
ls "${{ github.workspace }}"
ls "${{ env.INSTALL_DIR }}"
ls "${{ env.INSTALL_TEST_DIR }}"
- name: Clone OpenVINO
uses: actions/checkout@v3
with:
path: 'openvino'
submodules: 'recursive'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install python dependencies
run: |
# For Python API
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
# For running Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running ONNX frontend unit tests
python3 -m pip install --force-reinstall -r ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/tensorflow/tests/requirements.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
- name: Install MO dependencies
run: |
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt `
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
- name: Install Python wheels
run: |
python3 -m pip install openvino-dev --force-reinstall --find-links=${{ env.INSTALL_DIR }}\tools
- name: nGraph and IE Python Bindings Tests
shell: cmd
run: |
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/pyngraph ${{ env.PYTHON_STATIC_ARGS }} --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyngraph.xml --ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_zoo_models.py --ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_backend.py
- name: Python API 2.0 Tests
shell: cmd
run: |
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/pyopenvino ${{ env.PYTHON_STATIC_ARGS }} --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyopenvino.xml --ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_utils/test_utils.py --ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_zoo_models.py --ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_backend.py
- name: Model Optimizer UT
shell: cmd
run: |
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};${{ env.INSTALL_TEST_DIR }};${{ env.INSTALL_DIR }}\python\python3.11;%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/mo/unit_tests --ignore=${{ env.INSTALL_TEST_DIR }}/mo/unit_tests/mo/front/mxnet --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-ModelOptimizer.xml
# Ticket - 115085
# - name: PyTorch Layer Tests
# shell: cmd
# run: |
#
# python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt ^
# -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt ^
# -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt ^
# -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt ^
# -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt ^
# -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
#
# python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
#
# set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
#
# call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/pytorch_tests -m precommit --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-pytorch.xml
# env:
# TEST_DEVICE: CPU
- name: TensorFlow 1 Layer Tests - TF FE
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_fe.xml
env:
TEST_DEVICE: CPU
- name: TensorFlow 2 Layer Tests - TF FE
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow2_keras_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_fe.xml
env:
TEST_DEVICE: CPU
- name: TensorFlow 1 Layer Tests - Legacy FE
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/test_tf_Roll.py --ir_version=10 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_Roll.xml
- name: TensorFlow 2 Layer Tests - Legacy FE
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow2_keras_tests/test_tf2_keras_activation.py --ir_version=11 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_Activation.xml -k "sigmoid"
env:
TEST_DEVICE: CPU
- name: TensorFlow Lite Layer Tests - TFL FE
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt ^
-r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_lite_tests/ --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tfl_fe.xml
env:
TEST_DEVICE: CPU
- name: MO Python API Tests
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/mo_python_api_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-test_mo_convert.xml
env:
TEST_DEVICE: CPU
- name: Python Frontend tests
if: ${{ always() }}
shell: cmd
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
set PYTHONPATH=${{ env.OPENVINO_REPO }}\tools\mo;${{ env.LAYER_TESTS_INSTALL_DIR }};%PYTHONPATH%
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/py_frontend_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-test_py_fontend.xml
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-python
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
CXX_Unit_Tests:
needs: Build
defaults:
run:
shell: pwsh
runs-on: windows-latest
env:
INSTALL_DIR: "${{ github.workspace }}\\install"
INSTALL_TEST_DIR: "${{ github.workspace }}\\install\\tests"
steps:
- name: Create Directories
run: |
mkdir ${{ env.INSTALL_DIR }}
mkdir ${{ env.INSTALL_TEST_DIR }}
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
Expand-Archive openvino_package.zip -DestinationPath "${{ env.INSTALL_DIR }}"
popd
pushd ${{ env.INSTALL_TEST_DIR }}
Expand-Archive openvino_tests.zip -DestinationPath "${{ env.INSTALL_TEST_DIR }}"
popd
- name: Check extraction
run: |
ls "${{ github.workspace }}"
ls "${{ env.INSTALL_DIR }}"
ls "${{ env.INSTALL_TEST_DIR }}"
- name: OpenVINO Core unit tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-NGraphUT.xml
- name: OpenVINO Inference functional tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_inference_functional_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceFunc.xml
- name: OpenVINO Inference unit tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_inference_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceUnit.xml
- name: Low Precision Transformations Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_lp_transformations_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-LpTransformations.xml
- name: OpenVINO Conditional compilation tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_conditional_compilation_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ConditionalCompilation.xml
- name: IR frontend tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_ir_frontend_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-IRFrontend.xml
# - name: PaddlePaddle frontend tests # Disabled in Azure: https://github.com/openvinotoolkit/openvino/blob/master/.ci/azure/linux.yml#L403
# shell: cmd
# run: |
# call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/paddle_tests --gtest_print_time=1 --gtest_filter=*smoke* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-PaddleTests.xml
# - name: ONNX frontend tests # Present in the "Build" job because these tests require the build directory
# shell: cmd
# run: |
# call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ONNXFrontend.xml
- name: TensorFlow Common tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_common_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowCommonFrontend.xml
- name: TensorFlow frontend tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowFrontend.xml
- name: TensorFlow Lite frontend tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowLiteFrontend.xml
- name: Transformations Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_transformations_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-Transformations.xml
- name: Common test utils tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_util_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-commonUtilsTests.xml
- name: CPU plugin unit tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_cpu_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-CPUUnitTests.xml
# - name: GNA plugin unit tests # Disabled in Azure: https://github.com/openvinotoolkit/openvino/blob/master/.ci/azure/linux.yml#L434
# shell: cmd
# run: |
# call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_gna_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-GNAUnitTests.xml
- name: AUTO UT
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_auto_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_unit_tests.xml
- name: Template plugin tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_template_func_tests --gtest_print_time=1 --gtest_filter=*smoke* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TemplateFuncTests.xml
- name: Inference Engine C API tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/InferenceEngineCAPITests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceEngineCAPITests.xml
- name: OpenVINO C API tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_capi_test --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OpenVINOCAPITests.xml
- name: AutoBatch FuncTests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_auto_batch_func_tests --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_batch_func_tests.xml
- name: Proxy Plugin Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Func Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroFuncTests.xml
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-cpp
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
CPU_Functional_Tests:
needs: Build
defaults:
run:
shell: pwsh
runs-on: windows-latest
env:
INSTALL_DIR: "${{ github.workspace }}\\install"
INSTALL_TEST_DIR: "${{ github.workspace }}\\install\\tests"
steps:
- name: Create Directories
run: |
mkdir ${{ env.INSTALL_DIR }}
mkdir ${{ env.INSTALL_TEST_DIR }}
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
Expand-Archive openvino_package.zip -DestinationPath "${{ env.INSTALL_DIR }}"
popd
pushd ${{ env.INSTALL_TEST_DIR }}
Expand-Archive openvino_tests.zip -DestinationPath "${{ env.INSTALL_TEST_DIR }}"
popd
- name: Check extraction
run: |
ls "${{ github.workspace }}"
ls "${{ env.INSTALL_DIR }}"
ls "${{ env.INSTALL_TEST_DIR }}"
- name: Intel CPU plugin func tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_cpu_func_tests --gtest_print_time=1 --gtest_filter=*smoke* --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-CPUFuncTests.xml
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-functional-cpu
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'

.gitignore

@@ -13,7 +13,6 @@ cmake-build*
*.idea
.vscode
.vs/
.vsconan/
.DS_Store
**/tags
compile_commands.json

.gitmodules

@@ -72,6 +72,3 @@
[submodule "ARMComputeLibrary"]
path = src/plugins/intel_cpu/thirdparty/ComputeLibrary
url = https://github.com/ARM-software/ComputeLibrary.git
[submodule "src/plugins/intel_cpu/thirdparty/mlas"]
path = src/plugins/intel_cpu/thirdparty/mlas
url = https://github.com/openvinotoolkit/mlas.git


@@ -3,11 +3,10 @@
#
if(DEFINED BUILD_SHARED_LIBS AND NOT BUILD_SHARED_LIBS)
# 3.17: 'target_link_libraries' does not work correctly when called from
# 'target_link_libraries' does not work correctly when called from
# different directory where 'add_library' is called: CMake generates
# incorrect OpenVINOConfig.cmake in this case
# 3.18: add_library cannot create ALIAS for non-GLOBAL targets
cmake_minimum_required(VERSION 3.18)
cmake_minimum_required(VERSION 3.17)
else()
if(CPACK_GENERATOR STREQUAL "DEB")
# we have to use CPACK_DEBIAN_PACKAGE_SHLIBDEPS_PRIVATE_DIRS variable
@@ -69,22 +68,16 @@ if(NOT OV_GLIBC_VERSION VERSION_EQUAL 0.0)
message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
endif()
# remove file with exported targets to force its regeneration
# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/ngraphTargets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/InferenceEngineTargets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
# remove exported developer targets to force its regeneration
macro(ov_clean_dev_targets)
foreach(component IN LISTS openvino_export_components)
file(REMOVE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
unset(${component} CACHE)
endforeach()
unset(openvino_export_components CACHE)
unset(openvino_installed_targets CACHE)
endmacro()
ov_clean_dev_targets()
foreach(component IN LISTS openvino_export_components)
file(REMOVE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
unset(${component} CACHE)
endforeach()
unset(openvino_export_components CACHE)
#
# Build
@@ -127,10 +120,10 @@ if (ENABLE_TESTS)
include(cmake/test_model_zoo.cmake)
endif()
include(thirdparty/dependencies.cmake)
add_subdirectory(thirdparty)
add_subdirectory(src)
if(ENABLE_SAMPLES OR ENABLE_TESTS)
if(ENABLE_SAMPLES OR ENABLE_TESTS OR ENABLE_COMPILE_TOOL)
add_subdirectory(samples)
endif()


@@ -1,53 +1,88 @@
# How to contribute to the OpenVINO repository
# Contributing to OpenVINO
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, follow best practices for pull requests, and test your changes with our established checks.
## How to contribute to the OpenVINO project
OpenVINO™ is always looking for opportunities to improve and your contributions
play a big role in this process. There are several ways you can make the
product better:
## Before you start contributing you should
### Provide Feedback
- Make sure you agree to contribute your code under the [OpenVINO™ (Apache 2.0) license](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE).
- Decide what you're going to contribute. If you are not sure what to work on, check out [Contributions Welcome](https://github.com/openvinotoolkit/openvino/issues/17502). Check whether anyone is already working on the subject you choose; if so, you may still contribute by providing support and suggestions for the given issue or pull request.
- If you are going to fix a bug, check that it still exists. You can do this by building the latest master branch and making sure the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, such as 2020.2 (see more details about our [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
* **Report bugs / issues**
If you experience faulty behavior in OpenVINO or its components, you can
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
in the GitHub issue tracker.
* **Propose new features / improvements**
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
If your idea is already well defined, you can also create a
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
If your points are especially well aligned with the product vision, they will be included in the
[development roadmap](./ROADMAP.md).
User feedback is crucial for OpenVINO development and even if your input is not immediately prioritized,
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
### Contribute Code Changes
* **Fix Bugs or Develop New Features**
If you want to help improve OpenVINO, choose one of the issues reported in
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
If the feature you want to develop is more complex or not well defined by the reporter,
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
with OpenVINO developers first. Before creating a new PR, check whether anyone is already
working on it. If someone is, you may still help, after aligning with the other developer.
Importantly, always check whether the change has already been implemented before you start working on it!
You can build OpenVINO using the latest master branch and make sure that it still needs your
changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
* **Develop a New Device Plugin**
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
### Improve Documentation
* **OpenVINO developer documentation** is contained entirely in this repository, under the
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
* **User documentation** is built from several sources and published at
[docs.openvino.ai](https://docs.openvino.ai), which is the recommended place for reading
these documents. Use the files maintained in this repository only for editing purposes.
* The easiest way to help with documentation is to review it and provide feedback on the
existing articles. Whether you notice a mistake, see the possibility of improving the text,
or think more information should be added, you can reach out to any of the documentation
contributors to discuss the potential changes.
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
### Promote and Support OpenVINO
* **Popularize OpenVINO**
Articles, tutorials, blog posts, demos, videos, and any other involvement
in the OpenVINO community are always a welcome contribution. If you discuss
or present OpenVINO on social platforms, you raise awareness of the product
among AI enthusiasts and enable other people to discover the toolkit.
Feel free to reach out to OpenVINO developers if you need help with making
such community-based content.
## "Fork & Pull Request" model for code contribution
### The instruction in brief
- Register at GitHub. Create your fork of the OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in the Git configuration according to your GitHub account (see [First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It may be a bugfix or an entirely new piece of code.
- Choose a base branch for your work. See [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches) for details on branches and policies.
- Clone your fork to your computer.
- Create a new branch (give it a meaningful name) from the base branch of your choice.
- Modify or add code, following our [Coding Style Guide](./docs/dev/coding_style.md).
- If you want to add a new sample, have a look at the [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute).
- If you want to add a new documentation guide, follow the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation).
- Run the test suite locally: execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`.
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`). Then push your branch to your GitHub fork and create a pull request from your branch to the base branch (see [using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
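The brief instruction above can be sketched as a runnable shell session. This is a simulation under stated assumptions: a local bare repository at `/tmp/ov-fork.git` stands in for the GitHub fork, and the branch and file names are placeholders:

```shell
# A local bare repository stands in for your GitHub fork of OpenVINO.
rm -rf /tmp/ov-fork.git /tmp/ov-work
git init --bare -q /tmp/ov-fork.git

# Clone the "fork" and configure your identity (the First-Time Git Setup step).
git clone -q /tmp/ov-fork.git /tmp/ov-work
cd /tmp/ov-work
git config user.name  "Your Name"
git config user.email "you@example.com"

# Create the base history, then a meaningfully named working branch.
git commit -q --allow-empty -m "Initial commit"
git checkout -q -b fix-readme-typo

# Modify the code, commit, and push the branch back to the fork;
# the pull request is then opened from this branch on GitHub.
echo "OpenVINO" > README.md
git add README.md
git commit -q -m "Fix typo in README"
git push -q origin fix-readme-typo
```

From there, GitHub offers to open a pull request from `fix-readme-typo` against the base branch you chose.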
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- One PR addresses one issue.
- Make sure your change builds cleanly on your local system.
- Choose the right base branch, based on our [Branch Guidelines](https://github.com/openvinotoolkit/openvino/wiki/Branches).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Document your contribution, if you decide it may benefit OpenVINO users. You may do it yourself by editing the files in the "docs" directory, or contact someone working on documentation to provide them with the right information.
- Cover your changes with tests.
- Add the license statement at the top of new files [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add proper information to the PR: a meaningful title, the reason why you made the commit, and a link to the issue page, if it exists.
- Remove changes unrelated to the PR.
- If it is still WIP and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
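The license statement mentioned above is a two-line SPDX header (the copyright line plus the `SPDX-License-Identifier`), as seen at the top of the linked examples. A minimal sketch for a new Python file; the file name is illustrative:

```shell
# Start every new source file with the SPDX license header.
cat > /tmp/hello_sample.py <<'EOF'
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

print("OpenVINO sample")
EOF

# The header must be the very first lines of the file.
head -2 /tmp/hello_sample.py
```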
* **Help Other Community Members**
If you are an experienced OpenVINO user and want to help, you can always
share your expertise with the community. Check GitHub Discussions and
Issues to see if you can help someone.
## Testing and merging pull requests
Your pull request will be automatically tested by OpenVINO™'s precommit (testing statuses are reported as "green" or "red" circles in the precommit steps on the PR page). If any builders fail, you need to fix the issues before the PR can be merged. If you push any changes to your branch on GitHub, the tests will re-run automatically; there is no need to close the pull request and open a new one.
When an assigned reviewer accepts the pull request and the pre-commit is "green", the review status is set to "Approved", which informs OpenVINO™ maintainers that they can merge your pull request.
## License
By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.

CONTRIBUTING_DOCS.md Normal file
# OpenVINO Documentation Guide
## Basic article structure
OpenVINO documentation is built with Sphinx from reStructuredText sources,
so the following basic formatting rules apply:
### White Spaces
OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.
### Headings (including the article title)
Headings are made by underlining the text with punctuation marks (use at least as
many marks as there are characters in the heading). We use the following convention:
```
H1
====================
H2
####################
H3
++++++++++++++++++++
H4
--------------------
H5
....................
```
### Line length
In programming, a limit of 80 characters per line is a common best-known method. It also
applies fairly well to reading natural languages. For this reason, we aim for lines of
around 70 to 100 characters. The limit is not a strict rule but rather a guideline to
follow in most cases. The breaks do not translate to html, and rightly so, but they
make reading and editing documents on GitHub or in an editor much easier.
### Tables
Tables may be difficult to implement well on websites. For example, longer portions
of text, such as descriptions, may make them difficult to read (e.g. improper cell
widths or heights). Complex tables may also be hard to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:
* For very big and complex data sets: use a list instead of a table or remove
the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
use the reStructuredText list table format.
* For medium and small tables, use the reStructuredText grid or simple table formats.
## Cross-linking
There are several directives Sphinx uses for linking, each has its purpose and format.
Follow these guidelines for consistent results:
* Avoid absolute references to internal documents as much as possible (link to source, not html).
* Note that Sphinx uses the "back-tick" character and not the "inverted-comma" => ` vs. '
* When a file path relative to the current directory is used, put "./" at its beginning.
* Always add a space before the opening angle bracket ("<") for target files.
Use the following formatting for different links:
* link to an external page / file
* `` `text <url>`__ ``
* use a double underscore for consistency
* link to an internal documentation page / file
* `` :doc:`a docs page <relative file path>` ``
* Link to an rst or md file within our documentation, so that it renders properly in html
* link to a header on the same page
* `` `a header in the same article <this-is-section-header-title>`__ ``
* anchors are created automatically for all existing headers
* such anchor looks like the header, with minor adjustments:
* all letters are lower case,
* remove all special glyphs, like brackets,
* replace spaces with hyphens
* Create an anchor in an article
* `` .. _anchor-in-the-target-article: ``
* put it before the header to which you want to link
* See the rules for naming anchors / labels at the bottom of this article
* link to an anchor on a different page in our documentation
* `` :ref:`the created anchor <anchor-in-the-target-article>` ``
* link to the anchor using just its name
* anchors / labels
Sphinx uses labels to create html anchors, which can be linked to from anywhere in documentation.
Although they may be put at the top of any article to make linking to it very easy, we do not use
this approach. Every label definition starts with an underscore, the underscore is not used in links.
Most importantly, every label needs to be globally unique, so it is good
practice to start a label with a clear identifier of the article it resides in.
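Because every label must be globally unique, it helps to audit the existing label definitions before adding a new one. A sketch on a throwaway fixture; the directory and label names below are made up for illustration:

```shell
# A tiny fixture: two rst files, each defining an article-scoped label.
rm -rf /tmp/docs-demo && mkdir -p /tmp/docs-demo
printf '.. _install-guide-overview:\n\nOverview\n========\n' > /tmp/docs-demo/install.rst
printf '.. _build-guide-overview:\n\nOverview\n========\n' > /tmp/docs-demo/build.rst

# List every label definition in one place...
grep -rh '^\.\. _' /tmp/docs-demo | sort

# ...and flag clashes: any output here means two files define the same label.
grep -rh '^\.\. _' /tmp/docs-demo | sort | uniq -d
```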

CONTRIBUTING_PR.md Normal file
# How to Prepare a Good PR
OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.
## General Rules of a Good Pull Request
* Create your own fork of the repository and use it to create PRs.
Avoid creating change branches in the main repository.
* Choose a proper branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
It helps to track changes later. If your changes cover a particular component,
you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
OpenVINO, provide the information in proper articles. You can do it yourself,
or contact one of OpenVINO documentation contributors to work together on
developing the right content.
* For Work In Progress, or checking test results early, use a Draft PR.
## Ensure Change Quality
Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch
automatically triggers the checks, so you don't need to recreate the PR; just wait
for the updated results.
Regardless of the automated tests, you should ensure the quality of your changes:
* Test your changes locally:
* Make sure to double-check your code.
* Run tests locally to identify and fix potential issues (execute test binaries
from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``)
* Before creating a PR, make sure that your branch is up to date with the latest
state of the branch you want to contribute to
(e.g. ``git fetch upstream && git merge upstream/master``).
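The sync step above can be exercised end-to-end with throwaway local repositories. In this sketch, `/tmp/ov-upstream` stands in for the upstream OpenVINO repository, and all branch and path names are placeholders:

```shell
# A local repository stands in for upstream; seed it with a master branch.
rm -rf /tmp/ov-upstream /tmp/ov-clone
git init -q /tmp/ov-upstream
git -C /tmp/ov-upstream config user.name "Upstream"
git -C /tmp/ov-upstream config user.email "up@example.com"
git -C /tmp/ov-upstream commit -q --allow-empty -m "upstream: initial commit"
git -C /tmp/ov-upstream branch -M master

# Your working clone, with the upstream remote and a feature branch.
git clone -q /tmp/ov-upstream /tmp/ov-clone
cd /tmp/ov-clone
git config user.name "You"
git config user.email "you@example.com"
git remote add upstream /tmp/ov-upstream
git checkout -q -b my-feature
git commit -q --allow-empty -m "my change"

# Upstream moves forward while you work...
git -C /tmp/ov-upstream commit -q --allow-empty -m "upstream: new work"

# ...so bring the feature branch up to date before opening the PR.
git fetch -q upstream
git merge -q --no-edit upstream/master
```

After the merge, the feature branch contains all of upstream's latest commits, so the PR diff shows only your change.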
## Branching Policy
* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``.
* The final release each year is considered a Long Term Support version,
which means it remains active.
* Contributions are accepted only into active branches, which are:
* the "master" branch for future releases,
* the most recently published version for fixes,
* LTS versions (for two years from their release dates).
## Need Additional Help? Check these Articles
* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, have a look at the Guide for contributing
to C++/C/Python IE samples, and add the license statement at the top of new
files (see the existing C++ and Python examples in the repository).


@@ -1,4 +1,5 @@
<div align="center">
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
@@ -6,8 +7,9 @@
[![brew Status](https://img.shields.io/homebrew/v/openvino)](https://formulae.brew.sh/formula/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
[![Anaconda Downloads](https://anaconda.org/conda-forge/libopenvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
[![Anaconda Downloads](https://anaconda.org/conda-forge/openvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
[![brew Downloads](https://img.shields.io/homebrew/installs/dy/openvino)](https://formulae.brew.sh/formula/openvino)
</div>
## Contents:
@@ -103,7 +105,7 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
@@ -166,9 +168,7 @@ See [How to build OpenVINO](./docs/dev/build.md) to get more information about t
## How to contribute
See [Contributions Welcome](https://github.com/openvinotoolkit/openvino/issues/17502) for good first issues.
See [CONTRIBUTING](./CONTRIBUTING.md) for contribution details. Thank you!
See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Get support


@@ -9,3 +9,67 @@ set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_STRIP arm-linux-gnueabihf-strip)
set(PKG_CONFIG_EXECUTABLE arm-linux-gnueabihf-pkg-config CACHE PATH "Path to ARM pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()


@@ -9,3 +9,67 @@ set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(CMAKE_STRIP aarch64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE aarch64-linux-gnu-pkg-config CACHE PATH "Path to ARM64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()


@@ -32,24 +32,21 @@ if(THREADING STREQUAL "OMP")
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*"
SHA256 "62c68646747fb10f19b53217cb04a1e10ff93606f992e6b35eb8c31187c68fbf"
USE_NEW_LOCATION TRUE)
SHA256 "62c68646747fb10f19b53217cb04a1e10ff93606f992e6b35eb8c31187c68fbf")
elseif(LINUX AND X86_64)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_LIN "iomp.tgz"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*"
SHA256 "7832b16d82513ee880d97c27c7626f9525ebd678decf6a8fe6c38550f73227d9"
USE_NEW_LOCATION TRUE)
SHA256 "7832b16d82513ee880d97c27c7626f9525ebd678decf6a8fe6c38550f73227d9")
elseif(APPLE AND X86_64)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_MAC "iomp_20190130_mac.tgz"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*"
SHA256 "591ea4a7e08bbe0062648916f42bded71d24c27f00af30a8f31a29b5878ea0cc"
USE_NEW_LOCATION TRUE)
SHA256 "591ea4a7e08bbe0062648916f42bded71d24c27f00af30a8f31a29b5878ea0cc")
else()
message(FATAL_ERROR "Intel OMP is not available on current platform")
endif()
@@ -111,23 +108,21 @@ function(ov_download_tbb)
ARCHIVE_ANDROID "tbb2020_20200404_android.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce"
USE_NEW_LOCATION TRUE)
SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce")
elseif(LINUX AND X86_64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-canary.tgz"
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "3a2c2ec79b3cce7e6a2484754ba6f029fa968db2eefc6659540792b7db8fea0c"
SHA256 "0a56f73baaa40d72e06949ea6d593ae63a19f7580ce71c08287c1f59d2e5b988"
USE_NEW_LOCATION TRUE)
elseif(YOCTO_AARCH64)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "keembay/tbb2020_38404_kmb_lic.tgz"
TARGET_PATH "${TEMP}/tbb_yocto"
ENVIRONMENT "TBBROOT"
SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b"
USE_NEW_LOCATION TRUE)
SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b")
elseif(APPLE AND X86_64)
# build oneTBB 2021.2.1 with OS version 11.4
RESOLVE_DEPENDENCY(TBB
@@ -147,10 +142,10 @@ function(ov_download_tbb)
elseif(LINUX AND AARCH64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-arm64-canary.tgz"
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "042fdac53be65841a970b05d892f4b20b556b06fd3b20d2d0068e49c4fd74f07"
SHA256 "6b87194a845aa9314f3785d842e250d934e545eccc4636655c7b27c98c302c0c"
USE_NEW_LOCATION TRUE)
elseif(APPLE AND AARCH64)
# build oneTBB 2021.2.1 with export MACOSX_DEPLOYMENT_TARGET=11.0


@@ -8,12 +8,6 @@ if(NOT DEFINED IEDevScripts_DIR)
message(FATAL_ERROR "IEDevScripts_DIR is not defined")
endif()
macro(ov_set_if_not_defined var value)
if(NOT DEFINED ${var})
set(${var} ${value})
endif()
endmacro()
set(OLD_CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH})
set(CMAKE_MODULE_PATH "${IEDevScripts_DIR}")
@@ -77,8 +71,23 @@ endfunction()
# For cross-compilation
#
include(cross_compile/find_commands)
include(cross_compile/native_compile)
# Search packages for the host system instead of packages for the target system
# in case of cross compilation these macros should be defined by the toolchain file
if(NOT COMMAND find_host_package)
macro(find_host_package)
find_package(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_library)
macro(find_host_library)
find_library(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
find_program(${ARGN})
endmacro()
endif()
#
# Common scripts
@@ -157,6 +166,12 @@ else()
endif()
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
macro(ov_set_if_not_defined var value)
if(NOT DEFINED ${var})
set(${var} ${value})
endif()
endmacro()
ov_set_if_not_defined(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
ov_set_if_not_defined(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
ov_set_if_not_defined(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
@@ -164,12 +179,14 @@ ov_set_if_not_defined(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
ov_set_if_not_defined(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
if(CPACK_GENERATOR MATCHES "^(DEB|RPM)$")
# to make sure that lib/<multiarch-triplet> is created on Debian
# to make sure that lib/<multiarch-tuple> is created on Debian
set(CMAKE_INSTALL_PREFIX "/usr" CACHE PATH "Cmake install prefix" FORCE)
endif()
include(packaging/packaging)
set(CMAKE_SKIP_INSTALL_RPATH ON)
if(APPLE)
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH ON)
@@ -179,6 +196,10 @@ if(APPLE)
message(FATAL_ERROR "Internal error: OV_CPACK_LIBRARYDIR is not defined, while it's required to initialize RPATH")
endif()
if(CPACK_GENERATOR STREQUAL "BREW")
set(CMAKE_SKIP_INSTALL_RPATH OFF)
endif()
# WA for Xcode generator + object libraries issue:
# https://gitlab.kitware.com/cmake/cmake/issues/20260
# http://cmake.3232098.n2.nabble.com/XCODE-DEPEND-HELPER-make-Deletes-Targets-Before-and-While-They-re-Built-td7598277.html
@@ -226,6 +247,17 @@ endif()
# General flags
macro(ov_install_static_lib target comp)
if(NOT BUILD_SHARED_LIBS)
get_target_property(target_type ${target} TYPE)
if(target_type STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})
endif()
endmacro()
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads REQUIRED)
@@ -280,6 +312,7 @@ function(ov_mark_target_as_cc)
endfunction()
include(python_requirements)
include(native_compile)
# Code style utils


@@ -163,22 +163,7 @@ function(addIeTargetTest)
addIeTarget(TYPE EXECUTABLE NAME ${ARG_NAME} ${ARG_UNPARSED_ARGUMENTS})
if(EMSCRIPTEN)
set(JS_BIN_NAME "${ARG_NAME}.js")
set(JS_APP_NAME "${ARG_NAME}_js.js")
set(JS_TEST_APP "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/${JS_APP_NAME}")
file(WRITE ${JS_TEST_APP} "// Copyright (C) 2018-2023 Intel Corporation\n")
file(APPEND ${JS_TEST_APP} "// SPDX-License-Identifier: Apache-2.0\n")
file(APPEND ${JS_TEST_APP} "//\n")
file(APPEND ${JS_TEST_APP} "// JS test app\n")
file(APPEND ${JS_TEST_APP} "const createModule = require(\"./${JS_BIN_NAME}\");\n")
file(APPEND ${JS_TEST_APP} "createModule().then(function(Module) {});\n")
file(APPEND ${JS_TEST_APP} " ")
# node version>= 16.8.0, else need add "--experimental-wasm-threads --experimental-wasm-bulk-memory" option
add_test(NAME ${ARG_NAME} COMMAND node ${JS_TEST_APP})
else()
add_test(NAME ${ARG_NAME} COMMAND ${ARG_NAME})
endif()
add_test(NAME ${ARG_NAME} COMMAND ${ARG_NAME})
set_property(TEST ${ARG_NAME} PROPERTY LABELS ${ARG_LABELS})
install(TARGETS ${ARG_NAME}


@@ -54,8 +54,6 @@ macro(ov_deprecated_no_errors)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
# Suppress #warning messages
set(ie_c_cxx_deprecated_no_errors "${ie_c_cxx_deprecated_no_errors} -Wno-cpp")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -75,7 +73,7 @@ macro(ov_dev_package_no_errors)
if(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_dev_no_errors "-Wno-all")
if(SUGGEST_OVERRIDE_SUPPORTED)
set(ie_cxx_dev_no_errors "-Wno-error=suggest-override")
set(ie_cxx_dev_no_errors "${ie_c_cxx_dev_no_errors} -Wno-error=suggest-override")
endif()
endif()
@@ -252,21 +250,7 @@ function(ov_force_include target scope header_file)
endif()
endfunction()
#
# ov_abi_free_target(<target name>)
#
# Marks the target to be compiled in a CXX-ABI-free manner
#
function(ov_abi_free_target target)
# To guarantee OpenVINO can be used with gcc versions 7 through 12.2
# - https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
# - https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
if(CMAKE_COMPILER_IS_GNUCXX AND NOT MINGW64)
target_compile_options(${target} PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-Wabi=11>)
endif()
endfunction()
#
#
# ie_python_minimal_api(<target>)
#
# Set options to use only Python Limited API
@@ -318,18 +302,6 @@ elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
endif()
file(RELATIVE_PATH OV_RELATIVE_BIN_PATH ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_SOURCE_DIR})
if(${CMAKE_VERSION} VERSION_LESS "3.20")
file(TO_NATIVE_PATH ${OpenVINO_SOURCE_DIR} OV_NATIVE_PROJECT_ROOT_DIR)
file(TO_NATIVE_PATH ${OV_RELATIVE_BIN_PATH} NATIVE_OV_RELATIVE_BIN_PATH)
else()
cmake_path(NATIVE_PATH OpenVINO_SOURCE_DIR OV_NATIVE_PROJECT_ROOT_DIR)
cmake_path(NATIVE_PATH OV_RELATIVE_BIN_PATH NATIVE_OV_RELATIVE_BIN_PATH)
endif()
file(RELATIVE_PATH OV_NATIVE_PARENT_PROJECT_ROOT_DIR "${OpenVINO_SOURCE_DIR}/.." ${OpenVINO_SOURCE_DIR})
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# Common options / warnings enabled
@@ -378,11 +350,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
# Enable __FILE__ trim, use path with forward and backward slash as directory separator
add_compile_options(
"$<$<COMPILE_LANGUAGE:CXX>:/d1trimfile:${OV_NATIVE_PROJECT_ROOT_DIR}\\>"
"$<$<COMPILE_LANGUAGE:CXX>:/d1trimfile:${OpenVINO_SOURCE_DIR}/>")
#
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
@@ -425,7 +392,7 @@ elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
ie_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
else()
@@ -443,26 +410,6 @@ else()
# Warn if an undefined identifier is evaluated in an #if directive. Such identifiers are replaced with zero.
ie_add_compiler_flags(-Wundef)
# To guarantee OpenVINO can be used with gcc versions 7 through 12
# - https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
# - https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
if(CMAKE_COMPILER_IS_GNUCXX)
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "8")
# Enable __FILE__ trim only for release mode
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ffile-prefix-map=${OV_NATIVE_PROJECT_ROOT_DIR}/= -ffile-prefix-map=${OV_RELATIVE_BIN_PATH}/=")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ffile-prefix-map=${OV_NATIVE_PROJECT_ROOT_DIR}/= -ffile-prefix-map=${OV_RELATIVE_BIN_PATH}/=")
endif()
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wabi=11")
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "10")
# Enable __FILE__ trim only for release mode
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ffile-prefix-map=${OV_NATIVE_PROJECT_ROOT_DIR}/= -ffile-prefix-map=${OV_RELATIVE_BIN_PATH}/=")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ffile-prefix-map=${OV_NATIVE_PROJECT_ROOT_DIR}/= -ffile-prefix-map=${OV_RELATIVE_BIN_PATH}/=")
endif()
endif()
#
# Warnings as errors
#
@@ -506,11 +453,6 @@ else()
endif()
endif()
add_compile_definitions(
# Defines to trim check of __FILE__ macro in case if not done by compiler.
OV_NATIVE_PARENT_PROJECT_ROOT_DIR="${OV_NATIVE_PARENT_PROJECT_ROOT_DIR}")
check_cxx_compiler_flag("-Wsuggest-override" SUGGEST_OVERRIDE_SUPPORTED)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "-Wsuggest-override ${CMAKE_CXX_FLAGS}")


@@ -15,8 +15,9 @@ if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG OR
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
endif()
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()
if(CMAKE_COMPILER_IS_GNUCXX)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")


@@ -1,160 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Search packages for the host system instead of packages for the target system
# in case of cross compilation these macros should be defined by the toolchain file
if(CMAKE_CROSSCOMPILING AND NOT (OV_ARCH STREQUAL OV_HOST_ARCH AND
CMAKE_SYSTEM_NAME STREQUAL CMAKE_HOST_SYSTEM_NAME))
# don't look at directories which are part of PATH (with removed bin / sbin at the end)
# like /opt/homebrew on macOS where we cannot use system env path, because brew's
# dependencies will be found, but at the same time we need to find flatbuffers and
# other build system dependencies
# ov_set_if_not_defined(CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH OFF)
ov_set_if_not_defined(CMAKE_FIND_USE_SYSTEM_PACKAGE_REGISTRY OFF)
# it contains /usr and if we set this var to OFF, then CMAKE_FIND_ROOT_PATH is ignored
# ov_set_if_not_defined(CMAKE_FIND_USE_CMAKE_SYSTEM_PATH OFF)
if(LINUX)
# set root paths (overridden to /usr/lib/<CMAKE_LIBRARY_ARCHITECTURE>/cmake)
# CMAKE_LIBRARY_ARCHITECTURE is defined automatically by cmake after trying the compilers
# ov_set_if_not_defined(CMAKE_FIND_ROOT_PATH "/usr")
endif()
# controlling CMAKE_FIND_ROOT_PATH usage
ov_set_if_not_defined(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
ov_set_if_not_defined(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
ov_set_if_not_defined(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
ov_set_if_not_defined(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
endif()
macro(__ov_cmake_find_system_path_save_and_reset)
foreach(v
CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH
CMAKE_FIND_USE_SYSTEM_PACKAGE_REGISTRY
CMAKE_FIND_USE_CMAKE_SYSTEM_PATH
)
if(DEFINED ${v})
set(__ov_save_${v} ${${v}})
else()
set(__ov_save_${v} ON)
endif()
set(${v} ON)
endforeach()
endmacro()
macro(__ov_cmake_find_system_path_restore)
foreach(v
CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH
CMAKE_FIND_USE_SYSTEM_PACKAGE_REGISTRY
CMAKE_FIND_USE_CMAKE_SYSTEM_PATH
)
set(${v} ${__ov_save_${v}})
unset(__ov_save_${v})
endforeach()
endmacro()
macro(__ov_cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__ov_save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__ov_cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__ov_save_${v}})
unset(__ov_save_${v})
endforeach()
endmacro()
macro(__ov_cmake_target_flags_save_and_reset)
foreach(v WIN32 UNIX LINUX APPLE ANDROID BSD)
set(__ov_target_save_${v} ${${v}})
unset(${v})
endforeach()
if(CMAKE_HOST_WIN32)
set(WIN32 1)
elseif(CMAKE_HOST_APPLE)
set(APPLE 1)
set(UNIX 1)
elseif(CMAKE_HOST_LINUX)
set(LINUX 1)
set(UNIX 1)
elseif(CMAKE_HOST_UNIX)
set(UNIX 1)
elseif(CMAKE_HOST_BSD)
set(BSD 1)
endif()
endmacro()
macro(__ov_cmake_target_flags_restore)
foreach(v WIN32 UNIX LINUX APPLE ANDROID BSD)
set(${v} ${__ov_target_save_${v}})
unset(__ov_target_save_${v})
endforeach()
endmacro()
if(CMAKE_CROSSCOMPILING)
# macros to find packages, programs and libraries on the host OS
if(NOT COMMAND find_host_package)
macro(find_host_package)
__ov_cmake_find_root_save_and_reset()
__ov_cmake_target_flags_save_and_reset()
__ov_cmake_find_system_path_save_and_reset()
find_package(${ARGN})
__ov_cmake_find_system_path_restore()
__ov_cmake_target_flags_restore()
__ov_cmake_find_root_restore()
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
__ov_cmake_find_root_save_and_reset()
__ov_cmake_target_flags_save_and_reset()
__ov_cmake_find_system_path_save_and_reset()
find_program(${ARGN})
__ov_cmake_find_system_path_restore()
__ov_cmake_target_flags_restore()
__ov_cmake_find_root_restore()
endmacro()
endif()
if(NOT COMMAND find_host_library)
macro(find_host_library)
__ov_cmake_find_root_save_and_reset()
__ov_cmake_target_flags_save_and_reset()
__ov_cmake_find_system_path_save_and_reset()
find_library(${ARGN})
__ov_cmake_find_system_path_restore()
__ov_cmake_target_flags_restore()
__ov_cmake_find_root_restore()
endmacro()
endif()
else()
if(NOT COMMAND find_host_package)
macro(find_host_package)
find_package(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
find_program(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_library)
macro(find_host_library)
find_library(${ARGN})
endmacro()
endif()
endif()
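A hypothetical caller of the `find_host_*` macros defined above — the tool name and variable below are illustrative, not taken from the build tree:

```cmake
# Locate a build-time code generator on the host even when
# cross-compiling for a different target system.
find_host_program(OV_FLATC_EXECUTABLE NAMES flatc)
if(NOT OV_FLATC_EXECUTABLE)
    message(FATAL_ERROR "flatc was not found on the host system")
endif()
```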


@@ -112,6 +112,17 @@ macro(ov_add_frontend)
endif()
file(GLOB_RECURSE LIBRARY_SRC ${frontend_root_dir}/src/*.cpp)
if (WIN32)
# Remove Linux-specific files
file(GLOB_RECURSE LIN_FILES ${frontend_root_dir}/src/os/lin/*.cpp
${frontend_root_dir}/src/os/lin/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${LIN_FILES}")
else()
# Remove Windows-specific files
file(GLOB_RECURSE WIN_FILES ${frontend_root_dir}/src/os/win/*.cpp
${frontend_root_dir}/src/os/win/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${WIN_FILES}")
endif()
file(GLOB_RECURSE LIBRARY_HEADERS ${frontend_root_dir}/src/*.hpp)
file(GLOB_RECURSE LIBRARY_PUBLIC_HEADERS ${frontend_root_dir}/include/*.hpp)
@@ -261,10 +272,6 @@ macro(ov_add_frontend)
# must be called after all target_link_libraries
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
# since frontends are user-facing components which can be linked against,
# we need to mark them as CXX ABI free
ov_abi_free_target(${TARGET_NAME})
# installation
if(NOT OV_FRONTEND_SKIP_INSTALL)


@@ -11,32 +11,29 @@ set(ncc_style_bin_dir "${CMAKE_CURRENT_BINARY_DIR}/ncc_naming_style")
# find python3
if(ENABLE_NCC_STYLE)
find_host_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
find_host_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
if(ENABLE_NCC_STYLE)
if(PYTHON_VERSION_MINOR EQUAL 6)
set(clang_version 10)
elseif(PYTHON_VERSION_MINOR EQUAL 7)
set(clang_version 11)
elseif(PYTHON_VERSION_MINOR EQUAL 8)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
elseif(PYTHON_VERSION_MINOR EQUAL 11)
set(clang_version 14)
else()
message(WARNING "Cannot suggest clang package for python ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}")
endif()
if(PYTHON_VERSION_MINOR EQUAL 6)
set(clang_version 10)
elseif(PYTHON_VERSION_MINOR EQUAL 7)
set(clang_version 11)
elseif(PYTHON_VERSION_MINOR EQUAL 8)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
elseif(PYTHON_VERSION_MINOR EQUAL 11)
set(clang_version 14)
else()
message(WARNING "Cannot suggest clang package for python ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}")
endif()
if(ENABLE_NCC_STYLE)
# try to find_package(Clang QUIET)
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
@@ -61,22 +58,16 @@ endif()
# Since we were able to find_package(Clang) in a separate process
# let's try to find in current process
if(ENABLE_NCC_STYLE)
if(CMAKE_HOST_WIN32)
find_host_program(libclang_location NAMES libclang.dll
PATHS $ENV{PATH}
NO_CMAKE_FIND_ROOT_PATH)
elseif(CMAKE_HOST_APPLE)
set(_old_CMAKE_FIND_LIBRARY_PREFIXES ${CMAKE_FIND_LIBRARY_PREFIXES})
set(_old_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
set(CMAKE_FIND_LIBRARY_PREFIXES "lib")
set(CMAKE_FIND_LIBRARY_SUFFIXES ".dylib")
if(WIN32)
set(CLANG_LIB_NAME libclang.dll)
find_host_program(CLANG NAMES ${CLANG_LIB_NAME} PATHS ENV PATH)
if(CLANG)
set(libclang_location ${CLANG})
endif()
elseif(APPLE)
find_host_library(libclang_location NAMES clang
PATHS /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib
DOC "Path to clang library"
NO_DEFAULT_PATH
NO_CMAKE_FIND_ROOT_PATH)
set(CMAKE_FIND_LIBRARY_PREFIXES ${_old_CMAKE_FIND_LIBRARY_PREFIXES})
set(CMAKE_FIND_LIBRARY_SUFFIXES ${_old_CMAKE_FIND_LIBRARY_SUFFIXES})
DOC "Path to clang library")
else()
find_host_package(Clang QUIET)
endif()


@@ -21,23 +21,15 @@ macro(ov_common_libraries_cpack_set_dirs)
endif()
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_CPACK_ARCHIVEDIR ${CMAKE_INSTALL_LIBDIR})
if(CPACK_GENERATOR MATCHES "^(CONAN|VCPKG)$")
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_DATADIR}/openvino)
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_DATADIR}/openvino)
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_DATADIR}/openvino)
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR})
else()
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR}/openvino-${OpenVINO_VERSION})
endif()
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR}/openvino-${OpenVINO_VERSION})
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR licenses)
ov_get_pyversion(pyversion)
if(pyversion)
# should not be used in production; only by setup.py install
set(OV_CPACK_PYTHONDIR lib/${pyversion}/site-packages)
set(OV_CPACK_PYTHONDIR ${CMAKE_INSTALL_LIBDIR}/${pyversion}/site-packages)
endif()
# non-native stuff
@@ -67,48 +59,17 @@ macro(ov_override_component_names)
# merge C++ and C runtimes
set(OV_CPACK_COMP_CORE_C "${OV_CPACK_COMP_CORE}")
set(OV_CPACK_COMP_CORE_C_DEV "${OV_CPACK_COMP_CORE_DEV}")
# merge all pythons into a single component
set(OV_CPACK_COMP_PYTHON_OPENVINO "pyopenvino")
set(OV_CPACK_COMP_PYTHON_IE_API "${OV_CPACK_COMP_PYTHON_OPENVINO}")
set(OV_CPACK_COMP_PYTHON_NGRAPH "${OV_CPACK_COMP_PYTHON_OPENVINO}")
# merge all C / C++ samples as a single samples component
set(OV_CPACK_COMP_CPP_SAMPLES "samples")
set(OV_CPACK_COMP_C_SAMPLES "${OV_CPACK_COMP_CPP_SAMPLES}")
# move requirements.txt to core-dev
set(OV_CPACK_COMP_DEV_REQ_FILES "${OV_CPACK_COMP_CORE_DEV}")
# move core_tools to core-dev
# set(OV_CPACK_COMP_CORE_TOOLS "${OV_CPACK_COMP_CORE_DEV}")
endmacro()
ov_override_component_names()
#
# Override include / exclude rules for components
# This is required to exclude some files from installation
# (e.g. debian packages don't require setupvars scripts)
#
macro(ov_define_component_include_rules)
# core components
unset(OV_CPACK_COMP_CORE_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_EXCLUDE_ALL})
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL})
# licensing
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# samples
set(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})
set(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# python
set(OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_BENCHMARK_APP_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
set(OV_CPACK_COMP_OVC_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
# we don't pack artifacts of setup.py install, because it's called explicitly in conda / brew
# or not used at all like in cases with conan / vcpkg
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
# we don't need wheels in the package; they are used only in the open source distribution
set(OV_CPACK_COMP_PYTHON_WHEELS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# tools
set(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_DEPLOYMENT_MANAGER_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# scripts
set(OV_CPACK_COMP_INSTALL_DEPENDENCIES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_SETUPVARS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endmacro()
ov_define_component_include_rules()
if(CPACK_GENERATOR STREQUAL "BREW")
# brew relies on RPATH
set(CMAKE_SKIP_INSTALL_RPATH OFF)
endif()
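The `*_EXCLUDE_ALL` variables set above presumably expand to either nothing or the `EXCLUDE_FROM_ALL` keyword inside `install()` calls; a sketch of such a consumer (the paths and component names are illustrative assumptions):

```cmake
# Hypothetical install rule: when OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL
# holds EXCLUDE_FROM_ALL, the samples component is registered but not
# included in the default package.
install(DIRECTORY "${OpenVINO_SOURCE_DIR}/samples/cpp/"
        DESTINATION ${OV_CPACK_SAMPLESDIR}/cpp
        COMPONENT ${OV_CPACK_COMP_CPP_SAMPLES}
        ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})
```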


@@ -61,58 +61,21 @@ macro(ov_override_component_names)
# merge C++ and C runtimes
set(OV_CPACK_COMP_CORE_C "${OV_CPACK_COMP_CORE}")
set(OV_CPACK_COMP_CORE_C_DEV "${OV_CPACK_COMP_CORE_DEV}")
# merge all pythons into a single component
set(OV_CPACK_COMP_PYTHON_OPENVINO "pyopenvino")
set(OV_CPACK_COMP_PYTHON_IE_API "${OV_CPACK_COMP_PYTHON_OPENVINO}")
set(OV_CPACK_COMP_PYTHON_NGRAPH "${OV_CPACK_COMP_PYTHON_OPENVINO}")
# merge all C / C++ samples as a single samples component
set(OV_CPACK_COMP_CPP_SAMPLES "samples")
set(OV_CPACK_COMP_C_SAMPLES "${OV_CPACK_COMP_CPP_SAMPLES}")
# move requirements.txt to core-dev
set(OV_CPACK_COMP_DEV_REQ_FILES "${OV_CPACK_COMP_CORE_DEV}")
# move core_tools to core-dev
# set(OV_CPACK_COMP_CORE_TOOLS "${OV_CPACK_COMP_CORE_DEV}")
endmacro()
ov_override_component_names()
#
# Override include / exclude rules for components
# This is required to exclude some files from installation
# (e.g. debian packages don't require setupvars scripts)
#
macro(ov_define_component_include_rules)
# core components
unset(OV_CPACK_COMP_CORE_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_EXCLUDE_ALL})
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL})
# licensing
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# samples
unset(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL)
set(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})
if(ENABLE_PYTHON_PACKAGING)
unset(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# python
if(ENABLE_PYTHON_PACKAGING)
# pack artifacts of setup.py install
unset(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# we don't pack the python components themselves; we pack the artifacts of setup.py install
set(OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_BENCHMARK_APP_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
set(OV_CPACK_COMP_OVC_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
# we don't need wheels in Debian packages
set(OV_CPACK_COMP_PYTHON_WHEELS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# tools
set(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_DEPLOYMENT_MANAGER_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# scripts
set(OV_CPACK_COMP_INSTALL_DEPENDENCIES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_SETUPVARS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endmacro()
ov_define_component_include_rules()
#
# Common Debian specific settings
#
@@ -170,17 +133,14 @@ macro(ov_debian_specific_settings)
set(CPACK_DEBIAN_PACKAGE_ARCHITECTURE i386)
endif()
endif()
# we don't need RPATHs, because libraries are searched for in standard paths
set(CMAKE_SKIP_INSTALL_RPATH ON)
endmacro()
ov_debian_specific_settings()
# needed to override cmake auto-generated files
set(def_postinst "${CMAKE_CURRENT_BINARY_DIR}/_CPack_Packages/postinst")
set(def_postrm "${CMAKE_CURRENT_BINARY_DIR}/_CPack_Packages/postrm")
set(def_triggers "${CMAKE_CURRENT_BINARY_DIR}/_CPack_Packages/triggers")
set(def_postinst "${OpenVINO_BINARY_DIR}/_CPack_Packages/postinst")
set(def_postrm "${OpenVINO_BINARY_DIR}/_CPack_Packages/postrm")
set(def_triggers "${OpenVINO_BINARY_DIR}/_CPack_Packages/triggers")
set(triggers_content "activate-noawait ldconfig\n\n")
set(post_content "#!/bin/sh\n\nset -e;\nset -e\n\n")
@@ -316,7 +276,7 @@ macro(ov_debian_add_latest_component comp)
set(upper_case "${ucomp}_LATEST")
set(CPACK_COMPONENT_${upper_case}_DESCRIPTION "${CPACK_COMPONENT_${ucomp}_DESCRIPTION}")
set(CPACK_DEBIAN_${upper_case}_PACKAGE_ARCHITECTURE "all")
set(CPACK_COMPONENT_${upper_case}_ARCHITECTURE "all")
set(CPACK_COMPONENT_${upper_case}_DEPENDS "${comp}")
set(${comp_name}_copyright "generic")


@@ -2,77 +2,40 @@
# SPDX-License-Identifier: Apache-2.0
#
macro(ov_nsis_specific_settings)
# installation directory
set(CPACK_PACKAGE_INSTALL_DIRECTORY "Intel")
# installation directory
set(CPACK_PACKAGE_INSTALL_DIRECTORY "Intel")
# TODO: provide icons
# set(CPACK_NSIS_MUI_ICON "")
# set(CPACK_NSIS_MUI_UNIICON "${CPACK_NSIS_MUI_ICON}")
# set(CPACK_NSIS_MUI_WELCOMEFINISHPAGE_BITMAP "")
# set(CPACK_NSIS_MUI_UNWELCOMEFINISHPAGE_BITMAP "")
# set(CPACK_NSIS_MUI_HEADERIMAGE "")
# TODO: provide icons
# set(CPACK_NSIS_MUI_ICON "")
# set(CPACK_NSIS_MUI_UNIICON "${CPACK_NSIS_MUI_ICON}")
# set(CPACK_NSIS_MUI_WELCOMEFINISHPAGE_BITMAP "")
# set(CPACK_NSIS_MUI_UNWELCOMEFINISHPAGE_BITMAP "")
# set(CPACK_NSIS_MUI_HEADERIMAGE "")
# we allow installing several packages at once
set(CPACK_NSIS_ENABLE_UNINSTALL_BEFORE_INSTALL OFF)
set(CPACK_NSIS_MODIFY_PATH OFF)
# we allow installing several packages at once
set(CPACK_NSIS_ENABLE_UNINSTALL_BEFORE_INSTALL OFF)
set(CPACK_NSIS_MODIFY_PATH OFF)
set(CPACK_NSIS_DISPLAY_NAME "Intel(R) OpenVINO(TM) ${OpenVINO_VERSION}")
set(CPACK_NSIS_PACKAGE_NAME "Intel(R) OpenVINO(TM) ToolKit, v. ${OpenVINO_VERSION}.${OpenVINO_PATCH_VERSION}")
set(CPACK_NSIS_DISPLAY_NAME "Intel(R) OpenVINO(TM) ${OpenVINO_VERSION}")
set(CPACK_NSIS_PACKAGE_NAME "Intel(R) OpenVINO(TM) ToolKit, v. ${OpenVINO_VERSION}.${OpenVINO_PATCH_VERSION}")
# contact
set(CPACK_NSIS_CONTACT "CPACK_NSIS_CONTACT")
# contact
set(CPACK_NSIS_CONTACT "CPACK_NSIS_CONTACT")
# links in menu
set(CPACK_NSIS_MENU_LINKS "https://docs.openvino.ai" "OpenVINO Documentation")
# links in menu
set(CPACK_NSIS_MENU_LINKS
"https://docs.openvino.ai" "OpenVINO Documentation")
# welcome and finish titles
set(CPACK_NSIS_WELCOME_TITLE "Welcome to Intel(R) Distribution of OpenVINO(TM) Toolkit installation")
set(CPACK_NSIS_FINISH_TITLE "")
# welcome and finish titles
set(CPACK_NSIS_WELCOME_TITLE "Welcome to Intel(R) Distribution of OpenVINO(TM) Toolkit installation")
set(CPACK_NSIS_FINISH_TITLE "")
# autoresize?
set(CPACK_NSIS_MANIFEST_DPI_AWARE ON)
# autoresize?
set(CPACK_NSIS_MANIFEST_DPI_AWARE ON)
# branding text
set(CPACK_NSIS_BRANDING_TEXT "Intel(R) Corp.")
set(CPACK_NSIS_BRANDING_TEXT_TRIM_POSITION RIGHT)
# branding text
set(CPACK_NSIS_BRANDING_TEXT "Intel(R) Corp.")
set(CPACK_NSIS_BRANDING_TEXT_TRIM_POSITION RIGHT)
# don't set this variable since we need a user to agree to the license
# set(CPACK_NSIS_IGNORE_LICENSE_PAGE OFF)
endmacro()
ov_nsis_specific_settings()
#
# Override include / exclude rules for components
# This is required to exclude some files from installation
# (e.g. NSIS packages don't require wheels to be packaged)
#
macro(ov_define_component_include_rules)
# core components
unset(OV_CPACK_COMP_CORE_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_C_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL)
# licensing
unset(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL)
# samples
unset(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL)
# python
unset(OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL)
set(OV_CPACK_COMP_BENCHMARK_APP_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
set(OV_CPACK_COMP_OVC_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
set(OV_CPACK_COMP_PYTHON_WHEELS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# tools
unset(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_DEPLOYMENT_MANAGER_EXCLUDE_ALL)
# scripts
unset(OV_CPACK_COMP_INSTALL_DEPENDENCIES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_SETUPVARS_EXCLUDE_ALL)
endmacro()
ov_define_component_include_rules()
# don't set this variable since we need a user to agree to the license
# set(CPACK_NSIS_IGNORE_LICENSE_PAGE OFF)


@@ -4,31 +4,8 @@
include(CPackComponent)
# we don't need RPATHs, because setupvars.sh is used
set(CMAKE_SKIP_INSTALL_RPATH ON)
#
# ov_install_static_lib(<target> <comp>)
#
macro(ov_install_static_lib target comp)
if(NOT BUILD_SHARED_LIBS)
get_target_property(target_type ${target} TYPE)
if(target_type STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
# save all internal installed targets to filter them later in 'ov_generate_dev_package_config'
list(APPEND openvino_installed_targets ${target})
set(openvino_installed_targets "${openvino_installed_targets}" CACHE INTERNAL
"A list of OpenVINO internal targets" FORCE)
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})
endif()
endmacro()
#
# ov_get_pyversion(<OUT pyversion>)
# ov_get_pyversion()
#
function(ov_get_pyversion pyversion)
find_package(PythonInterp 3 QUIET)
@@ -52,12 +29,16 @@ macro(ov_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR runtime/cmake)
set(OV_CPACK_OPENVINO_CMAKEDIR runtime/cmake)
set(OV_CPACK_DOCDIR docs)
set(OV_CPACK_LICENSESDIR licenses)
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_SAMPLESDIR samples)
set(OV_CPACK_WHEELSDIR tools)
set(OV_CPACK_TOOLSDIR tools)
set(OV_CPACK_DEVREQDIR tools)
set(OV_CPACK_PYTHONDIR python)
ov_get_pyversion(pyversion)
if(pyversion)
set(OV_CPACK_PYTHONDIR python/${pyversion})
endif()
if(WIN32)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
@@ -154,13 +135,13 @@ macro(ov_define_component_names)
set(OV_CPACK_COMP_C_SAMPLES "c_samples")
set(OV_CPACK_COMP_PYTHON_SAMPLES "python_samples")
# python
set(OV_CPACK_COMP_PYTHON_IE_API "pyie")
set(OV_CPACK_COMP_PYTHON_NGRAPH "pyngraph")
set(OV_CPACK_COMP_PYTHON_OPENVINO "pyopenvino")
set(OV_CPACK_COMP_BENCHMARK_APP "benchmark_app")
set(OV_CPACK_COMP_OVC "ovc")
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE "pyopenvino_package")
set(OV_CPACK_COMP_PYTHON_WHEELS "python_wheels")
# tools
set(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES "openvino_dev_req_files")
set(OV_CPACK_COMP_CORE_TOOLS "core_tools")
set(OV_CPACK_COMP_DEV_REQ_FILES "openvino_dev_req_files")
set(OV_CPACK_COMP_DEPLOYMENT_MANAGER "deployment_manager")
# scripts
set(OV_CPACK_COMP_INSTALL_DEPENDENCIES "install_dependencies")
@@ -169,66 +150,20 @@ endmacro()
ov_define_component_names()
# default components for the case when CPACK_GENERATOR is not set (i.e. the default open source user)
macro(ov_define_component_include_rules)
# core components
unset(OV_CPACK_COMP_CORE_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_C_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
unset(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL)
# licensing
unset(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL)
# samples
unset(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL)
# python
unset(OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL)
unset(OV_CPACK_COMP_BENCHMARK_APP_EXCLUDE_ALL)
unset(OV_CPACK_COMP_OVC_EXCLUDE_ALL)
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL EXCLUDE_FROM_ALL)
unset(OV_CPACK_COMP_PYTHON_WHEELS_EXCLUDE_ALL)
# tools
set(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
unset(OV_CPACK_COMP_DEPLOYMENT_MANAGER_EXCLUDE_ALL)
# scripts
unset(OV_CPACK_COMP_INSTALL_DEPENDENCIES_EXCLUDE_ALL)
unset(OV_CPACK_COMP_SETUPVARS_EXCLUDE_ALL)
endmacro()
ov_define_component_include_rules()
#
# Include generator specific configuration file:
# 1. Overrides directories set by ov_<debian | rpm | common_libraries>_cpack_set_dirs()
# This is required, because different generators use different locations for installed files
# 2. Merges some components using ov_override_component_names()
# This is required, because different generators have different set of components
# (e.g. C and C++ API are separate components)
# 3. Exclude some components using ov_define_component_include_rules()
# This step excludes some files from installation by defining variables that expand to EXCLUDE_FROM_ALL
# 4. Sets ov_<debian | rpm | ...>_specific_settings() with DEB generator variables
# This 'callback' is later called from ov_cpack (wrapper for standard cpack) to set
# per-component settings (e.g. package names, dependencies, versions and system dependencies)
# 5. (Optional) Defines the following helper functions, which can be used by 3rdparty modules:
# Debian:
# - ov_debian_add_changelog_and_copyright()
# - ov_debian_add_lintian_suppression()
# - ov_debian_generate_conflicts()
# - ov_debian_add_latest_component()
# RPM:
# - ov_rpm_add_rpmlint_suppression()
# - ov_rpm_generate_conflicts()
# - ov_rpm_copyright()
# - ov_rpm_add_latest_component()
#
# Include Debian specific configuration file:
# - overrides directories set by ov_debian_cpack_set_dirs()
# - merges some components using ov_override_component_names()
# - sets ov_debian_specific_settings() with DEB generator variables
# - defines the following helper functions:
# - ov_add_lintian_suppression()
# - ov_add_latest_component()
if(CPACK_GENERATOR STREQUAL "DEB")
include(packaging/debian/debian)
elseif(CPACK_GENERATOR STREQUAL "RPM")
include(packaging/rpm/rpm)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(packaging/nsis)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN|VCPKG)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(packaging/common-libraries)
endif()


@@ -24,6 +24,11 @@ macro(ov_rpm_cpack_set_dirs)
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
# TODO:
# 1. define python installation directories for RPM packages
# 2. make sure only a single version of python API can be installed at the same time (define conflicts section)
# set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
ov_get_pyversion(pyversion)
if(pyversion)
set(OV_CPACK_PYTHONDIR ${CMAKE_INSTALL_LIBDIR}/${pyversion}/site-packages)
@@ -56,58 +61,22 @@ macro(ov_override_component_names)
# merge C++ and C runtimes
set(OV_CPACK_COMP_CORE_C "${OV_CPACK_COMP_CORE}")
set(OV_CPACK_COMP_CORE_C_DEV "${OV_CPACK_COMP_CORE_DEV}")
# merge all pythons into a single component
set(OV_CPACK_COMP_PYTHON_OPENVINO "pyopenvino")
set(OV_CPACK_COMP_PYTHON_IE_API "${OV_CPACK_COMP_PYTHON_OPENVINO}")
set(OV_CPACK_COMP_PYTHON_NGRAPH "${OV_CPACK_COMP_PYTHON_OPENVINO}")
# merge all C / C++ samples as a single samples component
set(OV_CPACK_COMP_CPP_SAMPLES "samples")
set(OV_CPACK_COMP_C_SAMPLES "${OV_CPACK_COMP_CPP_SAMPLES}")
# set(OV_CPACK_COMP_PYTHON_SAMPLES "${OV_CPACK_COMP_CPP_SAMPLES}")
# move requirements.txt to core-dev
set(OV_CPACK_COMP_DEV_REQ_FILES "${OV_CPACK_COMP_CORE_DEV}")
# move core_tools to core-dev
# set(OV_CPACK_COMP_CORE_TOOLS "${OV_CPACK_COMP_CORE_DEV}")
endmacro()
ov_override_component_names()
#
# Override include / exclude rules for components
# This is required to exclude some files from installation
# (e.g. rpm packages don't require setupvars scripts or deployment_manager)
#
macro(ov_define_component_include_rules)
# core components
unset(OV_CPACK_COMP_CORE_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_EXCLUDE_ALL})
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL})
# licensing
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# samples
unset(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL)
set(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})
if(ENABLE_PYTHON_PACKAGING)
unset(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_PYTHON_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# python
if(ENABLE_PYTHON_PACKAGING)
# pack artifacts of setup.py install
unset(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# we don't pack the python components themselves; we pack the artifacts of setup.py install
set(OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_BENCHMARK_APP_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
set(OV_CPACK_COMP_OVC_EXCLUDE_ALL ${OV_CPACK_COMP_PYTHON_OPENVINO_EXCLUDE_ALL})
# we don't need wheels in RPM packages
set(OV_CPACK_COMP_PYTHON_WHEELS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# tools
set(OV_CPACK_COMP_OPENVINO_DEV_REQ_FILES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_DEPLOYMENT_MANAGER_EXCLUDE_ALL EXCLUDE_FROM_ALL)
# scripts
set(OV_CPACK_COMP_INSTALL_DEPENDENCIES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_SETUPVARS_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endmacro()
ov_define_component_include_rules()
#
# Common RPM specific settings
#
@@ -129,7 +98,9 @@ macro(ov_rpm_specific_settings)
# use rpmlint to check packages in post-build step
set(CPACK_POST_BUILD_SCRIPTS "${IEDevScripts_DIR}/packaging/rpm/post_build.cmake")
# enable for debug cpack run
ov_set_if_not_defined(CPACK_RPM_PACKAGE_DEBUG OFF)
if(NOT DEFINED CPACK_RPM_PACKAGE_DEBUG)
set(CPACK_RPM_PACKAGE_DEBUG OFF)
endif()
# naming convention for rpm package files
set(CPACK_RPM_FILE_NAME "RPM-DEFAULT")
@@ -149,9 +120,6 @@ macro(ov_rpm_specific_settings)
set(CPACK_DEBIAN_PACKAGE_ARCHITECTURE i386)
endif()
endif()
# we don't need RPATHs, because libraries are searched for in standard paths
set(CMAKE_SKIP_INSTALL_RPATH ON)
endmacro()
ov_rpm_specific_settings()


@@ -103,15 +103,11 @@ function(ov_add_plugin)
endforeach()
if (OV_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_SOURCES ${OV_PLUGIN_SOURCES})
add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_TARGETS ${OV_PLUGIN_NAME})
else()
add_cpplint_target(${OV_PLUGIN_NAME}_cpplint FOR_TARGETS ${OV_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
# plugins do not have to be CXX ABI free, because nobody links against plugins,
# but let's add this mark to see how it goes
ov_abi_free_target(${OV_PLUGIN_NAME})
add_dependencies(ov_plugins ${OV_PLUGIN_NAME})
# install rules
@@ -135,7 +131,7 @@ function(ov_add_plugin)
LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
COMPONENT ${install_component})
else()
ov_install_static_lib(${OV_PLUGIN_NAME} ${OV_CPACK_COMP_CORE})
ov_install_static_lib(${OV_PLUGIN_NAME} ${install_component})
endif()
endif()
endif()


@@ -17,56 +17,59 @@ if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(OV_HOST_ARCH X86_64)
set(host_arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(OV_HOST_ARCH X86)
set(host_arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(OV_HOST_ARCH AARCH64)
set(host_arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(OV_HOST_ARCH ARM)
set(host_arch_flag ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(OV_HOST_ARCH RISCV64)
set(host_arch_flag RISCV64)
endif()
set(HOST_${host_arch_flag} ON)
macro(_ov_detect_arch_by_processor_type)
if(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(OV_ARCH AARCH64)
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(OV_ARCH X86_64)
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(OV_ARCH UNIVERSAL2)
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(OV_ARCH X86_64)
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
set(OV_ARCH X86)
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
set(OV_ARCH AARCH64)
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(OV_ARCH ARM)
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(OV_ARCH RISCV64)
set(RISCV64 ON)
endif()
endmacro()
macro(_ov_process_msvc_generator_platform)
# if cmake -A <ARM|ARM64|x64|Win32> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(OV_ARCH AARCH64)
set(AARCH64 ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
set(OV_ARCH ARM)
set(ARM ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "x64")
set(OV_ARCH X86_64)
set(X86_64 ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(OV_ARCH X86)
set(X86 ON)
else()
_ov_detect_arch_by_processor_type()
endif()
endmacro()
# TODO: why OpenCV is found by cmake
if(MSVC64 OR MINGW64)
_ov_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
@@ -75,9 +78,6 @@ else()
_ov_detect_arch_by_processor_type()
endif()
set(HOST_${OV_HOST_ARCH} ON)
set(${OV_ARCH} ON)
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
set(EMSCRIPTEN ON)
endif()
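Downstream option logic can key on the boolean flags produced by the architecture detection above; for example (illustrative usage, not part of the diff):

```cmake
# The detection above sets e.g. X86_64, AARCH64 or RISCV64 to ON,
# so feature options can be gated per architecture.
if(AARCH64)
    add_compile_definitions(OV_ARCH_AARCH64)
elseif(X86_64)
    add_compile_definitions(OV_ARCH_X86_64)
endif()
```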
@@ -115,7 +115,7 @@ get_property(OV_GENERATOR_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG
function(ov_glibc_version)
# cmake needs to look at the glibc version only when we build for Linux on Linux
if(LINUX)
if(CMAKE_HOST_LINUX AND LINUX)
function(ov_get_definition definition var)
execute_process(COMMAND echo "#include <errno.h>"
COMMAND "${CMAKE_CXX_COMPILER}" -xc - -E -dM


@@ -15,7 +15,7 @@ function(ie_generate_dev_package_config)
add_custom_target(ie_dev_targets DEPENDS ${all_dev_targets})
set(PATH_VARS "OpenVINO_SOURCE_DIR")
if(ENABLE_SAMPLES OR ENABLE_TESTS)
if(ENABLE_SAMPLES OR ENABLE_COMPILE_TOOL OR ENABLE_TESTS)
list(APPEND PATH_VARS "gflags_BINARY_DIR")
# if we've found system gflags
if(gflags_DIR)
@@ -39,22 +39,20 @@ function(ov_generate_dev_package_config)
find_package(OpenCV QUIET)
foreach(component IN LISTS openvino_export_components)
# filter out targets which are installed by OpenVINOConfig.cmake in the static build case
set(exported_targets)
foreach(target IN LISTS ${component})
if(NOT target IN_LIST openvino_installed_targets)
list(APPEND exported_targets ${target})
endif()
endforeach()
# export all developer targets with prefix and use them during extra modules build
export(TARGETS ${exported_targets} NAMESPACE openvino::
# TODO: remove legacy targets from tests
# string(FIND "${component}" "_legacy" index)
# if(index EQUAL -1)
# export all targets with prefix and use them during extra modules build
export(TARGETS ${${component}} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
list(APPEND all_dev_targets ${${component}})
# endif()
endforeach()
add_custom_target(ov_dev_targets DEPENDS ${all_dev_targets})
set(PATH_VARS "OpenVINO_SOURCE_DIR")
if(ENABLE_SAMPLES OR ENABLE_TESTS)
if(ENABLE_SAMPLES OR ENABLE_COMPILE_TOOL OR ENABLE_TESTS)
list(APPEND PATH_VARS "gflags_BINARY_DIR")
# if we've found system gflags
if(gflags_DIR)
@@ -78,6 +76,10 @@ endfunction()
#
function(register_extra_modules)
# post export
openvino_developer_export_targets(COMPONENT core_legacy TARGETS inference_engine)
openvino_developer_export_targets(COMPONENT core_legacy TARGETS ngraph)
set(InferenceEngineDeveloperPackage_DIR "${CMAKE_CURRENT_BINARY_DIR}/build-modules")
set(OpenVINODeveloperPackage_DIR "${CMAKE_BINARY_DIR}/build-modules")
set(OpenVINO_DIR "${CMAKE_BINARY_DIR}")
@@ -92,12 +94,12 @@ function(register_extra_modules)
file(WRITE "${devconfig_file}" "\# !! AUTOGENERATED: DON'T EDIT !!\n\n")
foreach(targets_list IN LISTS ${openvino_export_components})
foreach(target IN LISTS targets_list)
foreach(target IN LISTS ${openvino_export_components})
if(target)
file(APPEND "${devconfig_file}" "if(NOT TARGET ${NS}::${target})
add_library(${NS}::${target} ALIAS ${target})
endif()\n")
endforeach()
endif()
endforeach()
endfunction()
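The `export(TARGETS ... NAMESPACE ... APPEND FILE ...)` calls above write a build-tree import script per component; a minimal standalone sketch of the same mechanism (target and file names are hypothetical):

```cmake
# Hypothetical: export a build-tree target so extra modules can link
# against openvino::my_plugin without installing anything first.
add_library(my_plugin SHARED plugin.cpp)
export(TARGETS my_plugin NAMESPACE openvino::
       APPEND FILE "${CMAKE_BINARY_DIR}/ov_my_component_dev_targets.cmake")
```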


@@ -5,7 +5,6 @@
#
# Common cmake options
#
ov_option (ENABLE_PROXY "Proxy plugin for OpenVINO Runtime" ON)
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64 OR AARCH64 OR ARM" OFF)
@@ -13,6 +12,8 @@ ie_dependent_option (ENABLE_ARM_COMPUTE_CMAKE "Enable ARM Compute build via cmak
ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)
ie_option (ENABLE_COMPILE_TOOL "Enables compile_tool" ON)
ie_option (ENABLE_STRICT_DEPENDENCIES "Skip configuring \"convenient\" dependencies for efficient parallel builds" ON)
if(X86_64)
@@ -23,9 +24,9 @@ endif()
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0) OR NOT BUILD_SHARED_LIBS)
# oneDNN doesn't support old compilers and android builds for now, so we'll build GPU plugin without oneDNN
# also, in case of static build CPU's and GPU's oneDNNs will conflict, so we are disabling GPU's one in this case
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
# oneDNN doesn't support old compilers and android builds for now, so we'll
# build GPU plugin without oneDNN
set(ENABLE_ONEDNN_FOR_GPU_DEFAULT OFF)
else()
set(ENABLE_ONEDNN_FOR_GPU_DEFAULT ON)
@@ -34,8 +35,8 @@ endif()
ie_dependent_option (ENABLE_ONEDNN_FOR_GPU "Enable oneDNN with GPU support" ${ENABLE_ONEDNN_FOR_GPU_DEFAULT} "ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library through the INTEL_VTUNE_DIR variable." OFF)
@@ -48,17 +49,13 @@ Supported values:\
ie_option (ENABLE_PROFILING_FIRST_INFERENCE "Build with ITT tracing of first inference time." ON)
ie_option_enum(SELECTIVE_BUILD "Enable OpenVINO conditional compilation or statistics collection. \
In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should contain the path to the collected IntelSEAPI statistics. \
In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should contain the path to the collected InelSEAPI statistics. \
Usage: -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=/path/*.csv" OFF
ALLOWED_VALUES ON OFF COLLECT)
ie_option (ENABLE_DOCS "Build docs using Doxygen" OFF)
if(NOT ANDROID)
# on Android build FindPkgConfig.cmake finds host system pkg-config, which is not appropriate
find_package(PkgConfig QUIET)
endif()
find_package(PkgConfig QUIET)
ie_dependent_option (ENABLE_PKGCONFIG_GEN "Enable openvino.pc pkg-config file generation" ON "LINUX OR APPLE;PkgConfig_FOUND;BUILD_SHARED_LIBS" OFF)
#
@@ -83,7 +80,7 @@ else()
set(ENABLE_TBBBIND_2_5_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB; NOT APPLE" OFF)
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
@@ -125,7 +122,7 @@ ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" OFF)
"ENABLE_OV_TF_FRONTEND" ON)
if(CMAKE_HOST_LINUX AND LINUX)
# Debian packages are enabled on Ubuntu systems
@@ -151,15 +148,6 @@ else()
set(ENABLE_SYSTEM_PUGIXML_DEFAULT OFF)
endif()
if(ANDROID)
# when flatbuffers from /usr/include is used, then the Android toolchain ignores include paths
# but if we build for Android using vcpkg / conan / etc where flatbuffers is not located in
# the /usr/include folders, we can still use 'system' flatbuffers
set(ENABLE_SYSTEM_FLATBUFFERS_DEFAULT OFF)
else()
set(ENABLE_SYSTEM_FLATBUFFERS_DEFAULT ON)
endif()
# user wants to use their own TBB version, specified either via env vars or cmake options
if(DEFINED ENV{TBBROOT} OR DEFINED ENV{TBB_DIR} OR DEFINED TBB_DIR OR DEFINED TBBROOT)
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
@@ -171,7 +159,7 @@ ie_dependent_option (ENABLE_SYSTEM_TBB "Enables use of system TBB" ${ENABLE_SYS
# available out of box on all systems (like RHEL, UBI)
ie_option (ENABLE_SYSTEM_PUGIXML "Enables use of system PugiXML" ${ENABLE_SYSTEM_PUGIXML_DEFAULT})
# the option is on by default, because we use only flatc compiler and don't use any libraries
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ${ENABLE_SYSTEM_FLATBUFFERS_DEFAULT}
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Enables use of system OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT}
"ENABLE_INTEL_GPU" OFF)
@@ -183,10 +171,6 @@ ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
"ENABLE_SNAPPY_COMPRESSION" OFF)
# temporary option until we enable this by default after reviewing the python API distribution
ie_dependent_option (ENABLE_PYTHON_PACKAGING "Enables packaging of Python API in APT / YUM" OFF
"ENABLE_PYTHON;UNIX" OFF)
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)
if(NOT BUILD_SHARED_LIBS AND ENABLE_OV_TF_FRONTEND)
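`ie_dependent_option` is OpenVINO's wrapper over CMake's `cmake_dependent_option`: the stated default applies only while every condition in the dependency string holds; otherwise the forced value is used. A minimal sketch with the stock CMake module (option name is hypothetical):

```cmake
include(CMakeDependentOption)
# ENABLE_EXAMPLE_CAPS defaults to ON, but is forced OFF (and hidden)
# whenever ENABLE_TESTS is OFF or the build targets Android.
cmake_dependent_option(ENABLE_EXAMPLE_CAPS "Demo dependent option" ON
                       "ENABLE_TESTS;NOT ANDROID" OFF)
```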


@@ -10,14 +10,28 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
string(TOUPPER ${item} UPPER_COMP)
# filter out some components, which do not need to be wrapped for conda-forge | brew | conan | vcpkg
if(NOT OV_CPACK_COMP_${UPPER_COMP}_EXCLUDE_ALL AND
# filter out some components, which do not need to be wrapped for conda-forge | brew | conan
if(# python is not a part of conda | brew | conan
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# python wheels are not needed to be wrapped by conda | brew packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
# skip C / C++ / Python samples
NOT item STREQUAL OV_CPACK_COMP_CPP_SAMPLES AND
NOT item STREQUAL OV_CPACK_COMP_C_SAMPLES AND
NOT item STREQUAL OV_CPACK_COMP_PYTHON_SAMPLES AND
# even in the case of system TBB we have installation rules for wheel packages,
# so we need to skip these explicitly since they are installed in the `host` section
NOT item MATCHES "^tbb(_dev)?$" AND
# the same for pugixml
NOT item STREQUAL "pugixml")
NOT item STREQUAL "pugixml" AND
# we have `license_file` field in conda meta.yml
NOT item STREQUAL OV_CPACK_COMP_LICENSING AND
# compile_tool is not needed
NOT item STREQUAL OV_CPACK_COMP_CORE_TOOLS AND
# not appropriate components
NOT item STREQUAL OV_CPACK_COMP_DEPLOYMENT_MANAGER AND
NOT item STREQUAL OV_CPACK_COMP_INSTALL_DEPENDENCIES AND
NOT item STREQUAL OV_CPACK_COMP_SETUPVARS)
list(APPEND CPACK_COMPONENTS_ALL ${item})
endif()
endforeach()


@@ -44,24 +44,29 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
string(TOUPPER ${item} UPPER_COMP)
# filter out some components, which do not need to be wrapped into a .deb package
if(NOT OV_CPACK_COMP_${UPPER_COMP}_EXCLUDE_ALL AND
# skip OpenVINO Python API (pattern in form of "<pyie | pyopenvino | pyngraph>_python${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR}")
if(# skip OpenVINO Python API and samples
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# because in case of .deb package, pyopenvino_package_python${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR} is installed
(NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE}_python.*" OR ENABLE_PYTHON_PACKAGING) AND
NOT item STREQUAL OV_CPACK_COMP_PYTHON_SAMPLES AND
# python wheels are not needed to be wrapped by debian packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
# see ticket # 82605
NOT item STREQUAL "gna" AND
# temporary block nvidia
NOT item STREQUAL "nvidia" AND
# don't install Intel OpenMP
# don't install Intel OpenMP during debian
NOT item STREQUAL "omp" AND
# even in the case of system TBB we have installation rules for wheel packages,
# so we need to skip these explicitly
NOT item MATCHES "^tbb(_dev)?$" AND
# the same for pugixml
NOT item STREQUAL "pugixml")
NOT item STREQUAL "pugixml" AND
# we have copyright file for debian package
NOT item STREQUAL OV_CPACK_COMP_LICENSING AND
# compile_tool is not needed
NOT item STREQUAL OV_CPACK_COMP_CORE_TOOLS AND
# not appropriate components
NOT item STREQUAL OV_CPACK_COMP_DEPLOYMENT_MANAGER AND
NOT item STREQUAL OV_CPACK_COMP_INSTALL_DEPENDENCIES AND
NOT item STREQUAL OV_CPACK_COMP_SETUPVARS)
list(APPEND CPACK_COMPONENTS_ALL ${item})
endif()
endforeach()
@@ -88,8 +93,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with debian packages from Intel install team
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where updated Debian packages are introduced; other 2022.3.X releases are LTS updates
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
#
@@ -113,7 +117,7 @@ macro(ov_cpack_settings)
# hetero
if(ENABLE_HETERO)
set(CPACK_COMPONENT_HETERO_DESCRIPTION "OpenVINO Hetero software plugin")
set(CPACK_COMPONENT_HETERO_DESCRIPTION "OpenVINO Hetero plugin")
set(CPACK_COMPONENT_HETERO_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_HETERO_PACKAGE_NAME "libopenvino-hetero-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_HETERO_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
@@ -123,7 +127,7 @@ macro(ov_cpack_settings)
# auto batch
if(ENABLE_AUTO_BATCH)
set(CPACK_COMPONENT_BATCH_DESCRIPTION "OpenVINO Automatic Batching software plugin")
set(CPACK_COMPONENT_BATCH_DESCRIPTION "OpenVINO Automatic Batching plugin")
set(CPACK_COMPONENT_BATCH_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_BATCH_PACKAGE_NAME "libopenvino-auto-batch-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_BATCH_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
@@ -134,9 +138,9 @@ macro(ov_cpack_settings)
# multi / auto plugins
if(ENABLE_MULTI)
if(ENABLE_AUTO)
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Auto / Multi software plugin")
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Auto / Multi plugin")
else()
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Multi software plugin")
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Multi plugin")
endif()
set(CPACK_COMPONENT_MULTI_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_MULTI_PACKAGE_NAME "libopenvino-auto-plugin-${cpack_name_ver}")
@@ -144,7 +148,7 @@ macro(ov_cpack_settings)
_ov_add_plugin(multi ON)
set(multi_copyright "generic")
elseif(ENABLE_AUTO)
set(CPACK_COMPONENT_AUTO_DESCRIPTION "OpenVINO Auto software plugin")
set(CPACK_COMPONENT_AUTO_DESCRIPTION "OpenVINO Auto plugin")
set(CPACK_COMPONENT_AUTO_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_AUTO_PACKAGE_NAME "libopenvino-auto-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_AUTO_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
@@ -156,11 +160,11 @@ macro(ov_cpack_settings)
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU inference plugin")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU inference plugin")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU plugin")
set(cpu_copyright "generic")
else()
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
@@ -172,19 +176,19 @@ macro(ov_cpack_settings)
# intel-gpu
if(ENABLE_INTEL_GPU)
set(CPACK_COMPONENT_GPU_DESCRIPTION "Intel® Processor Graphics inference plugin")
set(CPACK_COMPONENT_GPU_DESCRIPTION "Intel® Processor Graphics plugin")
set(CPACK_COMPONENT_GPU_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_GPU_PACKAGE_NAME "libopenvino-intel-gpu-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_GPU_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
# auto batch enhances GPU
# set(CPACK_DEBIAN_BATCH_PACKAGE_ENHANCES "${CPACK_DEBIAN_GPU_PACKAGE_NAME} (= ${cpack_full_ver})")
# set(CPACK_DEBIAN_BATCH_PACKAGE_ENHANCES "${CPACK_DEBIAN_GPU_PACKAGE_NAME} = (${cpack_full_ver})")
_ov_add_plugin(gpu OFF)
set(gpu_copyright "generic")
endif()
# intel-gna
if(ENABLE_INTEL_GNA AND "gna" IN_LIST CPACK_COMPONENTS_ALL)
set(CPACK_COMPONENT_GNA_DESCRIPTION "Intel® Gaussian Neural Accelerator inference plugin")
set(CPACK_COMPONENT_GNA_DESCRIPTION "Intel® Gaussian Neural Accelerator")
set(CPACK_COMPONENT_GNA_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_GNA_PACKAGE_NAME "libopenvino-intel-gna-plugin-${cpack_name_ver}")
# since we have libgna.so we need to call ldconfig and have `def_triggers` here
@@ -294,32 +298,28 @@ macro(ov_cpack_settings)
#
set(CPACK_COMPONENT_CORE_DEV_DESCRIPTION "Intel(R) Distribution of OpenVINO(TM) Toolkit C / C++ Development files")
set(CPACK_COMPONENT_CORE_DEV_DEPENDS "${OV_CPACK_COMP_CORE}")
list(APPEND CPACK_COMPONENT_CORE_DEV_DEPENDS ${frontends})
set(CPACK_COMPONENT_CORE_DEV_DEPENDS "${OV_CPACK_COMP_CORE};${frontends}")
set(CPACK_DEBIAN_CORE_DEV_PACKAGE_NAME "libopenvino-dev-${cpack_name_ver}")
ov_debian_generate_conflicts("${OV_CPACK_COMP_CORE_DEV}" ${conflicting_versions})
set(${OV_CPACK_COMP_CORE_DEV}_copyright "generic")
#
# Python API
# Python bindings
#
if(ENABLE_PYTHON_PACKAGING)
if(ENABLE_PYTHON)
ov_get_pyversion(pyversion)
set(python_component "${OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE}_${pyversion}")
set(python_component "${OV_CPACK_COMP_PYTHON_OPENVINO}_${pyversion}")
string(TOUPPER "${pyversion}" pyversion)
set(CPACK_COMPONENT_PYOPENVINO_PACKAGE_${pyversion}_DESCRIPTION "OpenVINO Python API")
set(CPACK_COMPONENT_PYOPENVINO_PACKAGE_${pyversion}_DEPENDS "${OV_CPACK_COMP_CORE}")
list(APPEND CPACK_COMPONENT_PYOPENVINO_PACKAGE_${pyversion}_DEPENDS ${installed_plugins})
list(APPEND CPACK_COMPONENT_PYOPENVINO_PACKAGE_${pyversion}_DEPENDS ${frontends})
set(CPACK_COMPONENT_PYOPENVINO_${pyversion}_DESCRIPTION "OpenVINO Python bindings")
set(CPACK_COMPONENT_PYOPENVINO_${pyversion}_DEPENDS "${OV_CPACK_COMP_CORE}")
list(APPEND CPACK_COMPONENT_PYOPENVINO_${pyversion}_DEPENDS ${installed_plugins})
list(APPEND CPACK_COMPONENT_PYOPENVINO_${pyversion}_DEPENDS ${frontends})
set(CPACK_DEBIAN_PYOPENVINO_PACKAGE_${pyversion}_PACKAGE_NAME "python3-openvino")
set(python_package "${CPACK_DEBIAN_PYOPENVINO_PACKAGE_${pyversion}_PACKAGE_NAME} (= ${cpack_full_ver})")
set(CPACK_DEBIAN_PYOPENVINO_PACKAGE_${pyversion}_PACKAGE_DEPENDS "python3, python3-numpy")
# we can have a single python installed, so we need to generate conflicts for all other versions
ov_debian_generate_conflicts(${python_component} ${conflicting_versions})
set(CPACK_DEBIAN_PYOPENVINO_${pyversion}_PACKAGE_NAME "libopenvino-python-${cpack_name_ver}")
set(CPACK_DEBIAN_PYOPENVINO_${pyversion}_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
set(CPACK_DEBIAN_PYOPENVINO_${pyversion}_PACKAGE_DEPENDS "python3")
# TODO: fix all the warnings
ov_debian_add_lintian_suppression(${python_component}
@@ -329,10 +329,6 @@ macro(ov_cpack_settings)
"executable-not-elf-or-script"
# all directories
"non-standard-dir-perm"
# usr/bin/benchmark_app
"binary-without-manpage"
# usr/bin/benchmark_app
"non-standard-executable-perm"
# all python files
"non-standard-file-perm")
set(${python_component}_copyright "generic")
@@ -364,12 +360,11 @@ macro(ov_cpack_settings)
set(samples_copyright "generic")
# python_samples
if(ENABLE_PYTHON_PACKAGING)
if(ENABLE_PYTHON)
set(CPACK_COMPONENT_PYTHON_SAMPLES_DESCRIPTION "Intel(R) Distribution of OpenVINO(TM) Toolkit Python Samples")
set(CPACK_COMPONENT_PYTHON_SAMPLES_DEPENDS "${python_component}")
set(CPACK_DEBIAN_PYTHON_SAMPLES_PACKAGE_NAME "openvino-samples-python-${cpack_name_ver}")
set(python_samples_package "${CPACK_DEBIAN_PYTHON_SAMPLES_PACKAGE_NAME} (= ${cpack_full_ver})")
set(CPACK_DEBIAN_PYTHON_SAMPLES_PACKAGE_DEPENDS "python3, ${python_package}")
set(CPACK_DEBIAN_PYTHON_SAMPLES_PACKAGE_DEPENDS "python3")
set(CPACK_DEBIAN_PYTHON_SAMPLES_PACKAGE_ARCHITECTURE "all")
set(python_samples_copyright "generic")
endif()
@@ -400,9 +395,6 @@ macro(ov_cpack_settings)
# all openvino
set(CPACK_COMPONENT_OPENVINO_DESCRIPTION "Intel(R) Distribution of OpenVINO(TM) Toolkit Libraries and Development files")
set(CPACK_COMPONENT_OPENVINO_DEPENDS "libraries_dev;${OV_CPACK_COMP_CPP_SAMPLES}")
if(ENABLE_PYTHON_PACKAGING)
list(APPEND CPACK_DEBIAN_OPENVINO_PACKAGE_DEPENDS "${python_package}, ${python_samples_package}")
endif()
set(CPACK_DEBIAN_OPENVINO_PACKAGE_NAME "openvino-${cpack_name_ver}")
set(CPACK_DEBIAN_OPENVINO_PACKAGE_ARCHITECTURE "all")
ov_debian_generate_conflicts(openvino ${conflicting_versions})


@@ -6,7 +6,7 @@ if(CPACK_GENERATOR STREQUAL "DEB")
include(cmake/packaging/debian.cmake)
elseif(CPACK_GENERATOR STREQUAL "RPM")
include(cmake/packaging/rpm.cmake)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN|VCPKG)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(cmake/packaging/common-libraries.cmake)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(cmake/packaging/nsis.cmake)
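The dispatch above is driven by `CPACK_GENERATOR`; a minimal way to exercise, for example, the Debian branch (hypothetical top-level snippet):

```cmake
# Selecting the generator before include(CPack) routes packaging through
# cmake/packaging/debian.cmake in the dispatch above.
set(CPACK_GENERATOR "DEB")
include(CPack)
# then: cmake --build build --target package   (or run `cpack -G DEB`)
```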


@@ -30,25 +30,30 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
string(TOUPPER ${item} UPPER_COMP)
# filter out some components, which do not need to be wrapped into an .rpm package
if(NOT OV_CPACK_COMP_${UPPER_COMP}_EXCLUDE_ALL AND
# skip OpenVINO Python API (pattern in form of "<pyie | pyopenvino | pyngraph>_python${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR}")
# filter out some components, which do not need to be wrapped into an .rpm package
if(# skip OpenVINO Python API and samples
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# because in case of .rpm package, pyopenvino_package_python${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR} is installed
(NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE}_python.*" OR ENABLE_PYTHON_PACKAGING) AND
NOT item STREQUAL OV_CPACK_COMP_PYTHON_SAMPLES AND
# python wheels are not needed to be wrapped by rpm packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
# see ticket # 82605
NOT item STREQUAL "gna" AND
# temporary block nvidia
NOT item STREQUAL "nvidia" AND
# don't install Intel OpenMP
# don't install Intel OpenMP during rpm
NOT item STREQUAL "omp" AND
# even in the case of system TBB we have installation rules for wheel packages,
# so we need to skip these explicitly
NOT item MATCHES "^tbb(_dev)?$" AND
# the same for pugixml
NOT item STREQUAL "pugixml")
list(APPEND CPACK_COMPONENTS_ALL ${item})
NOT item STREQUAL "pugixml" AND
# we have copyright file for rpm package
NOT item STREQUAL OV_CPACK_COMP_LICENSING AND
# compile_tool is not needed
NOT item STREQUAL OV_CPACK_COMP_CORE_TOOLS AND
# not appropriate components
NOT item STREQUAL OV_CPACK_COMP_DEPLOYMENT_MANAGER AND
NOT item STREQUAL OV_CPACK_COMP_INSTALL_DEPENDENCIES AND
NOT item STREQUAL OV_CPACK_COMP_SETUPVARS)
list(APPEND CPACK_COMPONENTS_ALL ${item})
endif()
endforeach()
list(REMOVE_DUPLICATES CPACK_COMPONENTS_ALL)
@@ -74,8 +79,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with rpm packages from Intel install team
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where updated RPM packages are introduced; other 2022.3.X releases are LTS updates
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")
@@ -117,7 +121,7 @@ macro(ov_cpack_settings)
# hetero
if(ENABLE_HETERO)
set(CPACK_COMPONENT_HETERO_DESCRIPTION "OpenVINO Hetero software plugin")
set(CPACK_COMPONENT_HETERO_DESCRIPTION "OpenVINO Hetero plugin")
set(CPACK_RPM_HETERO_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_HETERO_PACKAGE_NAME "libopenvino-hetero-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages hetero)
@@ -126,7 +130,7 @@ macro(ov_cpack_settings)
# auto batch
if(ENABLE_AUTO_BATCH)
set(CPACK_COMPONENT_BATCH_DESCRIPTION "OpenVINO Automatic Batching software plugin")
set(CPACK_COMPONENT_BATCH_DESCRIPTION "OpenVINO Automatic Batching plugin")
set(CPACK_RPM_BATCH_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_BATCH_PACKAGE_NAME "libopenvino-auto-batch-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages batch)
@@ -136,16 +140,16 @@ macro(ov_cpack_settings)
# multi / auto plugins
if(ENABLE_MULTI)
if(ENABLE_AUTO)
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Auto / Multi software plugin")
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Auto / Multi plugin")
else()
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Multi software plugin")
set(CPACK_COMPONENT_MULTI_DESCRIPTION "OpenVINO Multi plugin")
endif()
set(CPACK_RPM_MULTI_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_MULTI_PACKAGE_NAME "libopenvino-auto-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages multi)
set(multi_copyright "generic")
elseif(ENABLE_AUTO)
set(CPACK_COMPONENT_AUTO_DESCRIPTION "OpenVINO Auto software plugin")
set(CPACK_COMPONENT_AUTO_DESCRIPTION "OpenVINO Auto plugin")
set(CPACK_RPM_AUTO_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_AUTO_PACKAGE_NAME "libopenvino-auto-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages auto)
@@ -156,11 +160,11 @@ macro(ov_cpack_settings)
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU inference plugin")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU inference plugin")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU")
set(cpu_copyright "generic")
else()
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
@@ -171,7 +175,7 @@ macro(ov_cpack_settings)
# intel-gpu
if(ENABLE_INTEL_GPU)
set(CPACK_COMPONENT_GPU_DESCRIPTION "Intel® Processor Graphics inference plugin")
set(CPACK_COMPONENT_GPU_DESCRIPTION "Intel® Processor Graphics")
set(CPACK_RPM_GPU_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_GPU_PACKAGE_NAME "libopenvino-intel-gpu-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages gpu)
@@ -180,7 +184,7 @@ macro(ov_cpack_settings)
# intel-gna
if(ENABLE_INTEL_GNA AND "gna" IN_LIST CPACK_COMPONENTS_ALL)
set(CPACK_COMPONENT_GNA_DESCRIPTION "Intel® Gaussian Neural Accelerator inference plugin")
set(CPACK_COMPONENT_GNA_DESCRIPTION "Intel® Gaussian Neural Accelerator")
set(CPACK_RPM_GNA_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_GNA_PACKAGE_NAME "libopenvino-intel-gna-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages gna)
@@ -268,26 +272,17 @@ macro(ov_cpack_settings)
# Python bindings
#
if(ENABLE_PYTHON_PACKAGING)
if(ENABLE_PYTHON)
ov_get_pyversion(pyversion)
set(python_component "${OV_CPACK_COMP_PYTHON_OPENVINO_PACKAGE}_${pyversion}")
string(TOUPPER "${pyversion}" pyversion_upper)
set(python_component "${OV_CPACK_COMP_PYTHON_OPENVINO}_${pyversion}")
string(TOUPPER "${pyversion}" pyversion)
set(CPACK_COMPONENT_PYOPENVINO_PACKAGE_${pyversion_upper}_DESCRIPTION "OpenVINO Python API")
set(CPACK_RPM_PYOPENVINO_PACKAGE_${pyversion_upper}_PACKAGE_REQUIRES
"${core_package}, ${frontend_packages}, ${plugin_packages}, python3, python3-numpy")
set(CPACK_RPM_PYOPENVINO_PACKAGE_${pyversion_upper}_PACKAGE_NAME "python3-openvino")
set(python_package "${CPACK_RPM_PYOPENVINO_PACKAGE_${pyversion_upper}_PACKAGE_NAME} = ${cpack_full_ver}")
set(CPACK_COMPONENT_PYOPENVINO_${pyversion}_DESCRIPTION "OpenVINO Python bindings")
set(CPACK_RPM_PYOPENVINO_${pyversion}_PACKAGE_REQUIRES
"${core_package}, ${frontend_packages}, ${plugin_packages}, python3")
set(CPACK_RPM_PYOPENVINO_${pyversion}_PACKAGE_NAME "libopenvino-python-${cpack_name_ver}")
set(python_package "${CPACK_RPM_PYOPENVINO_${pyversion}_PACKAGE_NAME} = ${cpack_full_ver}")
set(${python_component}_copyright "generic")
# we can have a single python installed, so we need to generate conflicts for all other versions
ov_rpm_generate_conflicts(${python_component} ${conflicting_versions})
ov_rpm_add_rpmlint_suppression("${python_component}"
# all directories
"non-standard-dir-perm /usr/lib64/${pyversion}/site-packages/openvino/*"
"non-standard-dir-perm /usr/lib64/${pyversion}/site-packages/ngraph/*"
)
endif()
#
@@ -322,20 +317,12 @@ macro(ov_cpack_settings)
set(samples_copyright "generic")
# python_samples
if(ENABLE_PYTHON_PACKAGING)
if(ENABLE_PYTHON)
set(CPACK_COMPONENT_PYTHON_SAMPLES_DESCRIPTION "Intel(R) Distribution of OpenVINO(TM) Toolkit Python Samples")
set(CPACK_RPM_PYTHON_SAMPLES_PACKAGE_REQUIRES "${python_package}, python3")
set(CPACK_RPM_PYTHON_SAMPLES_PACKAGE_NAME "openvino-samples-python-${cpack_name_ver}")
set(python_samples_package "${CPACK_RPM_PYTHON_SAMPLES_PACKAGE_NAME} = ${cpack_full_ver}")
set(CPACK_RPM_PYTHON_SAMPLES_PACKAGE_ARCHITECTURE "noarch")
set(python_samples_copyright "generic")
ov_rpm_add_rpmlint_suppression(${OV_CPACK_COMP_PYTHON_SAMPLES}
# all files
"non-executable-script /usr/share/openvino/samples/python/*"
# similar requirements.txt files
"files-duplicate /usr/share/openvino/samples/python/*"
)
endif()
#
@@ -366,9 +353,6 @@ macro(ov_cpack_settings)
# all openvino
set(CPACK_COMPONENT_OPENVINO_DESCRIPTION "Intel(R) Distribution of OpenVINO(TM) Toolkit Libraries and Development files")
set(CPACK_RPM_OPENVINO_PACKAGE_REQUIRES "${libraries_dev_package}, ${samples_package}")
if(ENABLE_PYTHON_PACKAGING)
set(CPACK_DEBIAN_OPENVINO_PACKAGE_DEPENDS "${CPACK_RPM_OPENVINO_PACKAGE_REQUIRES}, ${python_package}, ${python_samples_package}")
endif()
set(CPACK_RPM_OPENVINO_PACKAGE_NAME "openvino-${cpack_name_ver}")
set(CPACK_RPM_OPENVINO_PACKAGE_ARCHITECTURE "noarch")
ov_rpm_generate_conflicts(openvino ${conflicting_versions})


@@ -37,7 +37,6 @@ include(CMakeFindDependencyMacro)
find_dependency(OpenVINO
PATHS "${CMAKE_CURRENT_LIST_DIR}"
"${CMAKE_CURRENT_LIST_DIR}/../openvino${InferenceEngine_VERSION}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)


@@ -238,15 +238,7 @@ macro(_ov_find_pugixml)
if(_ov_pugixml_pkgconfig_interface AND NOT ANDROID)
_ov_find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
_ov_find_dependency(PugiXML NAMES PugiXML pugixml)
endif()
# see https://cmake.org/cmake/help/latest/command/add_library.html#alias-libraries
# cmake older than 3.18 cannot create an alias for imported non-GLOBAL targets
# so, we have to use 'GLOBAL' property for cases when we call from OpenVINODeveloperPackage
# because the alias openvino::pugixml is created later
if(CMAKE_VERSION VERSION_LESS 3.18 AND OpenVINODeveloperPackage_DIR)
set(_ov_pugixml_visibility GLOBAL)
_ov_find_dependency(PugiXML REQUIRED)
endif()
if(PugiXML_FOUND)
@@ -256,15 +248,9 @@ macro(_ov_find_pugixml)
set(_ov_pugixml_target pugixml::pugixml)
endif()
if(OpenVINODeveloperPackage_DIR)
# align with build tree and create alias
if(_ov_pugixml_visibility STREQUAL "GLOBAL")
set_target_properties(${_ov_pugixml_target} PROPERTIES IMPORTED_GLOBAL TRUE)
endif()
# check whether openvino::pugixml is already defined in case of
# OpenVINODeveloperPackageConfig.cmake is found multiple times
if(NOT TARGET openvino::pugixml)
add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
endif()
set_property(TARGET ${_ov_pugixml_target} PROPERTY IMPORTED_GLOBAL ON)
# align with build tree
add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
endif()
unset(_ov_pugixml_target)
elseif(PkgConfig_FOUND)
@@ -279,7 +265,7 @@ macro(_ov_find_pugixml)
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
${_ov_pugixml_visibility}
GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
@@ -287,11 +273,7 @@ macro(_ov_find_pugixml)
if(pugixml_FOUND)
if(OpenVINODeveloperPackage_DIR)
# check whether openvino::pugixml is already defined in case of
# OpenVINODeveloperPackageConfig.cmake is found multiple times
if(NOT TARGET openvino::pugixml)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
endif()
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
endif()
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
@@ -313,8 +295,6 @@ macro(_ov_find_pugixml)
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
unset(_ov_pugixml_visibility)
endif()
endmacro()
@@ -334,7 +314,7 @@ macro(_ov_find_ade)
# whether 'ade' is found via find_package
set(_ENABLE_SYSTEM_ADE "@ade_FOUND@")
if(_OV_ENABLE_GAPI_PREPROCESSING AND _ENABLE_SYSTEM_ADE)
_ov_find_dependency(ade @ade_VERSION@)
_ov_find_dependency(ade 0.1.2)
endif()
unset(_OV_ENABLE_GAPI_PREPROCESSING)
unset(_ENABLE_SYSTEM_ADE)
@@ -343,18 +323,22 @@ endmacro()
macro(_ov_find_intel_cpu_dependencies)
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
if(_OV_ENABLE_CPU_ACL)
set(_ov_in_install_tree "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
if(_ov_in_install_tree)
if(_ov_as_external_package)
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set(ACL_DIR "${CMAKE_CURRENT_LIST_DIR}")
set(_ov_find_acl_options NO_DEFAULT_PATH)
set(_ov_find_acl_path "${CMAKE_CURRENT_LIST_DIR}")
else()
set_and_check(ACL_DIR "@PACKAGE_FIND_ACL_PATH@")
set_and_check(_ov_find_acl_path "@PACKAGE_FIND_ACL_PATH@")
endif()
_ov_find_dependency(ACL)
_ov_find_dependency(ACL
NO_MODULE
PATHS "${_ov_find_acl_path}"
${_ov_find_acl_options})
unset(_ov_in_install_tree)
unset(ARM_COMPUTE_LIB_DIR)
unset(_ov_find_acl_path)
unset(_ov_find_acl_options)
endif()
unset(_OV_ENABLE_CPU_ACL)
endmacro()
@@ -391,18 +375,18 @@ endmacro()
macro(_ov_find_protobuf_frontend_dependency)
set(_OV_ENABLE_SYSTEM_PROTOBUF "@ENABLE_SYSTEM_PROTOBUF@")
set(_OV_PROTOBUF_PACKAGE_CONFIG "@protobuf_config@")
if(_OV_ENABLE_SYSTEM_PROTOBUF)
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT ${_OV_PROTOBUF_PACKAGE_CONFIG})
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_PROTOBUF AND NOT TARGET protobuf::libprotobuf)
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT)
endif()
unset(_OV_PROTOBUF_PACKAGE_CONFIG)
unset(_OV_ENABLE_SYSTEM_PROTOBUF)
endmacro()
macro(_ov_find_tensorflow_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_SNAPPY "@ENABLE_SYSTEM_SNAPPY@")
set(_ov_snappy_lib "@ov_snappy_lib@")
if(_OV_ENABLE_SYSTEM_SNAPPY)
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_SNAPPY AND NOT TARGET ${_ov_snappy_lib})
_ov_find_dependency(Snappy @Snappy_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_SNAPPY)
@@ -419,18 +403,16 @@ macro(_ov_find_onnx_frontend_dependencies)
endmacro()
function(_ov_target_no_deprecation_error)
if(NOT CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# older macOS x86_64 does not support this linker option
if(CMAKE_CROSSCOMPILING AND NOT APPLE)
set_target_properties(${ARGV} PROPERTIES
INTERFACE_LINK_OPTIONS "-Wl,--allow-shlib-undefined")
endif()
if(NOT MSVC)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(flags "-diag-warning=1786")
else()
set(flags "-Wno-error=deprecated-declarations")
endif()
if(CMAKE_CROSSCOMPILING)
set_target_properties(${ARGV} PROPERTIES
INTERFACE_LINK_OPTIONS "-Wl,--allow-shlib-undefined")
endif()
set_target_properties(${ARGV} PROPERTIES INTERFACE_COMPILE_OPTIONS ${flags})
endif()


@@ -15,7 +15,7 @@ list(APPEND ov_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_CXX_LINKER_LAUNCHER CMAKE_C_LINKER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET
CPACK_GENERATOR)
CMAKE_CONFIGURATION_TYPES CMAKE_DEFAULT_BUILD_TYPE)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from OpenVINO Developer package")
@@ -33,11 +33,7 @@ set(ENABLE_PLUGINS_XML ON)
# for samples in 3rd party projects
if(ENABLE_SAMPLES)
if("@gflags_FOUND@")
set_and_check(gflags_DIR "@gflags_DIR@")
else()
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
endif()
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")


@@ -1,5 +1,5 @@
# ******************************************************************************
# Copyright 2017-2023 Intel Corporation
# Copyright 2017-2022 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
# ******************************************************************************
#
#
# ngraph config file
# FindNGraph
# ------
#
# This script defines the following variables and imported targets:
@@ -44,7 +44,7 @@ include(CMakeFindDependencyMacro)
find_dependency(OpenVINO
PATHS "${CMAKE_CURRENT_LIST_DIR}"
"${CMAKE_CURRENT_LIST_DIR}/../openvino${ngraph_VERSION}"
"${CMAKE_CURRENT_LIST_DIR}/ngraph"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)


@@ -4,7 +4,7 @@
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR i386)
set(CMAKE_STRIP i686-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE i686-linux-gnu-pkg-config CACHE PATH "Path to 32-bits pkg-config")
set(CMAKE_CXX_FLAGS_INIT "-m32")


@@ -14,6 +14,82 @@
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
set(PKG_CONFIG_EXECUTABLE x86_64-w64-mingw32-pkg-config CACHE PATH "Path to Windows x86_64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
SET(WIN32)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_program(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(WIN32)
SET(UNIX)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_package(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()


@@ -35,3 +35,66 @@ set(CMAKE_MODULE_LINKER_FLAGS_INIT "-L${CMAKE_SYSROOT}/lib")
set(CMAKE_C_STANDARD_LIBRARIES_INIT "-latomic" CACHE STRING "" FORCE)
set(CMAKE_CXX_STANDARD_LIBRARIES_INIT "-latomic" CACHE STRING "" FORCE)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()


@@ -2,13 +2,74 @@
# SPDX-License-Identifier: Apache-2.0
#
# Install compiler on debian using:
# apt-get install -y gcc-x86-64-linux-gnu g++-x86-64-linux-gnu binutils-x86-64-linux-gnu pkg-config-x86-64-linux-gnu
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR amd64)
set(CMAKE_C_COMPILER x86_64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER x86_64-linux-gnu-g++)
set(CMAKE_STRIP x86_64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE x86_64-linux-gnu-pkg-config CACHE PATH "Path to amd64 pkg-config")
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to amd64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()

conan.lock Normal file

@@ -0,0 +1,36 @@
{
"version": "0.5",
"requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
"snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
"pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
"pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
"opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
"opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
"onnx/1.13.1#f11071c8aba52731a5205b028945acbb%1693130310.715",
"onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
"nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
"ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
"hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
"gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"ade/0.1.2a#b569ff943843abd004e65536e265a445%1688125447.482"
],
"build_requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
"patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
"m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
"libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
"gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
"automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
"autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
],
"python_requires": []
}
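Since the lockfile above is plain JSON, its pinned references are easy to inspect with a short script. A minimal sketch (the two sample entries are copied from the lockfile shown above; parsing the reference layout as ``name/version#revision%timestamp`` is an assumption based on those entries):

```python
import json

# Two entries copied from the lockfile above; the full file follows the same shape.
lock_text = """
{
  "version": "0.5",
  "requires": [
    "pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
    "onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235"
  ],
  "build_requires": [],
  "python_requires": []
}
"""

lock = json.loads(lock_text)
# Each reference reads "<name>/<version>#<recipe revision>%<timestamp>".
pins = {ref.split("/")[0]: ref.split("/", 1)[1].split("#", 1)[0]
        for ref in lock["requires"]}
print(pins)  # {'pugixml': '1.13', 'onetbb': '2021.10.0'}
```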


@@ -2,12 +2,12 @@
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.9
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/2023.04.17
opencl-clhpp-headers/2023.04.17
opencl-headers/2023.04.17
opencl-icd-loader/[>=2022.09.30]
# opencl-clhpp-headers/[>=2022.09.30]
opencl-headers/[>=2022.09.30]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
@@ -24,6 +24,8 @@ flatbuffers/[>=22.9.24]
[options]
protobuf/*:lite=True
onetbb/*:tbbmalloc=True
onetbb/*:tbbproxy=True
flatbuffers/*:header_only=True
[generators]


@@ -1,85 +0,0 @@
{
"version": "0.2",
"ignorePaths": [],
"dictionaryDefinitions": [],
"dictionaries": [],
"words": [
"armhf",
"autoremove",
"bblayers",
"bitbake",
"buildtools",
"caffe",
"chrpath",
"devel",
"devtoolset",
"dgpu",
"diffstat",
"Dockerfiles",
"dpkg",
"DWORD",
"endsphinxdirective",
"epel",
"ERRORLEVEL",
"executionpolicy",
"fafe",
"globbing",
"gmmlib",
"GNAs",
"googlenet",
"gpgcheck",
"gpgkey",
"hashfile",
"HKLM",
"HOSTTOOLS",
"iigd",
"insmod",
"intelocl",
"kaldi",
"Khronos",
"ldconfig",
"libopencl",
"libpython",
"linmac",
"LTSC",
"maxdepth",
"mklink",
"monodepth",
"mxnet",
"nocache",
"noglob",
"nohup",
"norestart",
"ocloc",
"onnx",
"opencl",
"openembedded",
"openvino",
"PACKAGECONFIG",
"patchelf",
"pkgdata",
"pkgs",
"pypi",
"pzstd",
"quantizer",
"Redistributable",
"remotesigned",
"repolist",
"rmmod",
"servercore",
"setupvars",
"SETX",
"skylake",
"sphinxdirective",
"toctree",
"Uninstallation",
"userspace",
"venv",
"WDDM",
"WORKDIR",
"yocto",
"zstd"
],
"ignoreWords": [],
"import": []
}


@@ -3,7 +3,8 @@
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn how to convert and compile models from different frameworks or read them directly.
:description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
.. toctree::
@@ -15,41 +16,45 @@
omz_tools_downloader
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, `Torchvision models <https://pytorch.org/hub/>`__.
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows converting them to its own `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ (`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__), providing a tool dedicated to this task.
Import a model using ``read_model()``
#################################################
There are several options to convert a model from original framework to OpenVINO model format (``ov.Model``).
Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate step for model conversion, that is ``mo.convert_model``.
The ``read_model()`` method reads a model from a file and produces ``ov.Model``. If the file is in one of the supported original framework file formats, it is converted automatically to OpenVINO Intermediate Representation. If the file is already in the OpenVINO IR format, it is read "as-is", without any conversion involved. ``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework :doc:`file formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.
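As a toy illustration of the distinction drawn above (this is not an OpenVINO API, just a plain-Python helper; the extension set is an assumption derived from the formats named in this paragraph):

```python
from pathlib import Path

# File formats read_model() can load directly, per the paragraph above:
# OpenVINO IR (.xml), ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite.
DIRECT_READ = {".xml", ".onnx", ".pdmodel", ".pb", ".tflite"}

def needs_offline_conversion(model_path: str) -> bool:
    """True if the file must first go through mo / mo.convert_model()."""
    return Path(model_path).suffix.lower() not in DIRECT_READ

print(needs_offline_conversion("model.onnx"))  # False: read_model() converts ONNX internally
print(needs_offline_conversion("model.pt"))    # True: a PyTorch checkpoint needs conversion first
```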
Convert a model in Python
######################################
You can also convert a model from original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using ``convert_model()`` method. More details about ``convert_model()`` are provided in :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` .
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application. In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
.. image:: _static/images/model_conversion_diagram.svg
:alt: model conversion diagram
.. note::
Convert a model with ``mo`` command-line tool
#############################################
``convert_model()`` also allows you to perform input/output cut, add pre-processing or add custom Python conversion extensions.
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices, to the same extent as the ``mo.convert_model`` method.
Convert a model with Python using ``mo.convert_model()``
###########################################################
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
The figure below illustrates the typical workflow for deploying a trained deep learning model:
.. image:: _static/images/BASIC_FLOW_MO_simplified.svg
where IR is a pair of files describing the model:
The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
.. image:: _static/images/model_conversion_diagram.svg
:alt: model conversion diagram
Model files (not Python objects) from ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate model conversion step, that is, ``mo.convert_model``. OpenVINO provides C++ and Python APIs for importing such models into OpenVINO Runtime directly, by just calling the ``read_model`` method.
Convert a model using ``mo`` command-line tool
#################################################
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices, to the same extent as the ``mo.convert_model()`` method.
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
The results of the two conversion methods described above, ``mo`` and ``mo.convert_model()``, are identical, provided the same set of parameters is used. You can choose either of them, depending on what is most convenient for you.
@@ -58,5 +63,5 @@ This section describes how to obtain and prepare your model for work with OpenVI
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
@endsphinxdirective
@endsphinxdirective


@@ -19,6 +19,7 @@
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
**Neural Network Compression Framework (NNCF)**
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
@@ -40,6 +41,7 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__
**OpenVINO™ Security Add-on**
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
@@ -94,6 +96,5 @@ A web-based tool for deploying deep learning models. Built on the core of OpenVI
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.
@endsphinxdirective


@@ -17,6 +17,7 @@
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
pytorch_2_0_torch_compile
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`


@@ -0,0 +1,157 @@
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}
@sphinxdirective
The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager mode, but with the use of ``torch.compile`` it goes through the following steps:
1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
* compiled by TorchDynamo and "flattened",
* executed in eager mode as a fallback, due to unsupported Python constructs (such as control-flow code).
2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.
How to Use
#################
To use ``torch.compile``, you need to add an import statement and define one of the two available backends:
| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch based tracing/scripting.
| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch Torchscript, and then converted to OpenVINO representation.
.. tab-set::
.. tab-item:: openvino
:sync: backend-openvino
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino.svg
:width: 992px
:height: 720px
:scale: 60%
:align: center
.. tab-item:: openvino_ts
:sync: backend-openvino-ts
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino_ts')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino_ts.svg
:width: 1088px
:height: 720px
:scale: 60%
:align: center
Environment Variables
+++++++++++++++++++++++++++
* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
this variable to GPU.0, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
This makes them available for subsequent application executions, reducing the first-inference latency.
By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.
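A minimal sketch of applying these variables from Python, before the first compiled call (the values are illustrative; the ``torch``/``openvino`` lines are left as comments so the snippet stands on its own):

```python
import os

# Environment variables must be set before the first torch.compile() call.
os.environ["OPENVINO_TORCH_BACKEND_DEVICE"] = "GPU.0"  # run on the integrated GPU instead of CPU
os.environ["OPENVINO_TORCH_MODEL_CACHING"] = "True"    # cache optimized model files after the first run
os.environ["OPENVINO_TORCH_CACHE_DIR"] = "./ov_cache"  # custom cache location instead of ./cache

# With the environment prepared, compile as shown earlier:
# import openvino.torch
# model = torch.compile(model, backend="openvino")
```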
Windows support
++++++++++++++++++++++++++
Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following
the instructions below:
1. Install the PyTorch nightly wheel file - `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__ .
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``
3. Find the function called ``check_if_dynamo_supported()``:
.. code-block:: python
def check_if_dynamo_supported():
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
4. Comment out the first two lines of this function, so it looks like this:
.. code-block:: python
def check_if_dynamo_supported():
#if sys.platform == "win32":
# raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for Stable Diffusion-based
image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging OpenVINO
``torch.compile`` capability. Detailed instructions are available in
`Stable Diffusion WebUI repository <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__.
Architecture
#################
The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:
* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
(PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
that cannot be extracted to an FX graph are executed in the native Python environment).
It maintains the eager-mode capabilities using
`Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
generated graphs are valid.
* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.
When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs, and the graph partitioning module traverses the inlined FX graph to identify
operators supported by OpenVINO.
All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If the subgraph
fails during OpenVINO conversion, the subgraph falls back to PyTorch's default inductor backend.
Additional Resources
############################
* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`_
@endsphinxdirective


@@ -199,4 +199,4 @@ See Also
* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective
@endsphinxdirective


@@ -187,7 +187,7 @@ input name at position ``i`` maps to OpenVINO operation input at position ``i``
Let's see the following example. Like previously, we'd like to map ``CustomOperation`` in the original model,
to OpenVINO ``CustomOperation`` as is (so their name and attributes names match). This time, that framework operation
inputs and outputs are not strictly ordered and can be identified by their names ``A``, ``B``, ``C`` for inputs
inputs and outputs are not strictly ordered and can be identified by their names ``A``, ``B``, ``C`` for inputs
and ``X``, ``Y`` for outputs. Those inputs and outputs can be mapped to OpenVINO operation, such that inputs
``A``, ``B``, ``C`` map to OpenVINO ``CustomOperation`` first, second and third input and ``X`` and ``Y``
outputs map to OpenVINO ``CustomOperation`` first and second output respectively.
@@ -301,11 +301,19 @@ This mapping also specifies the input name "X" and output name "Out".
The last step is to register this custom operation, as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
.. important::
To map an operation to a specific framework, you have to link against the respective
frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:
.. code-block:: cmake
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
Mapping to Multiple Operations with ConversionExtension
#######################################################
@@ -390,7 +398,7 @@ corresponding outputs of the original framework operation in the same order.
Some frameworks require output names of the operation to be provided during conversion.
For PaddlePaddle operations, it is generally necessary to provide names for all outputs using the ``NamedOutputs`` container.
Usually, those names can be found in the source code of the individual operation in the PaddlePaddle repository.
The following example shows such a conversion for the ``top_k_v2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7c8ab4f15874d235968471bcf876c89c795d601e69891208107b8b72aa58eb1
size 70014


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d5ccf51fe1babb93d96d042494695a6a6e055d1f8ebf7eef5083d54d8987a23
size 58789


@@ -1,40 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex:transformation]
from openvino.tools.mo.front.common.replacement import FrontReplacementSubgraph
from openvino.tools.mo.graph.graph import Graph
class Complex(FrontReplacementSubgraph):
enabled = True
def pattern(self):
return dict(
nodes=[
('strided_slice_real', dict(op='StridedSlice')),
('strided_slice_imag', dict(op='StridedSlice')),
('complex', dict(op='Complex')),
],
edges=[
('strided_slice_real', 'complex', {'in': 0}),
('strided_slice_imag', 'complex', {'in': 1}),
])
@staticmethod
def replace_sub_graph(graph: Graph, match: dict):
strided_slice_real = match['strided_slice_real']
strided_slice_imag = match['strided_slice_imag']
complex_node = match['complex']
# make sure that both strided slice operations get the same data as input
assert strided_slice_real.in_port(0).get_source() == strided_slice_imag.in_port(0).get_source()
# identify the output port of the operation producing data for strided slice nodes
input_node_output_port = strided_slice_real.in_port(0).get_source()
input_node_output_port.disconnect()
# change the connection so now all consumers of "complex_node" get data from input node of strided slice nodes
complex_node.out_port(0).get_connection().set_source(input_node_output_port)
#! [complex:transformation]


@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex_abs:transformation]
import numpy as np
from openvino.tools.mo.ops.elementwise import Pow
from openvino.tools.mo.ops.ReduceOps import ReduceSum
from openvino.tools.mo.front.common.replacement import FrontReplacementOp
from openvino.tools.mo.graph.graph import Graph, Node
from openvino.tools.mo.ops.const import Const
class ComplexAbs(FrontReplacementOp):
op = "ComplexAbs"
enabled = True
def replace_op(self, graph: Graph, node: Node):
pow_2 = Const(graph, {'value': np.float32(2.0)}).create_node()
reduce_axis = Const(graph, {'value': np.int32(-1)}).create_node()
pow_0_5 = Const(graph, {'value': np.float32(0.5)}).create_node()
sq = Pow(graph, dict(name=node.in_node(0).name + '/sq', power=2.0)).create_node([node.in_node(0), pow_2])
sum_sq = ReduceSum(graph, dict(name=sq.name + '/sum')).create_node([sq, reduce_axis])
sqrt = Pow(graph, dict(name=sum_sq.name + '/sqrt', power=0.5)).create_node([sum_sq, pow_0_5])
return [sqrt.id]
#! [complex_abs:transformation]
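The replacement above implements ``|a + bi| = sqrt(a^2 + b^2)`` as ``Pow(ReduceSum(Pow(x, 2), -1), 0.5)``, with the last axis holding the real and imaginary parts. A NumPy sketch of the same decomposition (illustrative only, not MO code):

```python
import numpy as np

def complex_abs_decomposed(x):
    # Same chain as the transformation: square, reduce over the last axis, square root.
    return np.power(np.sum(np.power(x, 2.0), axis=-1), 0.5)

z = np.array([[3.0, 4.0], [1.0, 0.0]])   # (real, imag) pairs
print(complex_abs_decomposed(z))          # [5. 1.]
```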


@@ -1,33 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# ! [fft_ext:extractor]
from ...ops.FFT import FFT
from openvino.tools.mo.front.extractor import FrontExtractorOp
class FFT2DFrontExtractor(FrontExtractorOp):
op = 'FFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 0
}
FFT.update_node_stat(node, attrs)
return cls.enabled
class IFFT2DFrontExtractor(FrontExtractorOp):
op = 'IFFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 1
}
FFT.update_node_stat(node, attrs)
return cls.enabled
# ! [fft_ext:extractor]


@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [fft:operation]
from openvino.tools.mo.front.common.partial_infer.elemental import copy_shape_infer
from openvino.tools.mo.graph.graph import Graph
from openvino.tools.mo.ops.op import Op
class FFT(Op):
op = 'FFT'
enabled = False
def __init__(self, graph: Graph, attrs: dict):
super().__init__(graph, {
'type': self.op,
'op': self.op,
'version': 'custom_opset',
'inverse': None,
'in_ports_count': 1,
'out_ports_count': 1,
'infer': copy_shape_infer
}, attrs)
def backend_attrs(self):
return ['inverse']
#! [fft:operation]


@@ -1,106 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [mri_demo:demo]
import numpy as np
import cv2 as cv
import argparse
import time
from openvino.inference_engine import IECore
def kspace_to_image(kspace):
assert(len(kspace.shape) == 3 and kspace.shape[-1] == 2)
fft = cv.idft(kspace, flags=cv.DFT_SCALE)
img = cv.magnitude(fft[:,:,0], fft[:,:,1])
return cv.normalize(img, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='MRI reconstruction demo for network from https://github.com/rmsouza01/Hybrid-CS-Model-MRI (https://arxiv.org/abs/1810.12473)')
parser.add_argument('-i', '--input', dest='input', help='Path to input .npy file with MRI scan data.')
parser.add_argument('-p', '--pattern', dest='pattern', help='Path to sampling mask in .npy format.')
parser.add_argument('-m', '--model', dest='model', help='Path to .xml file of OpenVINO IR.')
parser.add_argument('-l', '--cpu_extension', dest='cpu_extension', help='Path to extensions library with FFT implementation.')
parser.add_argument('-d', '--device', dest='device', default='CPU',
help='Optional. Specify the target device to infer on; CPU, '
'GPU, GNA is acceptable. For non-CPU targets, '
'HETERO plugin is used with CPU fallbacks to FFT implementation. '
'Default value is CPU')
args = parser.parse_args()
xml_path = args.model
assert(xml_path.endswith('.xml'))
bin_path = xml_path[:xml_path.rfind('.xml')] + '.bin'
ie = IECore()
ie.add_extension(args.cpu_extension, "CPU")
net = ie.read_network(xml_path, bin_path)
device = 'CPU' if args.device == 'CPU' else ('HETERO:' + args.device + ',CPU')
exec_net = ie.load_network(net, device)
# Hybrid-CS-Model-MRI/Data/stats_fs_unet_norm_20.npy
stats = np.array([2.20295299e-01, 1.11048916e+03, 4.16997984e+00, 4.71741395e+00], dtype=np.float32)
# Hybrid-CS-Model-MRI/Data/sampling_mask_20perc.npy
var_sampling_mask = np.load(args.pattern) # TODO: can we generate it in runtime?
print('Sampling ratio:', 1.0 - var_sampling_mask.sum() / var_sampling_mask.size)
data = np.load(args.input)
num_slices, height, width = data.shape[0], data.shape[1], data.shape[2]
pred = np.zeros((num_slices, height, width), dtype=np.uint8)
data /= np.sqrt(height * width)
print('Compute...')
start = time.time()
for slice_id, kspace in enumerate(data):
kspace = kspace.copy()
# Apply sampling
kspace[var_sampling_mask] = 0
kspace = (kspace - stats[0]) / stats[1]
# Forward through network
input = np.expand_dims(kspace.transpose(2, 0, 1), axis=0)
outputs = exec_net.infer(inputs={'input_1': input})
output = next(iter(outputs.values()))
output = output.reshape(height, width)
# Save predictions
pred[slice_id] = cv.normalize(output, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
print('Elapsed time: %.1f seconds' % (time.time() - start))
WIN_NAME = 'MRI reconstruction with OpenVINO'
slice_id = 0
def callback(pos):
global slice_id
slice_id = pos
kspace = data[slice_id]
img = kspace_to_image(kspace)
kspace[var_sampling_mask] = 0
masked = kspace_to_image(kspace)
rec = pred[slice_id]
# Add a header
border_size = 20
render = cv.hconcat((img, masked, rec))
render = cv.copyMakeBorder(render, border_size, 0, 0, 0, cv.BORDER_CONSTANT, value=255)
cv.putText(render, 'Original', (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.putText(render, 'Sampled (PSNR %.1f)' % cv.PSNR(img, masked), (width, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.putText(render, 'Reconstructed (PSNR %.1f)' % cv.PSNR(img, rec), (width*2, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.imshow(WIN_NAME, render)
cv.waitKey(1)
cv.namedWindow(WIN_NAME, cv.WINDOW_NORMAL)
print(num_slices)
cv.createTrackbar('Slice', WIN_NAME, num_slices // 2, num_slices - 1, callback)
callback(num_slices // 2) # Trigger initial visualization
cv.waitKey()
#! [mri_demo:demo]


@@ -36,7 +36,7 @@ AsyncInferRequest()
The main goal of the ``AsyncInferRequest`` constructor is to define a device pipeline ``m_pipeline``. The example below demonstrates ``m_pipeline`` creation with the following stages:
* ``infer_preprocess_and_start_pipeline`` is a CPU lightweight task to submit tasks to a remote device.
* ``wait_pipeline`` is a CPU non-compute task that waits for a response from a remote device.
* ``infer_postprocess`` is a CPU compute task.
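The three stages above can be sketched in plain Python. This is an illustration of the staged structure only; ``m_pipeline`` here is just a list of callables, not the actual ``AsyncInferRequest`` member:

```python
from concurrent.futures import ThreadPoolExecutor

stages = []  # records the order in which stages ran

def infer_preprocess_and_start_pipeline():
    stages.append("infer_preprocess_and_start_pipeline")  # lightweight: submit work to the device

def wait_pipeline():
    stages.append("wait_pipeline")  # non-compute: block until the device responds

def infer_postprocess():
    stages.append("infer_postprocess")  # compute: convert raw results

# Toy stand-in for m_pipeline: stages chained in order on an executor.
m_pipeline = [infer_preprocess_and_start_pipeline, wait_pipeline, infer_postprocess]

with ThreadPoolExecutor(max_workers=1) as pool:
    for task in m_pipeline:
        pool.submit(task).result()  # a real plugin chains these asynchronously

print(stages)
```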


@@ -36,9 +36,9 @@ Once the commands above are executed, the OpenVINO Developer Package is generate
* Libraries for tests development:
* ``openvino::gtest``, ``openvino::gtest_main``, ``openvino::gmock`` - Google Tests framework libraries
* ``openvino::common_test_utils`` - static library with common test utilities
* ``openvino::func_test_utils`` - static library with functional test utilities
* ``openvino::unit_test_utils`` - static library with unit test utilities
* ``openvino::ngraphFunctions`` - static library with the set of ``ov::Model`` builders
* ``openvino::funcSharedTests`` - static library with common functional tests
* ``openvino::ngraph_reference`` - static library with operation reference implementations.


@@ -89,7 +89,6 @@ Detailed Guides
* :doc:`Quantized networks <openvino_docs_ov_plugin_dg_quantized_models>`
* :doc:`Low precision transformations <openvino_docs_OV_UG_lpt>` guide
* :doc:`Writing OpenVINO™ transformations <openvino_docs_transformations>` guide
* `Integration with AUTO Plugin <https://github.com/openvinotoolkit/openvino/blob/master/src/plugins/auto/docs/integration.md>`__
API References
##############


The example implementation has two remote tensor classes:
Based on that, an implementation of a type independent remote tensor class can look as follows:
.. doxygensnippet:: src/plugins/template/src/remote_tensor.hpp
:language: cpp
:fragment: [vector_impl:implementation]


@@ -7,7 +7,7 @@
of Intel® Distribution of OpenVINO™ toolkit.
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.


@@ -44,18 +44,21 @@ To convert a model to OpenVINO model format (``ov.Model``), you can use the foll
If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:
- ``input`` and ``input_shape`` - the model conversion API parameters used to override original input shapes for model conversion.
  For more information about these parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
- ``input`` and ``output`` - the model conversion API parameters used to define new inputs and outputs of the converted model to cut off unwanted parts (such as unsupported operations and training sub-graphs).
  For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` guide.
- ``mean_values``, ``scale_values``, ``layout`` - the parameters used to insert additional input pre-processing sub-graphs into the converted model.
  For more details, see the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
- ``compress_to_fp16`` - a compression parameter in the ``mo`` command-line tool, which allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to the ``FP16`` data type.
  For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
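As a reference point for the pre-processing parameters above, here is a NumPy sketch of the computation such an embedded sub-graph performs. This is an illustration only, not model conversion API code, and the mean and scale numbers are arbitrary examples:

```python
import numpy as np

# The embedded pre-processing is numerically equivalent to
# (input - mean_values) / scale_values, applied per channel.
mean_values = np.array([127.0, 127.0, 127.0])
scale_values = np.array([255.0, 255.0, 255.0])

image = np.full((1, 3, 2, 2), 255.0)  # a dummy NCHW input
preprocessed = (image - mean_values.reshape(1, 3, 1, 1)) / scale_values.reshape(1, 3, 1, 1)

print(preprocessed[0, 0, 0, 0])  # (255 - 127) / 255
```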
To get the full list of conversion parameters, run the following command:


@@ -23,7 +23,7 @@ Overview of Artificial Neural Networks Representation
A deep learning network is usually represented as a directed graph describing the flow of data from the network input data to the inference results.
Input data can be in the form of images, video, text, audio, or preprocessed information representing objects from the target area of interest.
Here is an illustration of a small graph representing a model that consists of a single Convolutional layer and activation function:
.. image:: _static/images/small_IR_graph_demonstration.png
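To make the graph notion concrete, here is a minimal, purely illustrative Python sketch of such a model graph and the order in which its nodes would execute. The node names are made up for the example:

```python
# A toy directed graph: data flows from the input, through a convolution node,
# into an activation, and out through a result node.
graph = {
    "input":  {"op": "Parameter",   "inputs": []},
    "conv":   {"op": "Convolution", "inputs": ["input"]},
    "relu":   {"op": "ReLU",        "inputs": ["conv"]},
    "result": {"op": "Result",      "inputs": ["relu"]},
}

def execution_order(graph):
    """Topological order: a node runs only after all of its producers."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        for producer in graph[name]["inputs"]:
            visit(producer)
        seen.add(name)
        order.append(name)
    for name in graph:
        visit(name)
    return order

print(execution_order(graph))  # ['input', 'conv', 'relu', 'result']
```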
@@ -56,7 +56,7 @@ A set consists of several groups of operations:
* Generic element-wise arithmetic tensor operations such as ``Add``, ``Subtract``, and ``Multiply``.
* Comparison operations that compare two numeric tensors and produce boolean tensors, for example, ``Less``, ``Equal``, ``Greater``.
* Logical operations that deal with boolean tensors, for example, ``And``, ``Xor``, ``Not``.
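The three groups can be illustrated with NumPy analogues. This is a sketch of the semantics only, not OpenVINO code:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([3, 2, 1])

print(a + b)              # elementwise arithmetic (Add)        -> [4 4 4]
print(a < b)              # comparison (Less), boolean output   -> [ True False False]
print((a < b) | (a > b))  # logical Or on boolean tensors       -> [ True False  True]
```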


@@ -2,13 +2,12 @@
@sphinxdirective
By default, when an IR is saved, all relevant floating-point weights are compressed to the ``FP16`` data type during model conversion.
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,
but it is negligible for most models.
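A small NumPy sketch shows where the "about half" figure comes from and why the accuracy impact is usually minor (illustrative only; real weight tensors are far larger):

```python
import numpy as np

# float16 weights take two bytes each instead of four, at the cost of a
# small rounding error per value.
weights = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
compressed = weights.astype(np.float16)

print(weights.nbytes, compressed.nbytes)  # 4000 2000
print(float(np.max(np.abs(weights - compressed.astype(np.float32)))))  # small rounding error
```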
If the accuracy drop is significant, you can disable compression explicitly.
To disable compression, use the ``compress_to_fp16=False`` option:
.. tab-set::
@@ -18,15 +17,15 @@ To disable compression, use the ``compress_to_fp16=False`` option:
.. code-block:: py
:force:
from openvino.tools.mo import convert_model
ov_model = convert_model(INPUT_MODEL, compress_to_fp16=False)
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
mo --input_model INPUT_MODEL --compress_to_fp16=False
For details on how plugins handle compressed ``FP16`` models, see


@@ -128,7 +128,7 @@ Information about layer precision is also stored in the performance counters.
resnet\_model/add\_5/fq\_input\_1 NOT\_RUN FakeQuantize undef 0 0
=========================================================== ============= ============== ===================== ================= ==============
| The ``execStatus`` column of the table includes the following possible values:
| - ``EXECUTED`` - the layer was executed by standalone primitive.
| - ``NOT_RUN`` - the layer was not executed by standalone primitive or was fused with another operation and executed in another layer primitive.
|


@@ -19,7 +19,7 @@ Example of converting a PyTorch model directly from memory:
:force:
import torchvision
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model)
@@ -36,7 +36,7 @@ Example of using native Python classes to set ``input_shape``, ``mean_values`` a
:force:
from openvino.runtime import PartialShape, Layout
ov_model = convert_model(model, input_shape=PartialShape([1,3,100,100]), mean_values=[127, 127, 127], layout=Layout("NCHW"))
Example of using strings for setting ``input_shape``, ``mean_values`` and ``layout``:
@@ -57,7 +57,7 @@ Example of using a tuple in the ``input`` parameter to cut a model:
ov_model = convert_model(model, input=("input_name", [3], np.float32))
For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters: ``name``, ``shape``, ``type``, and ``value``.
``InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])`` is equivalent to ``InputCutInfo(name="input_name", shape=[3], type=np.float32, value=[0.5, 2.1, 3.4])``.
@@ -74,15 +74,15 @@ Example of using ``InputCutInfo`` to freeze an input with value:
:force:
from openvino.tools.mo import convert_model, InputCutInfo
ov_model = convert_model(model, input=InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4]))
To set parameters for models with multiple inputs, use a ``list`` of parameters.
Parameters supporting ``list``:
* input
* input_shape
* layout
* source_layout
* dest_layout
* mean_values
@@ -104,7 +104,7 @@ Example of using the ``Layout`` class to set the layout of a model input:
from openvino.runtime import Layout
from openvino.tools.mo import convert_model
ov_model = convert_model(model, source_layout=Layout("NCHW"))
To set both source and destination layouts in the ``layout`` parameter, use the ``LayoutMap`` class. ``LayoutMap`` accepts two parameters: ``source_layout`` and ``target_layout``.
@@ -117,7 +117,7 @@ Example of using the ``LayoutMap`` class to change the layout of a model input:
:force:
from openvino.tools.mo import convert_model, LayoutMap
ov_model = convert_model(model, layout=LayoutMap("NCHW", "NHWC"))
@endsphinxdirective


@@ -6,8 +6,11 @@
All of the issues below refer to :doc:`legacy functionalities <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`.
If your question is not covered by the topics below, use the
`OpenVINO Support page <https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit>`__,
where you can participate in a free forum discussion.
.. warning::
@@ -335,7 +338,7 @@ Q31. What does the message "Input port > 0 in --input is not supported if --inpu
**A:** When using the ``PORT:NODE`` notation for the ``--input`` command line argument and ``PORT`` > 0, you should specify ``--input_shape`` for this input. This is a limitation of the current Model Optimizer implementation.
.. note:: This message is no longer relevant, as the limitation on the input port index for model truncation has been resolved.
.. _question-32:


@@ -48,17 +48,17 @@ CLI Examples Using Caffe-Specific Parameters
++++++++++++++++++++++++++++++++++++++++++++
* Launching model conversion for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``.prototxt`` file. This is needed when the name of the Caffe model and the ``.prototxt`` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input ``model.caffemodel`` file.
.. code-block:: sh
mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt
* Launching model conversion for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``CustomLayersMapping`` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer. An example of ``CustomLayersMapping.xml`` can be found in ``<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example``. The optional parameters without default values and not specified by the user in the ``.prototxt`` file are removed from the Intermediate Representation, and nested parameters are flattened:
.. code-block:: sh
mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params
This example shows a multi-input model with input layers: ``data``, ``rois``
.. code-block:: sh


@@ -11,7 +11,7 @@
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.
To convert an MXNet model, run model conversion with the path to the ``.params`` file of the input model:
.. code-block:: sh


@@ -21,11 +21,32 @@ This page provides instructions on model conversion from the ONNX format to the
Model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
.. tab-set::

   .. tab-item:: Python
      :sync: py

      To convert an ONNX model, run the ``convert_model()`` method with the path to the ``<INPUT_MODEL>.onnx`` file:

      .. code-block:: py
         :force:

         ov_model = convert_model("<INPUT_MODEL>.onnx")
         compiled_model = core.compile_model(ov_model, "AUTO")

      .. important::

         The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or serialize into a file for subsequent use.

   .. tab-item:: CLI
      :sync: cli

      You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.

      .. code-block:: sh

         mo --input_model <INPUT_MODEL>.onnx
There are no ONNX specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.


@@ -6,7 +6,6 @@
:description: Learn how to convert a model from the
PaddlePaddle format to the OpenVINO Intermediate Representation.
This page provides general instructions on how to convert a model from a PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on PaddlePaddle model format.
.. note:: PaddlePaddle models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
@@ -16,14 +15,13 @@ Converting PaddlePaddle Model Inference Format
PaddlePaddle inference model includes ``.pdmodel`` (storing model structure) and ``.pdiparams`` (storing model weight). For how to export PaddlePaddle inference model, please refer to the `Exporting PaddlePaddle Inference Model <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/beginner/model_save_load_cn.html>`__ Chinese guide.
To convert a PaddlePaddle model, use the ``mo`` script and specify the path to the input ``.pdmodel`` model file:
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pdmodel
**For example**, this command converts a YOLOv3 PaddlePaddle network to an OpenVINO IR network:
.. code-block:: sh
@@ -32,60 +30,58 @@ To convert a PaddlePaddle model, use the ``mo`` script and specify the path to t
Converting PaddlePaddle Model From Memory Using Python API
##########################################################
Model conversion API supports passing PaddlePaddle models directly from memory.
The following PaddlePaddle model formats are supported:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
Converting certain PaddlePaddle models may require setting ``example_input`` or ``example_output``. The examples below show how to execute such a conversion.
* Example of converting a ``paddle.hapi.model.Model`` model:
.. code-block:: py
:force:
import paddle
from openvino.tools.mo import convert_model
# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
y = paddle.static.InputSpec([1,1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)
# convert to OpenVINO IR format
ov_model = convert_model(model)
# optional: serialize OpenVINO IR to *.xml & *.bin
from openvino.runtime import serialize
serialize(ov_model, "ov_model.xml", "ov_model.bin")
* Example of converting a ``paddle.fluid.dygraph.layers.Layer`` model:
``example_input`` is required, while ``example_output`` is optional. Both accept the following formats:
a ``list`` of tensors (``paddle.Tensor``) or InputSpecs (``paddle.static.input.InputSpec``)
.. code-block:: py
:force:
import paddle
from openvino.tools.mo import convert_model
# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1,3,224,224])
# convert to OpenVINO IR format
ov_model = convert_model(model, example_input=[x])
* Example of converting a ``paddle.fluid.executor.Executor`` model:
``example_input`` and ``example_output`` are required, and accept the following formats:
a ``list`` or ``tuple`` of variables (``paddle.static.data``)
@@ -94,86 +90,37 @@ Converting certain PaddlePaddle models may require setting ``example_input`` or
import paddle
from openvino.tools.mo import convert_model
paddle.enable_static()
# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1,3,224])
y = paddle.static.data(name="y", shape=[1,3,224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# convert to OpenVINO IR format
ov_model = convert_model(exe, example_input=[x], example_output=[y])
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or serialize into a file for subsequent use.
Supported PaddlePaddle Layers
#############################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Officially Supported PaddlePaddle Models
########################################
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
.. list-table::
:widths: 20 25 55
:header-rows: 1
* - Model Name
- Model Type
- Description
* - ppocr-det
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `READ.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ppocr-rec
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `READ.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ResNet-50
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v2
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v3
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - BiSeNet v2
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - DeepLab v3 plus
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Fast-SCNN
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - OCRNet
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Yolo v3
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - ppyolo
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - MobileNetv3-SSD
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.2/deploy/EXPORT_MODEL.md#>`_.
* - U-Net
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/docs/model_export.md#>`_.
* - BERT
- language representation
- Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.
Frequently Asked Questions (FAQ)
################################
When the model conversion API is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Additional Resources
####################


@@ -33,8 +33,8 @@ Following PyTorch model formats are supported:
Converting certain PyTorch models may require model tracing, which needs ``input_shape`` or ``example_input`` parameters to be set.
* ``example_input`` is used as example input for model tracing.
* ``input_shape`` is used for constructing a float zero-filled torch.Tensor for model tracing.
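The zero-filled tracing input that ``input_shape`` implies can be sketched in plain Python. This is a conceptual stand-in only: ``zeros`` below is a hypothetical helper, not OpenVINO or PyTorch API, and a real conversion builds a float ``torch.Tensor`` instead of nested lists.

```python
# Conceptual sketch of the zero-filled tracing input that input_shape
# implies; the real conversion constructs a float torch.Tensor.
def zeros(shape):
    """Build a nested list of 0.0 values with the given dimensions."""
    if not shape:
        return 0.0
    return [zeros(shape[1:]) for _ in range(shape[0])]

# input_shape=[2, 3] would trace the model with a 2x3 tensor of zeros.
example = zeros([2, 3])
print(example)  # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```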
Example of using ``example_input``:
@@ -56,6 +56,10 @@ Example of using ``example_input``:
* ``list`` or ``tuple`` with tensors (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
* ``dictionary`` where key is the input name, value is the tensor (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
.. important::
The ``convert_model()`` method returns an ``ov.Model`` object that you can optimize, compile, or serialize into a file for subsequent use.
Exporting a PyTorch Model to ONNX Format
########################################


@@ -120,7 +120,7 @@ TensorFlow 2 SavedModel format has a specific graph structure due to eager execu
pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph.
Since the 2023.0 release, direct pruning of models in SavedModel format is not supported.
It is essential to freeze the model before pruning. Use the following code snippet for model freezing:
.. code-block:: py
:force:
@@ -299,6 +299,10 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro
checkpoint.restore(save_path)
ov_model = convert_model(checkpoint)
.. important::
The ``convert_model()`` method returns an ``ov.Model`` object that you can optimize, compile, or serialize into a file for subsequent use.
Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################


@@ -13,7 +13,11 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t
mo --input_model <INPUT_MODEL>.tflite
.. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
.. important::
The ``convert_model()`` method returns an ``ov.Model`` object that you can optimize, compile, or serialize into a file for subsequent use.
Supported TensorFlow Lite Layers
###################################


@@ -22,9 +22,9 @@ The following examples are the situations when model cutting is useful or even r
Model conversion API parameters
###############################
Model conversion API provides ``input`` and ``output`` command-line options to specify new entry and exit nodes, while ignoring the rest of the model:
* ``input`` option accepts a list of layer names of the input model that should be treated as new entry points to the model. See the full list of accepted types for input on the :doc:`Supported Model Formats <Supported_Model_Formats>` page.
* ``output`` option accepts a list of layer names of the input model that should be treated as new exit points from the model.
The ``input`` option is also required in some cases unrelated to model cutting. For example, when the model contains several inputs and the ``input_shape`` or ``mean_values`` options are used, the ``input`` option specifies the order of input nodes, ensuring correct mapping between the multiple items provided in ``input_shape`` and ``mean_values`` and the inputs of the model.
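The ordering role described above can be illustrated with a small, purely conceptual Python sketch. This is not OpenVINO code: the input names, shapes, and mean values below are hypothetical, and the dictionary merely models how positional pairing works.

```python
# Conceptual sketch (not OpenVINO code): the `input` list fixes the order
# in which per-input options such as input_shape and mean_values are
# matched to the model's inputs. All names and values are hypothetical.
input_names = ["image", "mask"]  # order given via the `input` option
input_shapes = [[1, 3, 224, 224], [1, 1, 224, 224]]
mean_values = [[127.5, 127.5, 127.5], [0.0]]

# Positional pairing: the first shape and mean apply to "image", and so on.
mapping = {
    name: {"shape": shape, "mean": mean}
    for name, shape, mean in zip(input_names, input_shapes, mean_values)
}
print(mapping["mask"]["shape"])  # [1, 1, 224, 224]
```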
@@ -148,6 +148,27 @@ Now, consider how to cut some parts of the model off. This chapter describes the
.. image:: _static/images/inception_v1_first_block.svg
:alt: Inception V1 first convolution block
To cut a model, you can specify the ``input`` parameter in the form of a tuple:
.. code-block:: py
:force:
ov_model = convert_model(model, input=("input_name", [3], np.float32))
For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters:
* name: ``string``.
* shape: ``list`` or ``tuple`` of dimensions (``int`` or ``openvino.runtime.Dimension``), ``openvino.runtime.PartialShape``, ``openvino.runtime.Shape``.
* type: ``numpy type``, ``openvino.runtime.Type``.
* value: ``numpy.ndarray``, ``list`` of numeric values, ``bool``.
``InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])`` is equivalent to ``InputCutInfo(name="input_name", shape=[3], type=np.float32, value=[0.5, 2.1, 3.4])``.
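The positional/keyword equivalence can be sketched with a plain dataclass standing in for ``InputCutInfo``. This is illustration only: the real class is provided by OpenVINO and also accepts ``openvino.runtime`` types, and a string stands in for ``np.float32`` here so the sketch needs no NumPy.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Stand-in for OpenVINO's InputCutInfo, for illustration only; the real
# class ships with the OpenVINO package.
@dataclass
class InputCutInfoSketch:
    name: Optional[str] = None
    shape: Optional[list] = None
    type: Any = None   # e.g. np.float32; a string is used here as a stand-in
    value: Any = None

positional = InputCutInfoSketch("input_name", [3], "float32", [0.5, 2.1, 3.4])
keyword = InputCutInfoSketch(name="input_name", shape=[3],
                             type="float32", value=[0.5, 2.1, 3.4])
assert positional == keyword  # both spellings describe the same cut point
```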
You can cut a model at either the end or the beginning. The examples below present how to perform such cutting in several ways.
Cutting at the End
++++++++++++++++++++


@@ -116,7 +116,7 @@ The patch modifies the framework code by adding a special command-line argument
+ else:
+ state_dict = torch.load(path, map_location=torch.device('cpu'))
# For backward compatibility, remove these (the new variable is called layers)
for key in list(state_dict.keys()):
@@ -673,8 +679,11 @@ class Yolact(nn.Module):
else:


@@ -17,31 +17,545 @@
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
.. meta::
:description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.
**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <openvino_ir>` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly,
which means you do not need to save them as OpenVINO IR before including them in your application.
OpenVINO can read, compile, and convert them automatically, as part of its pipeline.
In the Python API, these options are provided as three separate methods:
``read_model()``, ``compile_model()``, and ``convert_model()``.
The ``convert_model()`` method enables you to perform additional adjustments
to the model, such as setting shapes, changing model input types or layouts,
cutting parts of the model, freezing inputs, etc. For a detailed description
of the conversion process, see the
:doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
Here are code examples of how to use these methods with different model formats:
.. tab-set::
.. tab-item:: PyTorch
:sync: torch
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
This is the only method applicable to PyTorch models.
.. dropdown:: List of supported formats:
* **Python objects**:
* ``torch.nn.Module``
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
.. code-block:: py
:force:
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
.. tab-item:: TensorFlow
:sync: tf
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control: you can specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use; however, they do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
* **Python objects**:
* ``tf.keras.Model``
* ``tf.keras.layers.Layer``
* ``tf.Module``
* ``tf.compat.v1.Graph``
* ``tf.compat.v1.GraphDef``
* ``tf.function``
* ``tf.compat.v1.session``
* ``tf.train.checkpoint``
.. code-block:: py
:force:
ov_model = convert_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/101-tensorflow-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` and ``compile_model()`` methods:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: py
:force:
ov_model = read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read by ``read_model()`` and inferred.
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pb
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`.
.. tab-item:: TensorFlow Lite
:sync: tflite
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control: you can specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use; however, they do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/119-tflite-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.tflite
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`.
.. tab-item:: ONNX
:sync: onnx
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control: you can specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use; however, they do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO")
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO");
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.onnx", "AUTO", 0, &compiled_model);
For details on the conversion, refer to the :doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.onnx
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
.. tab-item:: PaddlePaddle
:sync: pdpd
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control: you can specify additional adjustments for the resulting ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use; however, they do not offer such capabilities. With an ``ov.Model``, you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
* **Python objects**:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/103-paddle-to-openvino-classification-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use the ``mo`` command-line tool to convert a model to IR. The resulting IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pdmodel
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`.
**MXNet, Caffe, and Kaldi** are legacy formats that need to be explicitly converted to OpenVINO IR or ONNX before running inference.
As OpenVINO is in the process of **deprecating these formats** and will **remove their support entirely in the future**,
converting them to ONNX for use with OpenVINO should be considered the default path.
.. note::
If you want to keep working with the legacy formats the old way, refer to a previous
`OpenVINO LTS version and its documentation <https://docs.openvino.ai/2022.3/Supported_Model_Formats.html>`__.
OpenVINO 2023 versions are mostly compatible with the old instructions,
through the deprecated MO tool, installed with the deprecated OpenVINO Developer Tools package.
`OpenVINO 2023.0 <https://docs.openvino.ai/2023.0/Supported_Model_Formats.html>`__ is the last
release officially supporting the MO conversion process for the legacy formats.
@endsphinxdirective


@@ -154,7 +154,7 @@ Converting a GNMT Model to the IR
**Step 1**. Clone the GitHub repository and check out the commit:
1. Clone the NMT repository:
.. code-block:: sh


@@ -17,7 +17,7 @@ To download the model for IR conversion, follow the instructions:
1. Create a new directory to store the model:
.. code-block:: sh
mkdir lm_1b
