Compare commits

...

216 Commits

Author SHA1 Message Date
Artyom Anokhov
0f1fa2cbde Update copyrights with 2023 year (#15149)
* Update copyrights with 2023 year

* Updated more files.
2023-02-02 15:47:25 +01:00
Yuan Xu
a75dc09204 fix build number in apt installation (#13257)
* fix issue CVS-81262

* update

* update a to the

* update comment
2022-10-27 17:41:55 +04:00
Anuj Mittal
ccfb0ed1f1 installing-openvino-yocto.md: fix install instructions (#11654)
Change _ to : as per the new override syntax and remove reference to
staticdev package.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
2022-05-10 17:57:53 +02:00
Alexander Zhogov
1363573c48 Azure CI: Disable CUDA plugin on Mac 2022-04-05 20:30:11 +03:00
Dmitrii Khurtin
c9beff1c56 updated gna library to 1455 version (#11364) 2022-04-04 13:44:26 +03:00
Roman Kazantsev
9b42a5311e Support TensorFlow Grouped ConvBackpropInput operations (#11424)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-04-04 12:32:06 +03:00
Alexey Suhov
4ebe83842e Change product version to 2021.4.3 (#11396) 2022-04-01 16:43:59 +03:00
Alexander Zhogov
6cdc1bb302 Disable CUDA plugin build (#11377) 2022-04-01 13:07:21 +03:00
Yuan Xu
18fbbd2227 add troubleshooting for PRC users in Windows installation (#11079)
* add troubleshooting for PRC users in Windows installation

* fix an error
2022-03-31 11:24:20 +03:00
Ilya Churaev
4a24c0721e Port compiler requirements to 2021.4 (#11191)
* Port compiler requirements to 2021.4

* Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-03-25 08:55:23 +03:00
Yuan Xu
13e01c9ea3 Add Python version for Docker installation (#10876)
* update Python versions

* add Python version for centos 7
2022-03-15 10:54:14 +03:00
Nam-Duong DUONG
8cabbc0768 Update Convert_YOLACT.md (#8682) 2022-03-14 18:12:02 +03:00
Anuj Mittal
b7d91f58b0 Update Yocto documentation (#10547)
* installing-openvino-yocto: fix documentation links

Point to the new Yocto docs website.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

* Update installing-openvino-yocto.md

* installing-openvino-yocto: add step to checkout specific branch

Request users to checkout specific branch of meta-intel where this
version of OpenVINO is available.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-02-22 11:46:58 +03:00
Nikolay Tyukaev
92562697e8 Feature/ntyukaev/add api folder if enable python 2021.4 (#10478)
* add api folder if enable python

* ngraph python api
2022-02-17 15:24:50 +03:00
Andrey Zaytsev
bd927ed8bb Releases/2021/4 (#9416)
* Various doc changes

* Added the Ecosystem section to the Resources tab

* Added css

* fix

* Fixed image

* resources.md fixes

* Updated inference-engine/thirdparty/mkl-dnn

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-02-17 13:54:58 +03:00
Trawinski, Dariusz
14cac44cc3 OVMS docs in sphinx format (#10401)
* added OpenVINO Model Server

* initial

* test

* add ovms repo

* debuging get_label unicode errors

* drop debug message

* remove not needed readme file
2022-02-16 18:05:10 +03:00
Nikolay Tyukaev
cd8f27d1bf update requirements to fix tabs (#10410) 2022-02-16 11:47:52 +03:00
Roman Lyamin
6c172f33a5 [GPU] Performance counters fix (#7143) (#9647) 2022-02-15 13:28:16 +03:00
Vladislav Volkov
24fa511f49 [CPU] [Ngraph] Fix of memory leak in PassBase::get_name and leak in jit_avx2_1x1_convolution_with_dw_conv_fwd_t kernel (#10202) 2022-02-11 13:03:48 +03:00
Gorokhov Dmitriy
90d594a631 [CPU] Fixed out of bounds read in JIT planar convolution (#10200) 2022-02-09 20:29:36 +03:00
Alexander Zhogov
a8f6fd9c2b Azure CI: Fix Mac 2022-02-07 19:13:07 +03:00
Alexander Zhogov
d4ad38b78f Azure CI: Update Linux and Mac (#10163)
* Azure CI: Update Linux and Mac

* Update install_build_dependencies.sh
2022-02-07 18:27:06 +03:00
Alexandra Sidorova
d053b6c331 [CPU] Fix FuseConvolutionSumAndConvolutionSumActivation (#10132) 2022-02-07 09:48:45 +03:00
Vladislav Volkov
846ea1f347 Fix for leaked ExecutorManager (#10052) 2022-02-05 14:04:20 +03:00
Vladislav Volkov
09626eaa9b Memory leaks in tbbbind and onednn were fixed (#9578)
* Migrating to the new tbbbind library version 2.5 (#8262)

* Memory leaks in tbbbind and onednn were fixed (#8825)

* Fix missing declarations for TBB_HYBRID_CPUS (#9567)
2022-02-04 18:04:57 +03:00
azhogov
a195a8a5bf Azure CI: Add "ref: releases/2021/4" for contrib and testdata repos 2022-02-04 11:28:42 +03:00
Nikolay Tyukaev
282f1f6063 js and ipython (#9994) 2022-02-01 13:01:27 +03:00
HARI CHAND BALASUBRAMANIAM
45a7295e64 Initialize cltools_setenv.sh before execute clc (#9905)
linker errors for SHAVE_LDSCRIPT, SHAVE_MYRIAD_LD_DIR, and SHAVE_MOVIASM_DIR will be shown before initializing cltools_setenv.sh <device>
2022-01-29 11:13:25 +03:00
Nikolay Tyukaev
47278caed9 nbdoc cmake (#9875) 2022-01-26 18:44:51 +03:00
Vitaliy Urusovskij
10e2748286 Fix memleaks caused by SetupDiGetClassDevs() (#9749)
To prevent leaks, object should be cleaned up
by calling SetupDiDestroyDeviceInfoList()
Refer to https://docs.microsoft.com/en-us/windows/win32/api/setupapi/nf-setupapi-setupdigetclassdevsw#remarks
2022-01-20 12:12:16 +03:00
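The memory-leak fix above relies on the documented SetupAPI ownership rule: every device information set returned by SetupDiGetClassDevs() must be released with SetupDiDestroyDeviceInfoList(). A minimal sketch of that pairing, assuming a plain device enumeration loop rather than the plugin's actual code:

    #include <windows.h>
    #include <setupapi.h>
    #pragma comment(lib, "setupapi.lib")

    void enumerate_present_devices() {
        // SetupDiGetClassDevs allocates a device information set owned by the caller.
        HDEVINFO dev_info = SetupDiGetClassDevs(nullptr, nullptr, nullptr,
                                                DIGCF_ALLCLASSES | DIGCF_PRESENT);
        if (dev_info == INVALID_HANDLE_VALUE)
            return;

        SP_DEVINFO_DATA dev_data{};
        dev_data.cbSize = sizeof(dev_data);
        for (DWORD i = 0; SetupDiEnumDeviceInfo(dev_info, i, &dev_data); ++i) {
            // ... inspect the device here ...
        }

        // Omitting this call leaks the device information set - the leak the commit fixes.
        SetupDiDestroyDeviceInfoList(dev_info);
    }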
Jakub Debski
a0599267ea Nbdoc change source (#9739)
* doc fixes

* doc fix

* doc fix

* Add suggestion how to change source of download

* Fix typo

Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-01-18 15:03:09 +03:00
Nikolay Tyukaev
d653e22dfe update requirements (#9717) 2022-01-17 17:38:39 +03:00
Alina Kladieva
e220238028 Workaround tokenizers python module install issue on Mac and Win (#9670) 2022-01-17 13:11:22 +03:00
Nikolay Tyukaev
a669d35afe doc-versions-from-server (#9436) 2022-01-12 14:39:49 +03:00
Ivan Novoselov
e2124246b1 [CPU] Disable NotImplemented exceptions mechanism for generic node. (#7242) (#9346)
(cherry picked from commit cb0d6db4c5)
2021-12-27 10:56:07 +03:00
Andrey Zaytsev
7b66255f34 Feature/azaytsev/add fixes 2021 4 (#9423)
* Various doc changes

* Added the Ecosystem section to the Resources tab

* Added css

* fix

* Additional fixes

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2021-12-24 17:39:47 +03:00
Nikolay Tyukaev
4d9a984188 fix broken image links (#9404) 2021-12-23 18:16:10 +03:00
Nikolay Tyukaev
db73b258e6 fix-titles-containing code refs (#9264)
* fix-titles-containing code refs

* add to xfail
2021-12-23 13:58:57 +03:00
Jesus Espinoza
703b2a85f7 Update Convert_Faster_RCNN.md (#9139)
Updating MO command in step 2 to add --input 0:2 and change input shape to NCHW per ticket 54170
2021-12-16 22:21:57 +03:00
Jesus Espinoza
6abeaec8ad Update installing-openvino-raspbian.md (#9219)
Update to Step 3 under building and running sample as directory change in open_model_zoo

from: cd open_model_zoo/tools/downloader
to: cd open_model_zoo/tools/model_tools
2021-12-16 22:21:33 +03:00
Alexander Zhogov
e634014906 Azure CI: Fix "access denied" issue with certutil on Windows (#9270)
* Azure CI: Fix "access denied" issue with certutil on Windows

* Fix UT name
2021-12-16 22:16:48 +03:00
Nikolay Tyukaev
6fc737af51 restore deleted files (#9215) 2021-12-15 15:59:24 +03:00
Nikolay Tyukaev
acad377cf0 doc pytest (#8888)
* docs pytest

* fixes
2021-12-14 23:22:13 +03:00
Nikolay Tyukaev
5885d56e1a perf bench graph animation (#9045)
* animation

* fix
2021-12-14 23:18:35 +03:00
Nikolay Tyukaev
d59945b73b fix untitled titles (#9213) 2021-12-14 23:17:38 +03:00
Mikhail Ryzhov
6b85a0b33e [CAPI] Fixed memory leak in tests (#9123) 2021-12-10 02:36:59 +03:00
Nikolay Tyukaev
9b3b99c84b iframe video enable fullscreen (#9041) 2021-12-06 18:58:08 +03:00
Nikolay Tyukaev
0900c39aca sphinx copybutton doxyrest code blocks (#8992) 2021-12-03 17:32:41 +03:00
Andrey Zaytsev
0ec7da591d Feature/azaytsev/doc fixes (#8897)
* Various doc changes

* Removed the empty Learning path topic

* Restored the Gemini Lake CPU list
2021-11-29 17:56:05 +03:00
Alexey Suhov
6c4462759e [README.md] change latest release to 2021.4.2 2021-11-16 20:55:28 +03:00
Andrey Zaytsev
a28d93b2bf Feature/azaytsev/doc updates gna 2021 4 2 (#8567)
* Various doc changes

* Reformatted C++/Python sections. Updated with info from PR8490

* additional fix

* Gemini Lake replaced with Elkhart Lake

* Fixed links in IGs, Added 12th Gen
2021-11-16 12:36:17 +00:00
Nikolay Tyukaev
400e0657cd doc script changes (#8568)
* fix openvino-sphinx-theme

* add linkcheck target

* fix

* change version

* add doxygen-xfail.txt

* fix

* AA

* fix

* fix

* fix

* fix

* fix
2021-11-13 00:17:59 +03:00
Aleksandr Korolev
e2a469a345 [IE][VPU] heap-use-after-free fix v2 (#8509)
* [VPU] use-after-free fix v2
2021-11-12 02:31:47 +03:00
Mikhail Ryzhov
de85202b29 Added Import/Export functions to C_API #8353 (#8458)
* initial commit

# Conflicts:
#	inference-engine/src/inference_engine/src/ie_core.cpp
#	inference-engine/thirdparty/ade

* [GNA] added export/import for c_api (need fixes)

# Conflicts:
#	inference-engine/thirdparty/ade

* [GNA] import/export with file

# Conflicts:
#	inference-engine/thirdparty/ade

* [GNA] fixed tests and function with memory

# Conflicts:
#	inference-engine/thirdparty/ade

* deleted unnecessary testing changes

# Conflicts:
#	inference-engine/src/inference_engine/src/ie_core.cpp
#	inference-engine/thirdparty/ade

* fixed bug with const

# Conflicts:
#	inference-engine/thirdparty/ade

* fixed review comments

# Conflicts:
#	inference-engine/thirdparty/ade

* [GNA] changed testing model

# Conflicts:
#	inference-engine/thirdparty/ade

* Put memory buffer to istream directly

# Conflicts:
#	inference-engine/thirdparty/ade

* Replaced blob with array of bytes

# Conflicts:
#	inference-engine/thirdparty/ade

* Make config optional

# Conflicts:
#	inference-engine/thirdparty/ade

* Reverted load_network change

# Conflicts:
#	inference-engine/thirdparty/ade

* Fixed tests with null parameters

# Conflicts:
#	inference-engine/thirdparty/ade

* Changed path to models

# Conflicts:
#	inference-engine/thirdparty/ade

* Made config optional for other methods

# Conflicts:
#	inference-engine/thirdparty/ade

* Update ie_c_api.h
# Conflicts:
#	inference-engine/thirdparty/ade

* Merge branch 'c_api' of https://github.com/a-noskov/openvino into c_api

# Conflicts:
#	inference-engine/thirdparty/ade

* Changed export signature

# Conflicts:
#	inference-engine/thirdparty/ade

* Reverted import changes(not needed)

# Conflicts:
#	inference-engine/thirdparty/ade

* fixed test

# Conflicts:
#	inference-engine/thirdparty/ade

* Replaced stringbuf by streambuf

# Conflicts:
#	inference-engine/thirdparty/ade

* fixed code style issues

# Conflicts:
#	inference-engine/thirdparty/ade

* Added condition for the tests

# Conflicts:
#	inference-engine/thirdparty/ade

* Enabled skipping of tests

# Conflicts:
#	inference-engine/thirdparty/ade

Rebase commit

Revert "Rebase commit"

This reverts commit aa717c076a9901637bc525ba6ef9960e012073e9.

* Changed test model path

* skipped tests for old gna lib v1

Co-authored-by: Andrey <andrey.noskov@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2021-11-10 14:13:13 +03:00
Daria Mityagina
5b283ee6d4 [IE][VPU][XLink] - Fix Mac build (#8479)
* Fix Mac build - XLink_sem_trywait problems - trywait

* Fix Mac build - XLink_sem_trywait problems

* Fix Mac build - XLink_sem_trywait - small changes

Co-authored-by: Maksim Doronin <maksim.doronin@intel.com>
2021-11-10 10:31:58 +03:00
Sergey Lyubimtsev
ca36a3335f Set scikit-image~=0.18.3 for python versions >=3.7 (#7910) (#8444)
* Set scikit-image~=0.18.3 for python versions >=3.7

* Set scikit-image>=0.17.2 (0.17.2 is not available for >=3.7)

(cherry picked from commit a10f40d6d4)
2021-11-09 19:27:53 +03:00
Aleksandr Korolev
b1985ff5fd [VPU] heap-use-after-free fix (#8466)
* [VPU]heap-use-after-free fix

* Empty commit
2021-11-09 18:14:10 +03:00
Alexander Zhogov
dd0f845a18 Azure CI: exclude docs from triggers (#8475) 2021-11-09 16:29:11 +03:00
Andrey Zaytsev
3a0d7a28ed Feature/azaytsev/doc updates gna (#8445)
* Various doc changes

* Intel® Celeron® J4125 Processor is added to the list
2021-11-08 15:47:24 +03:00
Egor Duplensky
3cdebfcb7c Revert "[CPU] Fix mixing VEX and non-VEX instructions (#7238)" (#8381)
Because few issues were detected on AVX (unknown instruction)
and AVX2 (accuracy) platforms.
Reverting for now.
2021-11-03 13:51:26 +03:00
Vladislav Volkov
6fdc6b4c16 Reverted "Migrating to the new tbbbind library version 2.5 (#8270)" (#8362) 2021-11-03 13:51:10 +03:00
Aleksandr Korolev
7096c01127 [VPU] vpu scale update (#8180)
* [VPU] vpu scale update

* fix ubuntu20 build
2021-11-03 12:58:41 +03:00
Denis Orlov
2d28a05421 [GNA] Fix comments in GNA public header and speech sample (#8373) 2021-11-03 10:45:01 +03:00
Daria Mityagina
e2ace1da40 [ICV][XLink] - Port changes (#8203)
* [ICV][XLink] - Port - Add mutex for event queues

* [ICV][XLink] - Port - XLinkReleaseSpecificPacket implementation

* [ICV][XLink] - Port - Exposed the configuration of maximum resources for the XLink streams at application build level

* [ICV][XLink] - Port - Expose the setting of the xLink threads priority to the application level

* [ICV][XLink] - Port - Changed calling of sem_wait() to account for EINTR error code

* [ICV][XLink] - Port - Kw issues fixes for R17

* [ICV][XLink] - Port - USB_VSC: moved HW init and datapump startup to upper layer

* [ICV][XLink] - Port - Additional mutexes for XLinkDispatcher

* [ICV][XLink] - Port - Add XLink read/write timeouts

* [ICV][XLink] - precommit failures fix

* [ICV][XLink] - win_semaphore

* [ICV][XLink] - review fixes

* [ICV][XLink] - review fixes

* [ICV][XLink] - review fixes

* [ICV][XLink] - review fixes - Write -> Read

* [ICV][XLink] - review fixes - Windows issues - sem_trywait

* [ICV][XLink] - review fixes - Windows issues - sem_trywait - space

* [ICV][XLink] - review fixes - Windows issues - sem_trywait

* [ICV][XLink] - removed one lock

* [ICV][XLink] - small change - else

* [ICV][XLink] - small change - remove unlock

* [ICV][XLink] - errors fixed - new semaphore

* [ICV][XLink] - review comments

* [ICV][XLink] - review comments

* [ICV][XLink] - review comments

* [ICV][XLink] - remove pthread_attr_destroy(&attr);

* [ICV][XLink] - fw update

* [ICV][XLink] - fw update
2021-11-02 14:25:41 +03:00
Nikolay Tyukaev
f5e9b699ce doc updates (#8268)
* Various doc changes

* theme changes

* remove font-family (#8211)

* fix  css

* Update uninstalling-openvino.md

* fix css

* fix

* Fixes for Installation Guides

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
2021-11-02 11:27:03 +03:00
Alexey Lebedev
5ec50585df fix sporadic test (#8366) 2021-11-02 10:05:45 +03:00
Anastasia Kuporosova
279961332b [IE Python API] Add set_config for executable_network (#7796) (#8351)
* [IE Python API] Add set_config for executable_network

* fix test

* fix test comment
2021-11-02 03:13:04 +03:00
Elizaveta Lobanova
0d5f86ae33 [GNA] Fixed Log segments calculation for internal algorithm (#8318) 2021-11-01 16:57:03 +03:00
Yury Gorbachev
9b9f5e3eee Update get_started_scripts.md (#8338) 2021-11-01 11:22:11 +03:00
Vladislav Volkov
c885b1933d Migrating to the new tbbbind library version 2.5 (#8270) 2021-10-29 15:47:08 +03:00
Anastasia Kazantaeva
1b6dc06cf0 Update message 2021.4.2 (#8130)
* Upgraded MO message

* fix
2021-10-28 22:41:46 +03:00
Daria Mityagina
a3afa57bd9 [ICV][XLink] - Port some changes (#8284)
* [ICV][XLink] - mvnc_data 0xd0

* [ICV][XLink] - fw update

* [ICV][XLink] - Port - XLinkOpenStream now allocates streamId only on the host side

* [ICV][XLink] - Port - enum updated
2021-10-28 22:14:46 +03:00
kblaszczak-intel
ad1efb7f16 remove font-family (#8211) 2021-10-27 12:49:15 +03:00
Elizaveta Lobanova
7eed1d910b [GNA] Limit activation output scale factor (#8106) 2021-10-26 15:38:12 +03:00
Elizaveta Lobanova
2278100258 [GNA] Fixed segments creation for multi segment pwl functions (#8044) (#8165)
* [GNA] Fixed extreme segments creation for tanh pwl

* [GNA] Tests added

* [GNA] Added tests for different activations

* [GNA] Comments apply
2021-10-25 11:57:40 +03:00
Nikolay Tyukaev
361dec7362 Docs to Sphinx (#8151)
* docs to sphinx

* Update GPU.md

* Update CPU.md

* Update AUTO.md

* Update performance_int8_vs_fp32.md

* update

* update md

* updates

* disable doc ci

* disable ci

* fix index.rst

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2021-10-24 17:43:00 +03:00
Alexey Lebedev
d7a917d92c [IE PYTHON] fix gil (#8145)
* remove nogil

* Add test
2021-10-22 16:13:52 +03:00
Roman Lyamin
326b8ad009 [GPU] Fixed matmul handling for some shapes (#6642) (#8103)
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
2021-10-20 09:34:32 +03:00
Vladislav Golubev
cc6321f6d3 [LPT] MatmulTransformation: canBeTransformed fix (#7816) 2021-10-19 23:50:20 +03:00
Elizaveta Lobanova
5c2a39009e [GNA] Fixed insertion of delayed copy error (#7944) (#8037)
* [GNA] Fixed error with delayed copy insertion

* [GNA] Added test
2021-10-18 15:34:55 +03:00
Polina Brzezinskaya
a0b4394423 [VPU] Changed calling of sem_wait() to account for EINTR error code (#7815)
Changed calling of sem_wait() to account for EINTR error code
2021-10-15 16:18:14 +03:00
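sem_wait() can fail with errno set to EINTR when the calling thread is interrupted by a signal, so callers that must not give up retry the wait. A hedged sketch of the usual retry loop (illustrative only, not the XLink sources):

    #include <cerrno>
    #include <semaphore.h>

    // Wait on a POSIX semaphore, retrying when the call is interrupted by a signal.
    // Returns 0 on success, or -1 for any error other than EINTR.
    int sem_wait_retry(sem_t* sem) {
        int rc;
        do {
            rc = sem_wait(sem);
        } while (rc == -1 && errno == EINTR);
        return rc;
    }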
Dmitry Pigasin
85378ce176 Add --ignore-installed to the downloader requirement installation (#8025) 2021-10-15 10:42:23 +03:00
Gleb Kazantaev
e2fd335b70 Fix FrameworkNodeAttr Deserialization (#7943)
* Fix FrameworkNodeAttr Deserialization

* Fix
2021-10-14 13:27:50 +03:00
Elizaveta Lobanova
c5175063c8 [GNA] More precise calculation of Log pwl and pwl with sf < 1 (#7884) (#7982)
* [GNA] More precise calculation of Log pwl and pwl with sf < 1

* [GNA] Added tests

* [GNA] Types correction

* [GNA] Added comment for absolute threshold calculation in the test
2021-10-14 12:38:08 +03:00
Krzysztof Bruniecki
5fb5057cb6 [GNA] Fix pooling output backward compatibility (#7889)
* [GNA] Fix pooling output backward compatibility

* Apply review
2021-10-12 11:43:12 +03:00
Roman Lyamin
485bea73b1 [GPU] Fixed code gen with custom locale (#7548) 2021-10-08 12:54:34 +03:00
Maxim Vafin
105e67573a [MO] Fix axes in FusedBatchNorm -> MVN transformation (#7679)
* [MO] Fix axes in FusedBatchNorm -> MVN transformation

* Improve testing

* Fix eps_mode attribute
2021-10-08 11:21:50 +03:00
Alexander Zhogov
2282b0ee5c Change IE version to 2021.4.2 2021-10-05 13:47:30 +03:00
Anton Dudchenko
f00dc87a92 [IE][VPU] Fix execTimeMcs for VPU (#7442) (#7712)
When serializing execGraph, milliseconds were actually written to the execTimeMcs field. This is just a cherry-picked commit from the master branch
2021-09-29 21:16:33 +03:00
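The field name execTimeMcs implies microseconds, so storing a millisecond value makes the reported execution times off by a factor of 1000. A small illustration of converting a measured duration before filling such a field, using assumed names rather than the plugin's own serialization code:

    #include <chrono>
    #include <cstdint>

    // Convert an elapsed duration to the microseconds an "execTimeMcs"-style field expects.
    int64_t to_exec_time_mcs(std::chrono::steady_clock::duration elapsed) {
        return std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
    }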
Mikhail Kozlov
e932a2eda5 Add virtual inheritance to shared tests (#7649) 2021-09-28 17:20:04 +03:00
Andrey Zaytsev
bd989cee67 [doc] Improved description of OpenVINO workflow (#6128) (#7703)
* Update get_started_dl_workbench.md

POToolkit => POTool

* Update QuantizedNetworks.md

POToolkit => POTool

* Moving POT to optimizations section

Moving POT to optimizations section

* structure

* links

* step 1

* step 1 diagram

* step 2

* typo

* step 4

* step 4 diagram

* step 3

* minor corrections

* Applied comments

* Applied comments from Tatiana

* Applied comments from Alex
# Conflicts:
#	docs/index.md
#	inference-engine/thirdparty/ade

Co-authored-by: Maksim Proshin <mvproshin@gmail.com>
2021-09-28 15:32:19 +03:00
Krzysztof Bruniecki
bdf72bcd88 [GNA] Fix KEY_EXEC_TARGET (#7671)
* Use Gna2DeviceCreateForExport when GNA_EXEC_TARGET is != detected

* Update detected GNA device version field in GNA Device helper

   * Use EXEC instead of COMPILE TARGET to append
   CNN Legacy enforcement (GNA1)

* Apply review
2021-09-28 11:55:28 +03:00
Egor Duplensky
6d634d09a4 [CPU] Fix mixing VEX and non-VEX instructions (#7238)
Quote: The Skylake microarchitecture implements a different state
machine than prior generations to manage the YMM state transition
associated with mixing SSE and AVX instructions.
It no longer saves the entire upper YMM state when executing
an SSE instruction when in “Modified and Unsaved” state,
but saves the upper bits of individual register.
As a result, mixing SSE and AVX instructions will experience
a penalty associated with partial register dependency of
the destination registers being used and additional blend
operation on the upper bits of the destination registers.

Such penalties have a huge impact on OpenVINO's and oneDNN's kernels.
Basically, mixing VEX and non-VEX instructions should be avoided.
2021-09-22 10:46:37 +03:00
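A standard way to avoid the SSE/AVX transition penalty described in this commit is to clear the upper YMM state with vzeroupper (the _mm256_zeroupper() intrinsic) before control reaches legacy, non-VEX-encoded SSE code. A hedged sketch of that idea, assuming compilation with AVX enabled (e.g. -mavx); it is illustrative and not taken from the oneDNN kernels:

    #include <immintrin.h>

    // Sum eight floats with AVX, then clear the upper YMM bits so that any
    // non-VEX (legacy SSE) code executed afterwards does not pay a transition penalty.
    float avx_sum8(const float* p) {
        __m256 v  = _mm256_loadu_ps(p);               // VEX-encoded AVX load
        __m128 lo = _mm256_castps256_ps128(v);
        __m128 hi = _mm256_extractf128_ps(v, 1);
        __m128 s  = _mm_add_ps(lo, hi);
        s = _mm_hadd_ps(s, s);
        s = _mm_hadd_ps(s, s);
        float result = _mm_cvtss_f32(s);
        _mm256_zeroupper();                           // equivalent of emitting vzeroupper
        return result;
    }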
Rafal Blaczkowski
05c641ff7c Update azure onnx linux 2021/4 (#7534)
* update azure pipeline

* save
2021-09-16 14:50:10 +03:00
Vitaliy Urusovskij
05a768e357 Bump up MKLDNN version to get fix of c26453 warning (#7090) 2021-09-16 12:57:39 +03:00
Szymon Irzabek
a0b2200408 Gna 2dconv30 releases (#7387)
* [GNA] Port padding and 2d convolution support from master

* [GNA] Add separate fixes required after porting

* [GNA] Add support for activation without bias in padding and 2d convolution decomposition
2021-09-15 14:43:42 +03:00
Krzysztof Bruniecki
e339afe375 IE: Fix Windows compilation on MSVC (adapt PR: #7206 from master branch) (#7486)
`Win32` may be undefined when MSVC is used, see:
https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?redirectedfrom=MSDN&view=msvc-160

Signed-off-by: Karol Trzcinski <karolx.trzcinski@intel.com>
2021-09-14 17:49:01 +03:00
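MSVC predefines _WIN32 (and _WIN64 for 64-bit targets) rather than a bare Win32 symbol, so portable guards test the underscore form. A minimal illustration with a hypothetical helper, not the Inference Engine source:

    #if defined(_WIN32)   // predefined by MSVC, MinGW and clang-cl for Windows targets
    #include <windows.h>
    static void sleep_ms(unsigned ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    static void sleep_ms(unsigned ms) { usleep(ms * 1000); }
    #endif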
Evgeny Talanin
7f47108bb3 Add apt update (#7484) 2021-09-13 13:52:21 +03:00
Elizaveta Lobanova
b13bcbcd0d [GNA] Fixed scale factors propagation for Eltwise with very different inputs ranges (#7305) (#7445)
* [GNA] Fix scale factors propagation for Eltwise with very different inputs ranges

* [GNA] Added test

* [GNA] Added exception for scale factor <= 0

* [GNA] Disable tests with integer weights

* [GNA] Added assert for CNNLayer in getScaleFactor()

* [GNA] Added check if scale factor is inf

* [GNA] Fixed legacy tests
2021-09-13 13:01:37 +03:00
Alexey Suhov
c2bfbf29fb [README.md] change latest release to 2021.4.1 2021-09-10 00:08:38 +03:00
Artyom Tugaryov
14e67d8663 Check installed intel-opencl-icd to find version (#7427) 2021-09-08 17:22:18 +03:00
Artyom Tugaryov
d98cb7bdf8 Use dpkg-info to check installed driver (#7426) 2021-09-08 17:17:35 +03:00
Evgeny Talanin
fa18eecfb7 Revert "[IE CLDNN] Fixed code gen with custom locale (#6243)" (#7370)
This reverts commit fadeaecb6d.
2021-09-08 10:44:49 +03:00
Kate Generalova
3aa2f02240 doc: fix docker win links (#7402) 2021-09-07 15:17:14 +03:00
Sergey Lyubimtsev
a4fdc1c947 Fix for missed OMZ public models in openvino-dev package (#6568) (#7324)
(cherry picked from commit b893eae9e7)
2021-09-01 22:24:04 +03:00
Artyom Anokhov
f14f28f32b [Scripts] Detecting newest drivers on ubu20 (#7322)
* install_NEO_OCL_driver: Added detecting current driver via intel-opencl-icd package in case of newest one on ubuntu20. Added removing intel-opencl-icd package for ubuntu.

* install_NEO_OCL_driver: Fixed pattern for parsing driver version for newest drivers
2021-09-01 15:00:24 +03:00
Artyom Anokhov
e3baff25a6 install_NEO_OCL_driver: Added ocl-icd-libopencl1 for ubuntu18 case (#7314) 2021-08-31 19:46:51 +03:00
Anton Voronov
708825b439 [CPU] fixed convolution outputShape in ConvDWConv fusing (#7290) 2021-08-31 15:21:15 +03:00
Mikhail Letavin
91d85a88a1 [GPU] Remove unused copy constructor (#7221) 2021-08-31 14:14:51 +03:00
Denis Orlov
f77c3d7fdc [GNA] Include documentation for GNA3 QoS (#7046)
* [GNA] Include documentation for GNA3 QoS

* Fix according to review

* Fix the driver version
2021-08-31 12:54:05 +03:00
Artyom Anokhov
36f2e63c9c Update install neo driver script with 21.29.20389 (#7244)
* install_NEO_OCL_driver: Added installing 21.29.20389

* install_NEO_OCL_driver: Updated CMDs for installing 21.29.20389 on Ubuntu

* Update scripts/install_dependencies/install_NEO_OCL_driver.sh

Co-authored-by: Kate Generalova <kate.generalova@intel.com>

* Update scripts/install_dependencies/install_NEO_OCL_driver.sh

Co-authored-by: Kate Generalova <kate.generalova@intel.com>

* install_NEO_OCL_driver: Updated getting values from dictionaries

* install_NEO_OCL_driver: Updated descriptions

* install_NEO_OCL_driver: Added more packages to install

* install_NEO_OCL_driver: Added ocl-icd-2.2.12-1.el8.x86_64 for RHEL

* install_NEO_OCL_driver: Updated error message. Replaced `dnf update --refresh` with --refresh option for dnf install. Added software-properties-common in case if user doesn't have apt-add-repository.

* install_NEO_OCL_driver.sh: Fixed syntax

Co-authored-by: Kate Generalova <kate.generalova@intel.com>
2021-08-31 11:45:02 +03:00
Mikhail Ryzhov
d474617d12 [GNA] Fixed import of model with several inputs (#7277) (#7278)
* [GNA] Fixed import of model with several inputs

* Fixed copyright year
2021-08-31 11:09:57 +03:00
Elizaveta Lobanova
64a896c22e [GNA] Fixed accuracy degradation caused by the input quantization restriction (#7260) 2021-08-27 11:48:57 +03:00
Elizaveta Lobanova
2dcd09055f [GNA] Fixed calculation of input scale factor and search of the next layer for FQ (#7246)
* [GNA] Fixed search of the next layer for FQ

* [GNA] Fixed calculation of input scale factor for POT-quantized model in the case if the first layer after input is activation
2021-08-26 15:02:23 +03:00
Andrey Sapozhnikov
1a656f4e44 [GNA] Plugin transition to the library v3.0 (#7241)
* GNA Plugin transition to the library v3.0

* Remove Gna2RequestConfigEnableHardwareConsistency calls

Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>
2021-08-26 01:00:07 +03:00
Mikhail Ryzhov
0702b44174 [Samples] Enabled oname for imported models (#7228) 2021-08-25 12:57:27 +03:00
Krzysztof Bruniecki
ce21344585 [GNA] Fixes for GNA 3.0 library (#7180)
* Pass compileTarget to am_intel_dnn

* Enable tests for GNA lib version prefix 3.0

* Fix conv split transform for 2d cnn tests

* Apply review
2021-08-25 10:06:10 +03:00
Ilya Sharikov
3a28ffaf57 Updated path to gflags (#7200) 2021-08-24 13:57:40 +03:00
Anton Dudchenko
f63be649eb Update FW (#7209) 2021-08-24 12:02:23 +03:00
Polina Brzezinskaya
32a9e98437 [VPU] Added ConvertGather7ToGather1 pass to frontend (#7184) 2021-08-24 11:03:23 +03:00
Victor Kuznetsov
b76c903745 add pytest html (#7169) 2021-08-20 16:58:39 +03:00
Mikhail Ryzhov
4714b8edb8 [GNA] Set input scale factors for imported model (#7139) (#7140)
* [GNA] Set custom scale factor for imported model
2021-08-19 14:48:37 +03:00
Andrey Sapozhnikov
47d1f2147a [GNA] Plugin preparation for transition to the library v3 (#7047)
* GNA Plugin transition to the library v3

* Embedded device support

* Revert transition, keep preparation for future transition
2021-08-19 14:46:43 +03:00
Elizaveta Lobanova
61aa366706 [GNA] Fix order of SwapMatMulInput transformations (#7138) 2021-08-19 09:13:55 +03:00
Sergey Shlyapnikov
b16ce268eb [GPU] Fix incorrect fusions indexes for eltwise ref kernel (#7026) 2021-08-18 18:23:19 +03:00
Daria Mityagina
629de56910 [IE][VPU] Firmware update (#7102)
Update FW version from 1717 to 1729
2021-08-18 16:35:18 +03:00
Victor Kuznetsov
dad76527d6 fixed several issues in time tests (#7059)
* Fixed return in run_timetest
* Fixed parse_stats() argument type notation
* Pass message from timetest exe to test error msg to be uploaded to DB
2021-08-18 13:15:58 +03:00
Yury Gaydaychuk
b86ab12f0f [CPU][Release 2021.4.1] Fix roipooling border proposal computation for 2021.4 (#7116)
* roi_pooling handles border props correctly

* fix adapted for old test infrastructure
2021-08-17 23:20:54 +03:00
Yury Gaydaychuk
170e4d2cce [CPU] Interpolate handles inplace child layout (#6961) 2021-08-17 23:10:09 +03:00
Gleb Kazantaev
2a9eec1c3f Enable SplitSqueezeConcatFusion + TransposeSinking in offline transformations (#7087) 2021-08-17 14:39:05 +03:00
Mikhail Ryzhov
92420cd0d5 [Merge] Gna split align convert to conv filter and dependent (#6347 and #5946) (#7083)
* [GNA] Use stride instead of window for pooling (#5946)

* Use pool stride instead of window size where applicable

* Add test for pooling stride not equal to wnd

* Add more tests and cleanup

* Fix SW_FP32 legacy cnn

* [WIP] Refactor CNN1D

* Remove unused (commented out) code

* Add tests

* Gna split align convert to conv filter (#6347)

* Make unaligned split based on Conv instead of Affine

* Dump Gna2Tensor.Data pointer for debugging

* Apply suggestions from code review

* Reuse conv helpers

* Cleanup CNN fields

* Disable weights reducer on ConvolutionFilter
# Conflicts:
#	inference-engine/src/gna_plugin/backend/am_intel_dnn.cpp
#	inference-engine/src/gna_plugin/optimizer/gna_pass_manager.cpp

Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>
2021-08-17 10:26:40 +00:00
Elizaveta Lobanova
886254c5b9 Fixed Eltwise split and batch size selection during 2d reshape, transpose bias (#7099)
* [GNA] Transpose bias (#6759)

* transpose bias

* removed bias transpose; added bias validation predicate to pattern

* fixed after review; added handling of the case bias_output_shape.size() == 1 and bias_output_shape.at(0) > 1

* moved bias shape size check to matcher pattern; replaced loop with algorithm

* [GNA] Fixed Eltwise split and batch size selection during 2d reshape (#7042)

* [GNA] Fixed Eltwise split and batch size selection during 2d reshape

* [GNA] Added exception if memory isn't allocated for concat filter

* Added assert for minZeroDimSize

* [GNA] Added unit test for GetAlignedSplitSizes()

Co-authored-by: Dmitrii Khurtin <dmitrii.khurtin@intel.com>
2021-08-17 10:19:07 +03:00
Vladislav Volkov
e75e647ebe [CPU] Memory leaks in gemm module (#7093) 2021-08-17 10:07:59 +03:00
Maksim Kutakov
6e45b62be6 [CPU] Add reorder if the constant memory is not aligned, and isa is SSE (#7095) 2021-08-17 09:01:14 +03:00
Alexander Zhogov
4a70806d10 Azure CI: Remove IB on Windows (#7097) 2021-08-16 21:36:38 +03:00
Dmitrii Khurtin
761c645f14 [GNA] Convolution to matmul (#7029)
* [GNA] Remove transposes around MatMul

* Added tests for transformation HandleTransposesAroundMatMul

* Move IsTransposeSupported function to GNA limitations file

* added TransposeAfterMatmul tests and moved InsertTransposeBeforeMatmul tests to handle_transposes_around_matmul.cpp

* added infinite loop checker and memory concat test

* fixed build errors

* changed the conditions for selecting an input of Concat for ScaleFactor calculation when entering an infinite loop

* fixed after review

* s/INSTANTIATE_TEST_SUITE_P/INSTANTIATE_TEST_CASE_P

* .ignore

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
2021-08-16 20:41:14 +03:00
Mikhail Ryzhov
e19b3befb7 [GNA] Disabled TransposeReduction (#7011) (#7062)
* [GNA] Disabled TransposeReduction  (#7011)

* Rebase master

* [gna] Fixed export/import precision

* Revert "[gna] Fixed export/import precision"

This reverts commit d381a2e216.

* Rebase master

* [gna] Fixed export/import precision

* Revert "[gna] Fixed export/import precision"

This reverts commit d381a2e216.

* Fixed transposition error

* [GNA] Added tests for conv wrapped to transpose

* Code review fixes

* Fixed copyright year

* Replaced test suite with case
2021-08-16 14:04:43 +03:00
Mikhail Ryzhov
fb3ceb6aa4 [GNA] Fixed issue for concat connection to memory layer (#6985) (#7058)
* [GNA] Fixed issue for concat connection to memory layer (#6985)

* Fix for concat connection to memory layer

* reverted merge files

* Replaced opset8
2021-08-16 11:04:29 +03:00
Paul Youngsoo Ahn
9a8d8440a5 [GPU] Set TBB affinity in load_network (#7049) 2021-08-16 09:14:22 +03:00
Mikhail Ryzhov
543ea75813 [GNA] FQ accuracy fixes (#6924) (#7061) 2021-08-15 22:36:15 +03:00
Maria Kaglinskaya
f03763defe Pruning for 2021.4 release (#6987)
* Fix Pruning for case with INT8 GroupConvolution operation (#6872)

* Added i8, u8 precisions in Gather constant folding (#6195)

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@intel.com>
2021-08-13 18:23:35 +03:00
Elizaveta Lobanova
114ed1cb4b [GNA] Support bias and FQ in SwapInputMatMul transformation (#6996) (#7027) 2021-08-13 12:07:08 +03:00
Mikhail Ryzhov
3117879c54 [GNA] Added support of FQ layers for outputs (#6999)
* [GNA] Added support of FQ layers for outputs (#6905)

* [GNA] Fixed FQ pass for several outputs

* Added tests
2021-08-12 22:54:02 +03:00
Anton Voronov
2f48787fc4 [CPU] gemm inner product - memory access fix (#7009) 2021-08-12 10:15:40 +03:00
Anton Voronov
7848ac7a74 [CPU] fixed conv + dw conv fusing (#6975) 2021-08-12 09:42:06 +03:00
Alexey Lebedev
62f126cdd2 [IE PYTHON] release GIL (#6968)
* Release GIL in load_network

* release gil in infer, wait and get_idle_request_id

* release gil in read_network and IECore.__cinit__

* release GIL in properties

* Release GIL in infer_async

* Add test

* Fix test

* Fix test
2021-08-09 22:34:11 +03:00
Kate Generalova
4d1c358aa3 doc: refactor docker install guide (#6988)
* doc: refactor docker install guide

* doc: refactor docker install guide windows

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-windows.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2021-08-09 17:56:25 +03:00
Maksim Kutakov
bf51d49ad1 [CPU] Get/Set Blob overhead has been eliminated. (#6737) (#6933)
(cherry picked from commit e47a85b427)
2021-08-06 16:59:00 +03:00
Kate Generalova
6d9699681f doc: fix 58710 issue (#6911) 2021-08-06 12:05:33 +03:00
Vladimir Gavrilov
0b248b68dd Fix for summarize_graph.py. (#6904) 2021-08-05 12:09:31 +03:00
Daria Mityagina
d286e0a9ad [IE][VPU] Added support for 2 axis for MVN layer - duplicate (#6748) (#6778)
Co-authored-by: Polina <polina.brzezinskaya@intel.com>
This PR adds support for Person-reidentification-retail model on VPU device by adding support for {2} axis in MVN layer
2021-08-04 10:29:40 +03:00
Tatiana Savina
21ed761569 [59449][DOCS] GPU table layout change (#6789)
* changed argument display

* added br tag to more arguments

* changed argument display in GPU table

* changed more arguments

* changed Quantized_ models display
2021-08-02 20:00:46 +03:00
Dmitrii Khurtin
2639f35543 fixed crash related to loading model with fq and sigmoid (#6866)
* fixed crash related to loading model with fq and sigmoid

* renamed multiple_input.* to multiple_input_fq.*; removed two unnecessary FQ layers from smoke_fq_fusion_with_sigmoid test; moved FQ params to test params

* s/INSTANTIATE_TEST_CASE_P/INSTANTIATE_TEST_SUITE_P/
2021-08-02 12:20:50 +03:00
iliya mironov
1bbd91506b Fixing Split-Concat reverse input channel (#6765)
* Fix split-concat reverse input channel

* Fix comments
2021-07-28 15:47:46 +03:00
Alina Kladieva
9a31a3d821 Bump Jenkins library (#6817) 2021-07-27 13:24:53 +03:00
Evgenya Stepyreva
9acc3dfe68 Zero point optimization (#6628)
* Zero point optimization

* Expand the equality to zero criteria
2021-07-25 13:33:29 +03:00
Alina Kladieva
205c23b382 Commit to bump Jenkins library (#6781) 2021-07-23 17:16:49 +03:00
Maxim Vafin
e48965683b Fix node name issue introduced by #5854 (#6709) (#6736)
* Fix node name issue introduced by #5854
* Compare names in TransposeFuse tests
2021-07-23 03:28:34 +03:00
Ilya Lavrenov
eaa5a22979 Fixed build with cmake 3.21 (#6757) 2021-07-22 19:55:38 +03:00
Vitaliy Urusovskij
bfdd1a199f Develop installation rules for time and stress tests (#6649) (#6738)
* Develop installation rules for time and stress tests (#6649)

* Prepare `install` rule for time_tests

* Prepare `install` rule for stress tests

* Update path to `gflags`
2021-07-22 10:43:15 +03:00
Ilya Churaev
096a92dcb3 Port extension fix (#6725) 2021-07-21 11:05:14 +03:00
Andrey Zaytsev
7a05a12190 Feature/azaytsev/changes from baychub revise demos samples (#6651)
* Edits to MO

Per findings spreadsheet

* macOS changes

per issue spreadsheet

* Fixes from review spreadsheet

Mostly IE_DG fixes

* Consistency changes

* Make doc fixes from last round of review

* Add GSG build-all details

* Fix links to samples and demos pages

* Make MO_DG v2 changes

* Add image view step to classify demo

* Put MO dependency with others

* Edit docs per issues spreadsheet

* Add file to pytorch_specific

* More fixes per spreadsheet

* Prototype sample page

* Add build section

* Update README.md

* Batch download/convert by default

* Add detail to How It Works

* Minor change

* Temporary restored topics

* corrected layout

* Resized

* Added white background into the picture

* fixed link to omz_tools_downloader

* fixed title in the layout

Co-authored-by: baychub <cbay@yahoo.com>
Co-authored-by: baychub <31420038+baychub@users.noreply.github.com>
2021-07-16 11:41:04 +03:00
iliya mironov
5d39724934 Imironov/add missing descriptions for transformations configs releases (#6521)
* Add retinanet doc

* Update doc

* Splite some text to several lines

* Update ie_docs
2021-07-15 18:24:15 +02:00
Dmitry Pigasin
7cec19fe6e Add information about model preparation (#6661) 2021-07-15 10:04:11 +00:00
Andrey Zaytsev
568096ddeb Install guides improvements (#6418) (#6648)
* Install guides improvements

* add bullet to conda  System Requirements

* fix formating

* - Add conda install command for Ubuntu20

- fix typo /tmp

* added conda prerequisites

* Update installing-openvino-apt.md

* Update installing-openvino-conda.md

* Update installing-openvino-conda.md

CentOS 7.6

* Update installing-openvino-apt.md

APT Repository

* Update installing-openvino-conda.md

Added Introduction & notice about runtime package

* Update installing-openvino-conda.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
# Conflicts:
#	docs/install_guides/installing-openvino-conda.md
#	inference-engine/thirdparty/ade

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
2021-07-14 22:31:16 +03:00
Krzysztof Bruniecki
34bda79333 [GNA] Introduce an option to invoke the QoS feature (#6604)
* cherry picked (#5827)
* Issue 56759
* Introduce HW_WITH_SW_FBACK
* Add unit test for HW_WITH_SW_FBACK
* Enable HW_WITH_SW_FBACK in speech_sample cpp
* Use perf counters to report number of HW delivered frames to the user (eg speech_sample)
* Add GNA frequency for 6/151 CPU family/model
* Update inference-engine/samples/speech_sample/main.cpp
* Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2021-07-14 19:14:38 +03:00
Dmitry Pigasin
0da68d9c70 Add a check for existence of output images (#6640) 2021-07-14 12:28:50 +03:00
Pavel Esir
a82011199a [doc] corrections for Yolo V1 and V2 coversion (#6284)
* added corrections for Yolo V1 and V2

* changed order of conversion commands : v1 goes first general last

* aligned line endings

* added commit hash

* clarified about downloaded files location

* added missing <br>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2021-07-13 18:39:02 +03:00
Edward Shogulin
6bbec510b0 [Runtime] INT8 inference documentation update (#6419)
* [Runtime] INT8 inference documentation update

* [Runtime] INT8 inference documentation: typo was fixed

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Table of Contents was removed

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2021-07-12 20:15:22 +03:00
Tatiana Savina
90eaa2666a [58825][58829][DOCS] Installation docs changes (#6548)
* install docs fixes

* changed video width

* CMake reference added

* fixed table

* added backtics and table formating

* new table changes

* GPU table changes

* added more backtics and changed table format

* gpu table changes

* Update get_started_dl_workbench.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2021-07-12 14:52:17 +03:00
Alexander Zhogov
4eb4ee1882 Change IE version to 2021.4.1 2021-07-07 17:11:42 +03:00
Vladimir Paramuzov
fadeaecb6d [IE CLDNN] Fixed code gen with custom locale (#6243) 2021-07-06 10:03:57 +03:00
Anastasiya Ageeva
a5c930eeaa Fixed CVS-58871 (#6519) 2021-07-05 17:51:24 +03:00
Nikolay Tyukaev
5135425bb9 automatic insertion of htmlonly (#6466) 2021-06-30 20:07:31 +03:00
Victor Kuznetsov
204c4ba79a add target branch condition (fix null value) (#6467) 2021-06-30 17:52:55 +03:00
Dmitry Pigasin
640ab71b6a [IE Python Sample Docs 2021.4] Use @ref for validated model links (#6465)
* Use @ref for validated model field

* Add link to googlenet-v1
2021-06-30 16:49:50 +03:00
Dmitry Pigasin
0361fc8e2d Fix other language realization links (#6464) 2021-06-30 12:39:44 +03:00
Alexey Suhov
ccae439943 [README.md] change latest release to 2021.4 2021-06-29 21:49:40 +03:00
Nikolay Tyukaev
5cee8bbf29 omz layout fix (#6450)
* fix layout

* 4
2021-06-29 18:20:39 +00:00
Andrey Zaytsev
a220a0a7af Feature/azaytsev/docs 2021 4 (#6447)
* Added benchmark page changes

* Make the picture smaller

* Added Intel® Iris® Xe MAX Graphics

* Changed the TIP about DL WB

* Added Note on the driver for Intel® Iris® Xe MAX Graphics

* Fixed formatting

* Added the link to Intel® software for general purpose GPU capabilities

* OVSA ovsa_get_started updates

* Fixed link
2021-06-29 20:38:51 +03:00
Andrey Zaytsev
af2fec9a00 Feature/azaytsev/changes from baychub colin q2 (#6437)
* Q2 changes

* Changed Convert_RNNT.md

Co-authored-by: baychub <cbay@yahoo.com>
2021-06-29 18:16:46 +03:00
Nikolay Tyukaev
cca57782ce doc fixes (#6438)
* doc fixes

* doc fix

* doc fix
2021-06-29 13:04:12 +00:00
Andrey Zaytsev
c2e8c3bd92 Feature/azaytsev/mo devguide changes (#6405)
* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Additional edits

* Additional edits

* Updated the workflow diagram

* Minor fix

* Experimenting with videos

* Updated the workflow diagram

* Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Rolled back

* Revert "Rolled back"

This reverts commit 6a4a3e1765.

* Revert "Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer"

This reverts commit 0810bd534f.

* Fixed ie_docs.xml, Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Fixed ie_docs.xml

* Minor fix

* <details> tag issue

* <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue
2021-06-29 03:59:24 +03:00
Tatiana Savina
4833c8db72 [DOCS]Changed DL WB related docs and tips (#6318)
* changed DL WB related docs and tips

* added two tips to benchmark and changed layout

* changed layout

* changed links

* page title added

* changed tips

* ie layout fixed

* updated diagram and hints

* changed tooltip and ref link

* changed tooltip link

* changed DL WB description

* typo fix
2021-06-28 16:39:15 +03:00
Andrey Zaytsev
3352b483b9 Updated legal info (#6409) 2021-06-28 16:13:25 +03:00
iliya mironov
c40da68a2b Update yolov4 docs (#6270)
* Update yolov4 docs

* Update docs

* Update docs

* Update doc

* Update docs

* Update doc

* Update doc according to review

* Update doc according to review

* Update Convert_YOLO_From_Tensorflow.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2021-06-23 10:01:33 +00:00
Yegor Kruglov
0a959ef8e5 script fix (#6306) 2021-06-23 09:25:26 +03:00
Ivan Tikhonov
cd81789d29 Update LowLatency documentation (#6267)
* update LowLatency documentation

* fix misprint

* add deprecated attribute

* Apply suggestions from code review

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Fix spelling mistakes

* resolve review comments

* refactoring

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2021-06-22 15:36:42 +03:00
Yegor Kruglov
55fb7c6663 [BERT-NER] Document model support (to release branch) (#6239)
* added docs

* update doc

* error correction
2021-06-22 11:46:03 +00:00
Mikhail Nosov
1aa89edbf3 Docs: Model caching feature overview (#6275) 2021-06-22 01:06:54 +03:00
Denis Orlov
6ab6983778 [Doc] Reference POT in documentation for GNA plugin (#6248) 2021-06-21 19:38:27 +03:00
Pavel Esir
fb4d52068b [doc] Update DeepSpeech (#6249)
* corrected output names in DeepSpeech conversion doc

* mo args correction

* changed instruction for DeepSpeech version 0.8.2

* added venv activate; removed redundant ending

* added picture and squashed MO graph input args into one

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* applied review comments

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2021-06-21 19:14:42 +03:00
Xie Zhengtian
21514fa9d5 [Doc] Update auto device plugin doc for 2021.4 release (#6271)
* Update auto-device plugin doc

* Add openvino_docs_IE_DG_supported_plugins_AUTO into web page

Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>

* Update AUTO.md

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
2021-06-21 17:58:19 +03:00
Anton Chetverikov
bb8e2c3137 Add documentation on how to convert RCAN model (#6259)
* Add documentation on how to convert RCAN model

* Apply review feedback
2021-06-21 13:36:44 +00:00
Ilya Lavrenov
7a316dcde3 Fixed links to OMZ / DL Streamer (#6257) 2021-06-21 11:29:54 +03:00
Eugeny Volosenkov
abe9005ffb [Attention OCR] Document model support (#6244)
* Add doc How to convert AttentionOCR

* Add converting

* Add converting 2

* Add converting 3

* Fix document

* Fix document1

* Fix document1

* Fix document1

* Fix ie_docs

* Fix model/path

* Add link to Convert_Model_From_TensorFlow.md

* fix doc

* Fix documentation
2021-06-21 09:49:06 +03:00
Ilya Lavrenov
c6654b9c81 Deprecated API updates (#6252) 2021-06-21 08:26:24 +03:00
Maria Kaglinskaya
58dd421d58 Fix klocwork issues in pruning transformation (#6175)
* Fix klocwork issues in pruning transformation

* Fixed tabs
2021-06-17 16:11:31 +03:00
Maxim Vafin
64bc081abc Add iSeeBetter to supported PyTorch models list (#6177)
* Add iSeeBetter to supported PyTorch models list

* Apply feedback

* Apply feedback
2021-06-17 15:37:02 +03:00
Taylor Yeonbok Lee
c5b65f2cb1 [IE CLDNN] Disable crop optimization only when the node is inside a loop body program (#6198) 2021-06-16 17:46:33 +03:00
Xie Zhengtian
59ffa90724 [Doc] Add doc for Auto-Device Plugin 2021.4 (#6190)
* Add doc for Auto-Device Plugin

Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>

* Update doc for auto-device plugin

Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>
2021-06-16 16:51:01 +03:00
Edward Shogulin
cb4dcbce83 [LPT] Empty shape on weights handling (#6169)
* [LPT] empty shape on weights fix

* [LPT] SplitTransformation naming fix

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2021-06-16 16:45:43 +03:00
Vladislav Volkov
5670e9d8d0 [CPU] Memory leak in jit_uni_i8i8_pooling kernel (#6189) 2021-06-16 15:55:08 +03:00
Gleb Kazantaev
e47287264c Remove Pruning from MO (#6183) 2021-06-16 14:52:54 +03:00
Szymon Durawa
fe1563f0f0 Fix external_port_id serialization for loop op. (#6168) 2021-06-16 12:03:42 +03:00
Edward Shogulin
e87ab16e7c [LPT] Reshape folding extending (#6148)
* [LPT] Reshape folding extending

* [LPT] tests addition

* typo quick fix
2021-06-16 11:14:26 +03:00
Chenhu Wang
cf5c072cf4 [CPU] Change rounding type in load/store emitters (#6156) 2021-06-16 10:35:13 +03:00
Anton Romanov
6b3a652e54 Added copyrights note in CmakeLists (#6158) 2021-06-15 16:15:41 +03:00
Mikhail Nosov
66eef3c3d9 [Caching] Klocwork fixes (#6162) 2021-06-15 14:58:50 +03:00
Andrey Somsikov
0accd09c45 Add cnpy fuzz test and fix issues (#6109) (#6159) 2021-06-15 11:05:10 +00:00
Edward Shogulin
f339cf70c6 [LPT] FakeQuantize folding fix to support ConvolutionBackpropData with FQ on weights (#6118) 2021-06-15 12:36:41 +03:00
Elena Gvozdeva
2ec6d9590c add single_layer_test for Interpolate-1 (#6133) 2021-06-12 10:47:24 +03:00
Zlobin Vladimir
ca116ab8d1 Fix InferRequest::operator!() (#6140) 2021-06-12 10:46:46 +03:00
Ilya Lavrenov
84e935c0f2 Fixed InferenceEngineConfig.cmake usage in include() (#6136) 2021-06-11 13:10:35 +03:00
Alexander Zhogov
5859d44abc GitHub CI: Check committer email 2021-06-11 11:19:50 +03:00
Alexander Zhogov
7b67a83d8c GitHub CI: Fix checking commits in case of no email 2021-06-10 12:52:22 +03:00
9972 changed files with 49323 additions and 29254 deletions

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""

View File

@@ -1,14 +1,25 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/4
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/4
jobs:
- job: Lin
@@ -43,6 +54,7 @@ jobs:
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
echo cmake info ; which cmake ; cmake --version
lsb_release
env
cat /proc/cpuinfo
@@ -52,14 +64,16 @@ jobs:
df
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
free -h
echo TargetBranch: $(System.PullRequest.TargetBranch)
echo SourceBranch: $(Build.SourceBranch)
displayName: 'System info'
- script: |
set -e
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR)
rm -rf $(BUILD_SAMPLES_DIR) ; mkdir $(BUILD_SAMPLES_DIR)
echo TargetBranch: $(System.PullRequest.TargetBranch)
echo SourceBranch: $(Build.SourceBranch)
displayName: 'Make dir'
- checkout: self
@@ -80,7 +94,8 @@ jobs:
path: testdata
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
set -e
sudo apt --assume-yes update && sudo apt --assume-yes install libusb-1.0-0-dev
# For opencv-python: setuptools and upgrade
sudo apt-get install python3-setuptools patchelf
python3 -m pip install --upgrade pip
@@ -89,7 +104,7 @@ jobs:
# For running Python API tests
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/src/requirements-dev.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
# Speed up tests
@@ -105,6 +120,7 @@ jobs:
-DVERBOSE_BUILD=ON
-DENABLE_TEMPLATE_PLUGIN=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DBUILD_cuda_plugin=OFF
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/bin/python3.6
-DENABLE_WHEEL=ON

View File

@@ -1,3 +1,12 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
jobs:
- job: LinCC
# About 150% of total time
@@ -10,14 +19,12 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 16
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE)
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh
@@ -30,6 +37,7 @@ jobs:
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
echo cmake info ; which cmake ; cmake --version
lsb_release
env
cat /proc/cpuinfo
@@ -53,10 +61,11 @@ jobs:
path: openvino
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
set -e
sudo apt --assume-yes update && sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
workingDirectory: $(WORK_DIR)
@@ -76,12 +85,14 @@ jobs:
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build'
displayName: 'Build LinCC'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
displayName: 'List bin files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install files'

View File

@@ -1,22 +1,42 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
jobs:
- job: nGraph_ONNX_Lin
- job: OpenVINO_ONNX_CI
strategy:
matrix:
Release:
BUILD_TYPE: 'Release'
PROTOBUF_LITE: 'ON'
TOX_COMMAND: 'tox && tox -e zoo_models'
Debug:
BUILD_TYPE: 'Debug'
PROTOBUF_LITE: 'ON'
TOX_COMMAND: 'tox'
maxParallel: 2
# About 300% of total time
timeoutInMinutes: 90
pool:
name: LIN_VMSS_VENV_ONNX_WU2
name: LIN_VMSS_VENV_ONNX_U20_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
WORK_DIR: $(Pipeline.Workspace)/_w
MODELS_DIR: /mount/cinfsshare/onnxtestdata
TMP_DIR: /mnt/tmp
ONNX_MODEL_ZOO_SHA: "d58213534f2a4d1c4b19ba62b3bb5f544353256e"
steps:
- script: |
@@ -27,6 +47,7 @@ jobs:
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
echo cmake info ; which cmake ; cmake --version
lsb_release
env
cat /proc/cpuinfo
@@ -40,10 +61,10 @@ jobs:
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(MODELS_DIR)
sudo apt --assume-yes install nfs-common
sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(MODELS_DIR) -o vers=4,minorversion=1,sec=sys
mkdir -p $(MODELS_DIR)/models_data
displayName: 'Make dirs'
- checkout: self
@@ -52,31 +73,23 @@ jobs:
submodules: recursive
path: openvino
- script: docker build --tag=openvino-onnx-ci-image --file=.ci/openvino-onnx/Dockerfile .
displayName: 'Docker build'
- script: ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d $(TMP_DIR) -o
displayName: 'Get models'
- script: |
##wget -O "$(TMP_DIR)/msft.zip" https://onnxruntimetestdata.blob.core.windows.net/models/20191107.zip
##unzip "$(TMP_DIR)/msft.zip" -d "$(MODELS_DIR)/msft"
#unzip "/mnt/onnxtestdata/models/20191107.zip" -d "$(MODELS_DIR)/msft"
#mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_sorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_0
#mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_unsorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_1
displayName: 'Get MSFT models'
enabled: false
set -e
sudo apt --assume-yes install git-lfs uidmap
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
- script: |
ls -alR $(MODELS_DIR)
ls -alR $(TMP_DIR)
displayName: 'List models'
enabled: false
- script: ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d $(MODELS_DIR)/models_data -o -s "$(ONNX_MODEL_ZOO_SHA)"
displayName: 'Update models'
condition: ne(variables['BUILD_TYPE'], 'Debug')
- script: sudo fallocate -l 48G /swapfile ; sudo mkswap /swapfile ; sudo swapon /swapfile ; df ; free -h
- script: sudo docker build --tag=openvino-onnx-ci-image --file=.ci/openvino-onnx/Dockerfile --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg PROTOBUF_LITE=$(PROTOBUF_LITE) .
displayName: 'Docker build $(BUILD_TYPE) protobuf-lite: $(PROTOBUF_LITE)'
- script: sudo fallocate -l 64G /swapfile ; sudo mkswap /swapfile ; sudo swapon /swapfile ; df ; free -h
displayName: 'Create swap'
- script: |
docker run --name openvino-onnx-ci-container --volume $(TMP_DIR)/model_zoo:/root/.onnx/model_zoo --volume $(MODELS_DIR)/msft:/root/.onnx/model_zoo/MSFT openvino-onnx-ci-image
displayName: 'Docker run'
- script: sudo docker run --name openvino-onnx-ci-container --volume $(MODELS_DIR)/models_data/model_zoo/onnx_model_zoo_$(ONNX_MODEL_ZOO_SHA):/root/.onnx/model_zoo/onnx_model_zoo --volume $(MODELS_DIR)/msft:/root/.onnx/model_zoo/MSFT openvino-onnx-ci-image /bin/bash -c "$(TOX_COMMAND)"
displayName: 'Docker run $(BUILD_TYPE) protobuf-lite: $(PROTOBUF_LITE)'

View File

@@ -1,15 +1,23 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
jobs:
- job: onnxruntime
timeoutInMinutes: 90
pool:
name: LIN_VMSS_VENV_ONNX_WU2
name: LIN_VMSS_VENV_ONNX_U20_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
ONNXRUNTIME_REPO_DIR: $(REPO_DIR)/../onnxruntime
@@ -20,6 +28,7 @@ jobs:
BUILD_DIR: $(WORK_DIR)/build
ONNXRUNTIME_UTILS: $(REPO_DIR)/.ci/azure/ci_utils/onnxruntime
ONNXRUNTIME_BUILD_DIR: $(ONNXRUNTIME_REPO_DIR)/build
steps:
- script: |
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
@@ -29,6 +38,7 @@ jobs:
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
echo cmake info ; which cmake ; cmake --version
lsb_release
env
cat /proc/cpuinfo
@@ -44,7 +54,7 @@ jobs:
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(MODELS_DIR)
sudo apt --assume-yes install nfs-common
sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(MODELS_DIR) -o vers=4,minorversion=1,sec=sys
displayName: 'Make dirs'
@@ -60,15 +70,14 @@ jobs:
displayName: 'Clone onnxruntime'
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
# For opencv-python: setuptools and upgrade
sudo apt-get install python3-setuptools
set -e
$(REPO_DIR)/install_build_dependencies.sh
python3 -m pip install --upgrade pip
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
# For running Python API tests
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/src/requirements-dev.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
# Speed up tests
@@ -83,7 +92,7 @@ jobs:
-GNinja
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/bin/python3.6
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_VPU=OFF
-DENABLE_GNA=OFF
-DENABLE_OPENCV=OFF
@@ -104,10 +113,10 @@ jobs:
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Lin'
displayName: 'Build Lin ONNX'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
displayName: 'List bin files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
@@ -118,7 +127,7 @@ jobs:
echo "2021.2" > $(INSTALL_DIR)/deployment_tools/inference_engine/version.txt
CXXFLAGS="-Wno-error=deprecated-declarations" ./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --skip_tests --build_dir $(ONNXRUNTIME_BUILD_DIR)
workingDirectory: $(ONNXRUNTIME_REPO_DIR)
displayName: 'Build ONNX Runtime'
displayName: 'Build Lin ONNX Runtime'
- script: |
source $(INSTALL_DIR)/bin/setupvars.sh

View File

@@ -1,14 +1,25 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/4
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/4
jobs:
- job: Mac
@@ -22,7 +33,6 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 3
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
@@ -37,11 +47,11 @@ jobs:
- script: |
whoami
uname -a
which python3
python3 --version
which java
java -version
gcc --version
echo Python3 info ; which python3 ; python3 --version
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
echo cmake info ; which cmake ; cmake --version
xcrun --sdk macosx --show-sdk-version
env
sysctl -a
@@ -90,21 +100,27 @@ jobs:
# Disable errors with Ninja
export CXXFLAGS="-Wno-error=unused-command-line-argument"
export CFLAGS="-Wno-error=unused-command-line-argument"
cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DBUILD_cuda_plugin=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: ls -alR $(REPO_DIR)/inference-engine/temp/
displayName: 'List temp SDKs'
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Mac'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
displayName: 'List bin files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install files'
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-NGraphUT.xml
displayName: 'nGraph UT'
continueOnError: false

View File

@@ -1,14 +1,25 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/4
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/4
jobs:
- job: Win
@@ -16,13 +27,12 @@ jobs:
timeoutInMinutes: 120
pool:
name: WIN_VMSS_VENV_F8S_WU2
name: WIN_VMSS_VENV_F16S_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
@@ -35,14 +45,13 @@ jobs:
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
INSTALL_DIR: $(WORK_DIR)\install_pkg
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;%PATH%
steps:
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
python3 --version
where python
python --version
where java
@@ -60,12 +69,6 @@ jobs:
rd /Q /S $(BUILD_SAMPLES_DIR) & mkdir $(BUILD_SAMPLES_DIR)
displayName: 'Make dir'
- script: |
certutil -urlcache -split -f https://openvinoweb.z5.web.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
call install_ib_console.bat
workingDirectory: $(WORK_DIR)
displayName: 'Install IncrediBuild'
- checkout: self
clean: true
lfs: false
@@ -84,7 +87,8 @@ jobs:
path: testdata
- script: |
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
rem Speed up build
powershell -command "Invoke-WebRequest https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-win.zip -OutFile ninja-win.zip"
powershell -command "Expand-Archive -Force ninja-win.zip"
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
@@ -92,13 +96,14 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DBUILD_cuda_plugin=OFF -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
- script: dir $(REPO_DIR)\inference-engine\temp\ /s
displayName: 'List temp SDKs'
- script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
@@ -120,6 +125,9 @@ jobs:
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build c samples'
- script: rd /Q /S $(BUILD_DIR)
displayName: 'Clean build dir'
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
@@ -128,8 +136,8 @@ jobs:
- script: |
set PATH=$(TEST_ENV_PATH)
"$(IB_TESTCONSOLE)" $(BIN_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests-IB.xml
displayName: 'IE UT old - IB'
$(BIN_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
displayName: 'IE UT old'
- script: |
set PATH=$(TEST_ENV_PATH)
@@ -175,9 +183,8 @@ jobs:
- script: |
set PATH=$(TEST_ENV_PATH)
rem $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
"$(IB_TESTCONSOLE)" $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke*:-*CompareWithRefs/base_size=16_pre_nms_topn=100_post_nms_topn=100_nms_thresh=0.7_feat_stride=1_min_size=1_ratio* --gtest_output=xml:TEST-cpuFuncTests-IB.xml /testlevel=24
displayName: 'CPU FuncTests - IB'
$(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
- script: |
@@ -200,8 +207,3 @@ jobs:
buildPlatform: 'x64' # Optional
buildConfiguration: 'Windows' # Optional
#publishRunAttachments: true # Optional
- script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
displayName: Stop IncrediBuild
continueOnError: true
enabled: false

View File

@@ -1,7 +1,16 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
jobs:
- job: WinCC
# About 150% of total time
timeoutInMinutes: 120
timeoutInMinutes: 60
pool:
name: WIN_VMSS_VENV_F8S_WU2
@@ -10,26 +19,22 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: D:\build
BIN_DIR: $(REPO_DIR)\bin\intel64
MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
INSTALL_DIR: $(WORK_DIR)\install_pkg
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
steps:
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
python3 --version
where python
python --version
where java
@@ -46,12 +51,6 @@ jobs:
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
displayName: 'Make dir'
- script: |
certutil -urlcache -split -f https://openvinoweb.z5.web.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
call install_ib_console.bat
workingDirectory: $(WORK_DIR)
displayName: 'Install IncrediBuild'
- checkout: self
clean: true
lfs: false
@@ -59,7 +58,8 @@ jobs:
path: openvino
- script: |
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
rem Speed up build
powershell -command "Invoke-WebRequest https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-win.zip -OutFile ninja-win.zip"
powershell -command "Expand-Archive -Force ninja-win.zip"
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
@@ -70,20 +70,19 @@ jobs:
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
- script: dir $(REPO_DIR)\inference-engine\temp\ /s
displayName: 'List temp SDKs'
- script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
displayName: 'Build Win CC'
- script: dir $(REPO_DIR)\bin\ /s
displayName: 'List files'
displayName: 'List bin files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
displayName: Stop IncrediBuild
continueOnError: true
enabled: false
- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'

View File

@@ -1,6 +1,6 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging

View File

@@ -1,6 +1,6 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests

View File

@@ -1,6 +1,6 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse

View File

@@ -1,6 +1,6 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests

View File

@@ -1,6 +1,6 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import datetime

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
@@ -139,7 +139,7 @@ def update_labels(gh_api, pull, non_org_intel_pr_users, non_org_pr_users):
def get_wrong_commits(pull):
"""Returns commits with incorrect user and email"""
pr_author_email = pull.user.email.lower()
pr_author_email = (pull.user.email or "").lower()
print("GitHub PR author email:", pr_author_email)
print("Check commits:")
wrong_commits = set()
@@ -147,21 +147,29 @@ def get_wrong_commits(pull):
# import pprint; pprint.pprint(commit.raw_data)
print("Commit SHA:", commit.sha)
# Use raw data because commit author can be non GitHub user
commit_email = commit.raw_data["commit"]["author"]["email"].lower()
print(" Commit email:", commit_email)
commit_author_email = (commit.raw_data["commit"]["author"]["email"] or "").lower()
commit_committer_email = (commit.raw_data["commit"]["committer"]["email"] or "").lower()
print(" Commit author email:", commit_author_email)
print(" Commit committer email:", commit_committer_email)
if not github_api.is_valid_user(commit.author):
print(
" ERROR: User with the commit email is absent in GitHub:",
" ERROR: User with the commit author email is absent in GitHub:",
commit.raw_data["commit"]["author"]["name"],
)
wrong_commits.add(commit.sha)
if not github_api.is_valid_user(commit.committer):
print(
" ERROR: User with the commit committer email is absent in GitHub:",
commit.raw_data["commit"]["committer"]["name"],
)
wrong_commits.add(commit.sha)
if not commit.raw_data["commit"]["verification"]["verified"]:
print(
" WARNING: The commit is not verified. Reason:",
commit.raw_data["commit"]["verification"]["reason"],
)
if pr_author_email != commit_email:
print(" WARNING: Commit email and GitHub PR author public email are differnt")
if pr_author_email != commit_author_email or pr_author_email != commit_committer_email:
print(" WARNING: Commit emails and GitHub PR author public email are differnt")
return wrong_commits
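The two patterns the hunk above introduces are easy to miss in diff form: emails are normalized with `(email or "").lower()` so a missing public email cannot raise an AttributeError, and both the author and the committer identities are checked, along with the commit verification flag. Below is a minimal, self-contained sketch of that logic, assuming commit data shaped like the `commit.raw_data` dictionaries used above; the helper names are hypothetical and only for illustration.

# Sketch of the checks above, operating on a plain dict shaped like
# PyGithub's commit.raw_data (an assumption for illustration only).

def normalize_email(email):
    # A missing public email comes back as None; fall back to an empty
    # string before lowercasing so the comparison never raises.
    return (email or "").lower()

def emails_match(pr_author_email, raw_commit):
    author = normalize_email(raw_commit["commit"]["author"]["email"])
    committer = normalize_email(raw_commit["commit"]["committer"]["email"])
    pr_email = normalize_email(pr_author_email)
    # Both the author and the committer email must match the PR author's
    # public email, mirroring the warning emitted in get_wrong_commits.
    return pr_email == author and pr_email == committer

def verification_state(raw_commit):
    # GitHub exposes per-commit signature verification in raw data.
    verification = raw_commit["commit"]["verification"]
    return verification["verified"], verification.get("reason", "unknown")

if __name__ == "__main__":
    raw = {"commit": {"author": {"email": None},
                      "committer": {"email": "Dev@Example.com"},
                      "verification": {"verified": False, "reason": "unsigned"}}}
    print(emails_match("dev@example.com", raw))   # False: author email missing
    print(verification_state(raw))                # (False, 'unsigned')

The sketch only demonstrates the null-safe normalization and the dual author/committer comparison; the real script additionally resolves GitHub users via its own `github_api` helpers, which are not reproduced here.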

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""

View File

@@ -14,14 +14,25 @@ jobs:
- name: Install dependencies
run: |
set -e
# install doc dependencies
sudo apt update
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
python3 -m pip install lxml
cd docs
python -m pip install -r requirements.txt --user
cd openvino_sphinx_theme
python setup.py install --user
cd ../..
# install doxyrest
wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz
tar -xf doxyrest-2.1.3-linux-amd64.tar.xz
echo "$(pwd)/doxyrest-2.1.3-linux-amd64/bin/" >> $GITHUB_PATH
# install doxygen
mkdir doxygen
cd doxygen
git clone https://github.com/doxygen/doxygen.git
cd doxygen
git checkout Release_1_9_1
git checkout Release_1_9_2
mkdir build
cd build
cmake ..
@@ -32,15 +43,37 @@ jobs:
run: |
mkdir build
cd build
cmake -DENABLE_DOCS=ON ..
cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DNGRAPH_PYTHON_BUILD_ENABLE=ON -DCMAKE_BUILD_TYPE=Release ..
- name: Build doc
run: cmake --build . --target openvino_docs
run: |
cmake --build . --target sphinx_docs
working-directory: build
- name: Archive HTML
run: |
zip -r openvino_html.zip _build
working-directory: build/docs
- name: Run Pytest
run: |
pytest --doxygen="./build/docs/doxygen.log" \
--confcutdir="./docs/scripts/tests/" \
--html="./build/docs/_artifacts/doc-generation.html" \
--doxygen-strip="$(pwd)" \
--doxygen-xfail="./docs/doxygen-xfail.txt" \
--self-contained-html ./docs/scripts/tests/test_docs.py
- name: 'Upload doc'
- name: 'Upload test results'
if: always()
uses: actions/upload-artifact@v2
with:
name: openvino_doc_pytest
path: build/docs/_artifacts/
- name: 'Upload html'
if: github.event_name == 'push'
uses: actions/upload-artifact@v2
with:
name: openvino_doc
path: build/docs/html/
name: openvino_html
path: build/docs/openvino_html.zip

View File

@@ -10,10 +10,13 @@ jobs:
submodules: recursive
- name: Install clang-format-9
run: sudo apt --assume-yes install clang-format-9
run: |
sudo apt update
sudo apt --assume-yes install clang-format-9
- name: Install dependencies
run: |
sudo apt update
sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install --upgrade pip
python3 -m pip install -r ./inference-engine/ie_bridges/python/requirements.txt
@@ -50,7 +53,9 @@ jobs:
submodules: recursive
- name: Install ShellCheck
run: sudo apt --assume-yes install shellcheck
run: |
sudo apt update
sudo apt --assume-yes install shellcheck
- name: Install dependencies
run: |

View File

@@ -41,6 +41,7 @@ jobs:
pip install -r requirements.txt
pip install -r requirements_dev.txt
# requirements for CMake
sudo apt update
sudo apt --assume-yes install libusb-1.0-0-dev
working-directory: model-optimizer

.gitignore vendored
View File

@@ -2,6 +2,8 @@
_*
# but ensure we don't skip __init__.py
!__init__.py
# and sphinx documentation folders
!docs/_*
# developer tools
*.idea

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -90,7 +90,7 @@ function(build_ngraph)
ngraph_set(NGRAPH_PYTHON_BUILD_ENABLE OFF)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
if(OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-Wno-error=uninitialized -Wno-error=literal-conversion)
elseif(UNIX)
ie_add_compiler_flags(-Wno-error=maybe-uninitialized -Wno-error=return-type)

View File

@@ -1,5 +1,5 @@
# OpenVINO™ Toolkit
[![Stable release](https://img.shields.io/badge/version-2021.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.3)
[![Stable release](https://img.shields.io/badge/version-2021.4.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.4.2)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
@@ -42,7 +42,7 @@ Please report questions, issues and suggestions using:
---
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/opencv/open_model_zoo
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
[Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
[nGraph]:https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_DevGuide.html

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -17,7 +17,7 @@ if (ENABLE_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=gold")
elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
elseif(OV_COMPILER_IS_CLANG AND NOT WIN32)
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
@@ -35,7 +35,7 @@ if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_LINKER_FLAGS "-fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
if(OV_COMPILER_IS_CLANG AND NOT WIN32)
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
else()

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -23,7 +23,7 @@ if (CMAKE_BUILD_TYPE STREQUAL "Release")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
endif()
elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
elseif(OV_COMPILER_IS_CLANG)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(OV_COVERAGE_GCDA_DATA_DIRECTORY "${CMAKE_BINARY_DIR}")

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2020 Intel Corporation
# Copyright (C) 2020-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -56,7 +56,7 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "CMAKE_CXX_COMPILER_ID MATCHES ^(Apple)?Clang$; NOT WIN32" OFF)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG; NOT WIN32" OFF)
#
# Check features

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Target system specific flags
@@ -55,3 +55,9 @@ endif()
if(UNIX AND NOT APPLE)
set(LINUX ON)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
set(OV_COMPILER_IS_CLANG ON)
else()
set(OV_COMPILER_IS_CLANG OFF)
endif()

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# TBB_FOUND should not be set explicitly. It is defined automatically by CMake.

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -44,327 +44,212 @@ if(NOT ENABLE_DOCKER)
endforeach()
endif()
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation")
set(POT_DOCS_DIR "" CACHE PATH "Path to post-training-compression-tool documentation")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation")
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check dir.")
set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(POT_DOCS_DIR "" CACHE PATH "Path to post-training-compression-tool documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")
function(build_docs)
find_package(Doxygen REQUIRED dot)
find_package(PythonInterp 3 REQUIRED)
find_package(LATEX REQUIRED)
execute_process(
COMMAND ${PYTHON_EXECUTABLE} -m pip show lxml
RESULT_VARIABLE PIP_EXIT_CODE
OUTPUT_QUIET
)
if (NOT ${PIP_EXIT_CODE} EQUAL 0)
message(FATAL_ERROR "lxml package is not installed. Please use \"pip install lxml\".")
find_program(DOXYREST_EXECUTABLE NAMES doxyrest)
if (NOT DOXYREST_EXECUTABLE)
message(FATAL_ERROR "No doxyrest found. Documentation output is not available")
endif()
set(DOCS_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}")
set(DOXYGEN_DIR "${OpenVINO_MAIN_SOURCE_DIR}/docs/doxygen")
set(IE_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine")
set(PYTHON_API_IN "${IE_SOURCE_DIR}/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx")
set(PYTHON_API_OUT "${DOCS_BUILD_DIR}/python_api/ie_api.pyx")
set(C_API "${IE_SOURCE_DIR}/ie_bridges/c/include")
set(PLUGIN_API_DIR "${DOCS_BUILD_DIR}/IE_PLUGIN_DG")
set(DOCS_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/docs")
set(SCRIPTS_DIR "${DOCS_SOURCE_DIR}/scripts")
# API INPUT
set(NGRAPH_DIR "${OpenVINO_MAIN_SOURCE_DIR}/ngraph")
set(NGRAPH_PY_DIR "${NGRAPH_DIR}/python/src/ngraph/")
set(NGRAPH_CPP_DIR "${NGRAPH_DIR}/core/include/" "${NGRAPH_DIR}/frontend/onnx_import/include")
# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(DOXY_LAYOUT_SCRIPT "${DOXYGEN_DIR}/build_main_layout.py")
set(DOXY_LOG_SCRIPT "${DOXYGEN_DIR}/log.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")
# assets dir
set(ASSETS_DIR "${DOXYGEN_DIR}/assets")
# markdown docs
set(MARKDOWN_INPUT "${DOCS_BUILD_DIR}")
# header and footer
set(HEADER_SOURCE "${DOXYGEN_DIR}/header.html.in")
set(FOOTER_SOURCE "${DOXYGEN_DIR}/footer.html.in")
set(HEADER_BUILD "${DOCS_BUILD_DIR}/header.html")
set(FOOTER_BUILD "${DOCS_BUILD_DIR}/footer.html")
configure_file(${HEADER_SOURCE} ${HEADER_BUILD} @ONLY)
configure_file(${FOOTER_SOURCE} ${FOOTER_BUILD} @ONLY)
file(GLOB_RECURSE doc_source_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.svg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.svg")
configure_file(${PYTHON_API_IN} ${PYTHON_API_OUT} @ONLY)
set(NGRAPH_CPP_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.config")
set(IE_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_docs.config")
set(C_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_c_api.config")
set(PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.config")
set(NGRAPH_CPP_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.config")
set(IE_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_docs.config")
set(C_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_c_api.config")
set(PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.config")
set(NGRAPH_CPP_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_SOURCE "${DOXYGEN_DIR}/openvino_docs.xml")
set(C_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_c_api.xml")
set(PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.xml")
set(NGRAPH_CPP_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_BUILD "${DOCS_BUILD_DIR}/openvino_docs.xml")
set(C_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_c_api.xml")
set(PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.xml")
# IE C++ API
set(IE_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine")
# IE C API
set(IE_C_API "${IE_SOURCE_DIR}/ie_bridges/c/include")
# Preprocessing scripts
set(DOXY_MD_FILTER "${SCRIPTS_DIR}/doxy_md_filter.py")
set(PYNGRAPH_REF_SCRIPT "${SCRIPTS_DIR}/pyngraph_ref.py")
set(DOXY_LOG_SCRIPT "${SCRIPTS_DIR}/log.py")
set(PYX_FILTER "${SCRIPTS_DIR}/pyx_filter.py")
set(PREPARE_XML_SCRIPT "${SCRIPTS_DIR}/prepare_xml.py")
set(REMOVE_XML_SCRIPT "${SCRIPTS_DIR}/remove_xml.py")
set(COPY_IMAGES_SCRIPT "${SCRIPTS_DIR}/copy_images.py")
set(DOC_TEST_DIR "${SCRIPTS_DIR}/tests")
set(DOXYGEN_MAPPING_SCRIPT "${SCRIPTS_DIR}/create_mapping.py")
set(DOXYGEN_MAPPING_FILE "${DOCS_BUILD_DIR}/mapping.json")
# out dirs
set(OUTPUT_DIRECTORY "${DOCS_BUILD_DIR}/html")
set(IE_OUTPUT "${OUTPUT_DIRECTORY}")
set(C_OUTPUT "${OUTPUT_DIRECTORY}/ie_c_api")
set(PY_OUTPUT "${OUTPUT_DIRECTORY}/ie_python_api")
set(PLUGIN_OUTPUT "${OUTPUT_DIRECTORY}/ie_plugin_api")
set(NGRAPH_CPP_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_cpp_api")
set(NGRAPH_PY_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_python_api")
set(XML_OUTPUT "${DOCS_BUILD_DIR}/xml")
set(RST_OUTPUT "${DOCS_BUILD_DIR}/rst")
set(SPHINX_OUTPUT "${DOCS_BUILD_DIR}/_build")
# Tables of contents
configure_file(${NGRAPH_CPP_LAYOUT_SOURCE} ${NGRAPH_CPP_LAYOUT_BUILD} @ONLY)
configure_file(${NGRAPH_PY_LAYOUT_SOURCE} ${NGRAPH_PY_LAYOUT_BUILD} @ONLY)
configure_file(${IE_LAYOUT_SOURCE} ${IE_LAYOUT_BUILD} @ONLY)
configure_file(${OPENVINO_LAYOUT_SOURCE} ${OPENVINO_LAYOUT_BUILD} @ONLY)
configure_file(${C_LAYOUT_SOURCE} ${C_LAYOUT_BUILD} @ONLY)
configure_file(${PY_LAYOUT_SOURCE} ${PY_LAYOUT_BUILD} @ONLY)
configure_file(${PLUGIN_LAYOUT_SOURCE} ${PLUGIN_LAYOUT_BUILD} @ONLY)
# Sphinx folders, doxyrest templates and config
set(SPHINX_CONF_IN "${DOCS_SOURCE_DIR}/conf.py")
set(SPHINX_CONF_OUT "${RST_OUTPUT}/conf.py")
set(SPHINX_STATIC_IN "${DOCS_SOURCE_DIR}/_static")
set(SPHINX_STATIC_OUT "${RST_OUTPUT}/_static")
set(SPHINX_INDEX_IN "${DOCS_SOURCE_DIR}/index.rst")
set(SPHINX_INDEX_OUT "${RST_OUTPUT}/index.rst")
set(API_DOCS_IN "${DOCS_SOURCE_DIR}/api")
set(API_DOCS_OUT "${RST_OUTPUT}/api")
set(DOXYREST_IN "${DOCS_SOURCE_DIR}/doxyrest")
set(DOXYREST_OUT "${DOCS_BUILD_DIR}/doxyrest")
set(DOXYREST_SPHINX_IN "${DOCS_SOURCE_DIR}/doxyrest-sphinx")
set(DOXYREST_SPHINX_OUT "${RST_OUTPUT}/doxyrest-sphinx")
set(DOXYREST_CONFIG_IN "${DOCS_SOURCE_DIR}/doxyrest-config.lua")
set(DOXYREST_CONFIG_OUT "${DOCS_BUILD_DIR}/doxyrest-config.lua")
configure_file(${DOXYREST_CONFIG_IN} ${DOXYREST_CONFIG_OUT} @ONLY)
configure_file(${SPHINX_CONF_IN} ${SPHINX_CONF_OUT} @ONLY)
# Doxygen config files
configure_file(${NGRAPH_CPP_CONFIG_SOURCE} ${NGRAPH_CPP_CONFIG_BUILD} @ONLY)
configure_file(${NGRAPH_PY_CONFIG_SOURCE} ${NGRAPH_PY_CONFIG_BUILD} @ONLY)
configure_file(${IE_CONFIG_SOURCE} ${IE_CONFIG_BUILD} @ONLY)
configure_file(${C_CONFIG_SOURCE} ${C_CONFIG_BUILD} @ONLY)
configure_file(${PY_CONFIG_SOURCE} ${PY_CONFIG_BUILD} @ONLY)
configure_file(${PLUGIN_CONFIG_SOURCE} ${PLUGIN_CONFIG_BUILD} @ONLY)
# Doxygen config
set(DOXYFILE_SOURCE "${DOCS_SOURCE_DIR}/Doxyfile.config")
set(DOXYFILE_BUILD "${DOCS_BUILD_DIR}/Doxyfile.config")
configure_file(${DOXYFILE_SOURCE} ${DOXYFILE_BUILD} @ONLY)
# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OpenVINO_MAIN_SOURCE_DIR}
--output_dir=${DOCS_BUILD_DIR}/openvino
--exclude_dir=${DOCS_BUILD_DIR})
# nGraph C++ API
# include additional repositories
add_custom_target(ngraph_cpp_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_CPP_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_CPP_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# build with openvino notebooks
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
)
endif()
# nGraph Python API
if(GRAPH_CSV_DIR)
set(GRAPH_CSV_DIR_OUT "${RST_OUTPUT}/csv")
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory "${GRAPH_CSV_DIR}" "${GRAPH_CSV_DIR_OUT}"
)
endif()
add_custom_target(ngraph_py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy ${API_DOCS_IN}/api_reference.rst ${API_DOCS_OUT}/api_reference.rst
)
# C API
add_custom_target(c_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${C_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${C_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating C API Reference"
VERBATIM)
# Python API
add_custom_target(py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating Python API Reference"
VERBATIM)
add_custom_command(TARGET py_api
PRE_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${PYX_FILTER} ${PYTHON_API_OUT}
COMMENT "Pre-process Python API")
# Preprocess docs
add_custom_target(preprocess_docs
COMMENT "Pre-process docs"
VERBATIM)
# ovino doc files
file(GLOB_RECURSE ovino_doc_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg")
foreach(source_file ${ovino_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OpenVINO_MAIN_SOURCE_DIR}/${source_file}" "${DOCS_BUILD_DIR}/openvino/${source_file}")
endforeach()
if(ENABLE_PYTHON)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN}/ie_python_api ${API_DOCS_OUT}/ie_python_api
)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN}/ngraph_python_api ${API_DOCS_OUT}/ngraph_python_api
)
endif()
# omz doc files
if(EXISTS "${OMZ_DOCS_DIR}")
get_filename_component(OMZ_DOCS_DIR "${OMZ_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE omz_doc_files
LIST_DIRECTORIES true RELATIVE ${OMZ_DOCS_DIR}
"${OMZ_DOCS_DIR}/*.md"
"${OMZ_DOCS_DIR}/*.png"
"${OMZ_DOCS_DIR}/*.gif"
"${OMZ_DOCS_DIR}/*.jpg")
foreach(source_file ${omz_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OMZ_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/omz/${source_file}")
endforeach()
configure_file("${OMZ_DOCS_DIR}/omz_docs.xml" "${DOCS_BUILD_DIR}/omz_docs.xml" @ONLY)
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} ${OMZ_DOCS_DIR}/ci/prepare-documentation.py ${CMAKE_BINARY_DIR}/open_model_zoo)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${CMAKE_BINARY_DIR}/open_model_zoo
--output_dir=${DOCS_BUILD_DIR}/open_model_zoo)
endif()
# workbench doc files
if(EXISTS "${WORKBENCH_DOCS_DIR}")
get_filename_component(WORKBENCH_DOCS_DIR "${WORKBENCH_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE workbench_doc_files
LIST_DIRECTORIES true RELATIVE ${WORKBENCH_DOCS_DIR}
"${WORKBENCH_DOCS_DIR}/*.md"
"${WORKBENCH_DOCS_DIR}/*.png"
"${WORKBENCH_DOCS_DIR}/*.gif"
"${WORKBENCH_DOCS_DIR}/*.jpg")
foreach(source_file ${workbench_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${WORKBENCH_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/workbench/${source_file}")
endforeach()
configure_file("${WORKBENCH_DOCS_DIR}/docs/Workbench_DG/workbench_docs.xml" "${DOCS_BUILD_DIR}/workbench_docs.xml" @ONLY)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${WORKBENCH_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/workbench)
endif()
# pot doc files
if(EXISTS "${POT_DOCS_DIR}")
get_filename_component(POT_DOCS_DIR "${POT_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE pot_doc_files
LIST_DIRECTORIES true RELATIVE ${POT_DOCS_DIR}
"${POT_DOCS_DIR}/*.md"
"${POT_DOCS_DIR}/*.png"
"${POT_DOCS_DIR}/*.gif"
"${POT_DOCS_DIR}/*.jpg")
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${POT_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/pot)
endif()
foreach(source_file ${pot_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${POT_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/pot/${source_file}")
endforeach()
configure_file("${POT_DOCS_DIR}/docs/pot_docs.xml" "${DOCS_BUILD_DIR}/pot_docs.xml" @ONLY)
# ovms doc files
if(EXISTS "${OVMS_DOCS_DIR}")
get_filename_component(OVMS_DOCS_DIR "${OVMS_DOCS_DIR}" ABSOLUTE)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OVMS_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/ovms)
endif()
# gst doc files
if(EXISTS "${GST_DOCS_DIR}")
get_filename_component(GST_DOCS_DIR "${GST_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE gst_doc_files
LIST_DIRECTORIES true RELATIVE ${GST_DOCS_DIR}
"${GST_DOCS_DIR}/*.md"
"${GST_DOCS_DIR}/*.png"
"${GST_DOCS_DIR}/*.gif"
"${GST_DOCS_DIR}/*.jpg")
foreach(source_file ${gst_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${GST_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/gst/${source_file}")
endforeach()
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${GST_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/gst)
endif()
add_custom_target(preprocess_docs
COMMENT "Preprocess documentation"
VERBATIM)
# Preprocess docs
add_custom_command(TARGET preprocess_docs
PRE_BUILD
${commands}
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_LAYOUT_SCRIPT} --openvino ${OPENVINO_LAYOUT_BUILD}
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER} ${DOCS_BUILD_DIR}
COMMENT "Pre-process markdown and image links")
# IE dev guide and C++ API
add_custom_target(ie_docs
DEPENDS ngraph_cpp_api preprocess_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${IE_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${IE_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# Plugin API
add_custom_target(plugin_api
DEPENDS ngraph_cpp_api ie_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PLUGIN_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating Plugin API Reference"
VERBATIM)
# Umbrella OpenVINO target
add_custom_target(openvino_docs
DEPENDS ngraph_cpp_api ngraph_py_api c_api py_api ie_docs plugin_api
COMMENT "Generating OpenVINO documentation"
VERBATIM)
set_target_properties(openvino_docs ie_docs c_api py_api preprocess_docs plugin_api
ngraph_py_api ngraph_cpp_api
PROPERTIES FOLDER docs)
add_custom_command(TARGET openvino_docs
POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_LOG_SCRIPT} --log "${DOCS_BUILD_DIR}/ie_docs.log"
--include_omz $<BOOL:${OMZ_DOCS_DIR}>
--include_wb $<BOOL:${WORKBENCH_DOCS_DIR}>
--include_pot $<BOOL:${POT_DOCS_DIR}>
--include_gst $<BOOL:${GST_DOCS_DIR}>
COMMENT "Parse doxygen log to find errors."
${commands}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Preprocess documentation"
VERBATIM)
# added linkchecker
add_custom_target(doxygen_xml
DEPENDS preprocess_docs
COMMAND ${PYTHON_EXECUTABLE} ${REMOVE_XML_SCRIPT} ${XML_OUTPUT}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generate doxygen XML output"
VERBATIM)
# Post-process docs
add_custom_command(TARGET doxygen_xml
POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${PREPARE_XML_SCRIPT} ${XML_OUTPUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_IN} ${DOXYREST_OUT}
COMMAND ${DOXYREST_EXECUTABLE} -c ${DOXYREST_CONFIG_OUT}
COMMAND ${PYTHON_EXECUTABLE} ${COPY_IMAGES_SCRIPT} ${XML_OUTPUT} ${RST_OUTPUT}
COMMAND ${PYTHON_EXECUTABLE} ${DOXYGEN_MAPPING_SCRIPT} ${XML_OUTPUT} ${DOCS_BUILD_DIR} ${OpenVINO_MAIN_SOURCE_DIR}/../
COMMAND ${CMAKE_COMMAND} -E copy ${SPHINX_INDEX_IN} ${SPHINX_INDEX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_IN} ${DOXYREST_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_SPHINX_IN} ${DOXYREST_SPHINX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${SPHINX_STATIC_IN} ${SPHINX_STATIC_OUT}
COMMENT "Prepare xml"
VERBATIM)
add_custom_target(sphinx_docs
DEPENDS doxygen_xml
COMMAND sphinx-build -b html ${RST_OUTPUT} ${SPHINX_OUTPUT}
WORKING_DIRECTORY ${RST_OUTPUT}
VERBATIM)
set_target_properties(doxygen_xml sphinx_docs
PROPERTIES FOLDER docs)
if(EXISTS "${LINKCHECKER_PY}")
add_custom_target(docs_check
COMMAND ${PYTHON_EXECUTABLE} "${LINKCHECKER_PY}" -v "${DOCS_BUILD_DIR}/html/"
COMMENT "Check links in generated documentation"
WORKING_DIRECTORY "${DOCS_BUILD_DIR}"
VERBATIM)
set_target_properties(docs_check PROPERTIES FOLDER docs)
endif()
find_program(browser NAMES xdg-open)
if(browser)
add_custom_target(ie_docs_open
COMMAND ${browser} "${OpenVINO_MAIN_SOURCE_DIR}/docs/html/index.html"
DEPENDS ie_docs
COMMAND ${browser} "${SPHINX_OUTPUT}/index.html"
DEPENDS sphinx_docs
COMMENT "Open OpenVINO documentation"
VERBATIM)
set_target_properties(ie_docs_open PROPERTIES FOLDER docs)

View File

@@ -58,7 +58,7 @@ PROJECT_LOGO =
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.
OUTPUT_DIRECTORY = "@OUTPUT_DIRECTORY@"
OUTPUT_DIRECTORY = "@DOCS_BUILD_DIR@"
# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and
@@ -262,6 +262,8 @@ TAB_SIZE = 4
# a double escape (\\{ and \\})
ALIASES = "ref_ie{1}=@ref InferenceEngine::\1 \"\1\""
ALIASES += sphinxdirective="\n\xmlonly<sphinxdirective>"
ALIASES += endsphinxdirective="</sphinxdirective>\endxmlonly"
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
@@ -317,7 +319,7 @@ OPTIMIZE_OUTPUT_SLICE = NO
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.
EXTENSION_MAPPING =
EXTENSION_MAPPING = pyx=Python
# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
@@ -461,7 +463,7 @@ LOOKUP_CACHE_SIZE = 0
# normally produced when WARNINGS is set to YES.
# The default value is: NO.
EXTRACT_ALL = NO
EXTRACT_ALL = YES
# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.
@@ -556,7 +558,7 @@ INTERNAL_DOCS = NO
# (including Cygwin) ands Mac users are advised to set this option to NO.
# The default value is: system dependent.
CASE_SENSE_NAMES = YES
CASE_SENSE_NAMES = NO
# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the
@@ -811,7 +813,7 @@ WARN_FORMAT = "$file:$line: $text"
# messages should be written. If left blank the output is written to standard
# error (stderr).
WARN_LOGFILE = "@DOCS_BUILD_DIR@/ie_docs.log"
WARN_LOGFILE = "@DOCS_BUILD_DIR@/doxygen.log"
#---------------------------------------------------------------------------
# Configuration options related to the input files
@@ -823,8 +825,25 @@ WARN_LOGFILE = "@DOCS_BUILD_DIR@/ie_docs.log"
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = "@DOCS_BUILD_DIR@" \
"@IE_SOURCE_DIR@/include"
INPUT = "@MARKDOWN_INPUT@" \
"@IE_SOURCE_DIR@/include/" \
"@IE_C_API@" \
"@OpenVINO_MAIN_SOURCE_DIR@/openvino/itt/include/openvino/" \
"@IE_SOURCE_DIR@/src/plugin_api/" \
"@IE_SOURCE_DIR@/src/transformations/include/" \
"@IE_SOURCE_DIR@/src/transformations/include/ngraph_ops/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/common_optimizations/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/control_flow/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/low_precision/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/op_conversions/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/opset_conversions/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/rt_info/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/smart_reshape/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/common_optimizations" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/utils" \
"@NGRAPH_DIR@/core/include/" \
"@NGRAPH_DIR@/frontend/onnx_import/include \
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@@ -855,7 +874,8 @@ FILE_PATTERNS = *.md \
*.cpp \
*.c \
*.hpp \
*.h
*.h \
*.c++
# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
@@ -891,7 +911,11 @@ EXCLUDE_PATTERNS = */temp/* \
*/tests/* \
*/openvx/* \
*/thirdparty/* \
*/IE_PLUGIN_DG/*
"@DOXYREST_OUT@" \
"@XML_OUTPUT@" \
"@RST_OUTPUT@" \
"@SPHINX_OUTPUT@" \
*/build/open_model_zoo/*
# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
@@ -943,20 +967,61 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
InferenceEngine::parallel_* \
NOMINMAX \
TBB_PREVIEW_NUMA_SUPPORT \
IE_THREAD_*
IE_THREAD_* \
INFERENCE_ENGINE_C_API_EXTERN \
INFERENCE_ENGINE_C_API \
INFERENCE_ENGINE_C_API_CALLBACK \
IE_NODISCARD \
InferenceEngine::details \
ie_api::BlobBuffer \
*impl* \
*device_name* \
*num_requests* \
*exec_net* \
*c_config* \
*ie_core_impl* \
*plugin_impl* \
*extension_str* \
*buffer* \
*__cinit__* \
ngraph::utils \
ie_core_version \
ie_core_versions \
ie_available_devices \
ie_core_versions \
input_shapes \
colorformat_e \
layout_e \
dimensions \
ie_param_config \
struct_desc \
ie_param \
ie_complete_call_back \
IEStatusCode \
input_shape \
struct_desc
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).
EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@"
EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/include" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src/CMakeLists.txt" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/CMakeLists.txt" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/transformations" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/shared_tests_instances/" \
"@CMAKE_CURRENT_SOURCE_DIR@/snippets"
"@IE_SOURCE_DIR@/tests/functional/plugin/shared/include"
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.
EXAMPLE_PATTERNS =
EXAMPLE_PATTERNS = *.cpp \
*.hpp
# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
@@ -969,7 +1034,7 @@ EXAMPLE_RECURSIVE = YES
# that contain images that are to be included in the documentation (see the
# \image command).
IMAGE_PATH = .
IMAGE_PATH = "@DOCS_BUILD_DIR@"
# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
@@ -1175,7 +1240,7 @@ IGNORE_PREFIX =
# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.
GENERATE_HTML = YES
GENERATE_HTML = NO
# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
@@ -1183,7 +1248,7 @@ GENERATE_HTML = YES
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_OUTPUT = @OUTPUT_DIRECTORY@
HTML_OUTPUT = html
# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
@@ -1210,7 +1275,7 @@ HTML_FILE_EXTENSION = .html
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_HEADER = @HEADER_BUILD@
HTML_HEADER =
# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
@@ -1220,7 +1285,7 @@ HTML_HEADER = @HEADER_BUILD@
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FOOTER = @FOOTER_BUILD@
HTML_FOOTER =
# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
@@ -2049,7 +2114,7 @@ MAN_LINKS = NO
# captures the structure of the code including all documentation.
# The default value is: NO.
GENERATE_XML = NO
GENERATE_XML = YES
# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
@@ -2057,7 +2122,7 @@ GENERATE_XML = NO
# The default directory is: xml.
# This tag requires that the tag GENERATE_XML is set to YES.
XML_OUTPUT = xml
XML_OUTPUT = "@XML_OUTPUT@"
# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
# listings (including syntax highlighting and cross-referencing information) to
@@ -2066,7 +2131,7 @@ XML_OUTPUT = xml
# The default value is: YES.
# This tag requires that the tag GENERATE_XML is set to YES.
XML_PROGRAMLISTING = YES
XML_PROGRAMLISTING = NO
# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
# namespace members in file scope as well, matching the HTML output.
@@ -2223,7 +2288,41 @@ PREDEFINED = "INFERENCE_ENGINE_API_CLASS=" \
"INFERENCE_ENGINE_NN_BUILDER_API_CLASS=" \
"INFERENCE_ENGINE_NN_BUILDER_DEPRECATED(x)=" \
"INFERENCE_ENGINE_INTERNAL(x)=" \
"INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS(x)="
"INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS(x)=" \
"__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
"INFERENCE_ENGINE_C_API_CALLBACK=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
"_WIN32" \
"INFERENCE_ENGINE_API=" \
"INFERENCE_ENGINE_API_CPP=" \
"INFERENCE_ENGINE_API_CLASS=" \
"INFERENCE_ENGINE_DEPRECATED=" \
"inference_engine_transformations_EXPORTS" \
"TRANSFORMATIONS_API=" \
"NGRAPH_HELPER_DLL_EXPORT=" \
"NGRAPH_HELPER_DLL_IMPORT=" \
"IE_SUPPRESS_DEPRECATED_START=" \
"IE_SUPPRESS_DEPRECATED_END=" \
"_IE_SUPPRESS_DEPRECATED_START_MSVC=" \
"_IE_SUPPRESS_DEPRECATED_END_MSVC=" \
"_IE_SUPPRESS_DEPRECATED_START_GCC=" \
"_IE_SUPPRESS_DEPRECATED_END_GCC=" \
"IE_THREAD=IE_THREAD_TBB" \
"NGRAPH_RTTI_DECLARATION=" \
"__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
"__GNUC__=" \
"_WIN32"
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The

View File

@@ -12,7 +12,7 @@ Representation (IR) for this model.
This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to
plug in your own implementation for an existing or a completely new operation.
> **NOTE:** *Layer* — The legacy term for an *operation* which came from Caffe\* framework. Currently it is not used.
> **NOTE**: *Layer* — The legacy term for an *operation* which came from Caffe\* framework. Currently it is not used.
> Refer to the [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
> for more information on the topic.
@@ -44,7 +44,7 @@ plugins to support inference of this operation using a particular target hardwar
To see the operations that are supported by each device plugin for the Inference Engine, refer to the
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
> **NOTE:** If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> **NOTE**: If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices allowing the unsupported operations on one device to "fallback" to
> run on another device (e.g., CPU) that does support those operations.
@@ -63,7 +63,7 @@ operation and uses corresponding operation class to update graph node attributes
operation. Refer to the "Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for detailed instructions on how to implement it.
> **NOTE:** In some cases you may need to implement some transformation to support the operation. This topic is covered in the "Graph Transformation Extensions" section of [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
> **NOTE**: In some cases you may need to implement some transformation to support the operation. This topic is covered in the "Graph Transformation Extensions" section of [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
## Custom Operations Extensions for the Inference Engine
@@ -131,15 +131,26 @@ Firstly, open the model in the TensorBoard or other TensorFlow* model visualizat
batch dimension because the value for the batch dimension is not hardcoded in the model. Model Optimizer needs to set all
dynamic dimensions to some specific value to create the IR; therefore, specify the command line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
described in the [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to
[Converting a Model Using General Conversion Parameters](../MO_DG/prepare_model/convert_model/Converting_Model_General.md)
and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
described in the [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command line parameters used for the model conversion.
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
```
> **NOTE:** This conversion guide is applicable for the 2021.3 release of OpenVINO and that starting from 2021.4
@sphinxdirective
.. tab:: Package, Docker, open-source installation
.. code-block:: sh
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
python3 mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
.. tab:: pip installation
.. code-block:: sh
mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
@endsphinxdirective
> **NOTE**: This conversion guide is applicable to the 2021.3 release of OpenVINO; starting from the 2021.4 release,
> OpenVINO supports this model out of the box.
Model Optimizer produces the following error:
@@ -221,7 +232,7 @@ following snippet provides two extractors: one for "IFFT2D", another one for "FF
@snippet FFT_ext.py fft_ext:extractor
> **NOTE:** The graph is in inconsistent state after extracting node attributes because according to original operation
> **NOTE**: The graph is in an inconsistent state after extracting node attributes because, according to the original
> "IFFT2D" operation semantics, it should have an input consuming a tensor of complex numbers, but the extractor instantiated an
> "FFT" operation, which expects a real tensor with a specific layout. The inconsistency will be resolved while
> applying the front phase transformations discussed below.
@@ -239,7 +250,7 @@ information on how this type of transformation works. The code snippet should be
@snippet Complex.py complex:transformation
> **NOTE:** The graph is in inconsistent state because the "ComplexAbs" operation consumes complex value tensor but
> **NOTE**: The graph is in an inconsistent state because the "ComplexAbs" operation consumes a complex value tensor, but
> "FFT" produces a real value tensor.
Now let's implement a transformation which replaces the "ComplexAbs" operation with a sub-graph of primitive operations
@@ -257,15 +268,27 @@ The implementation should be saved to the file `mo_extensions/front/tf/ComplexAb
@snippet ComplexAbs.py complex_abs:transformation
Now it is possible to convert the model using the following command line:
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
```
@sphinxdirective
.. tab:: Package, Docker, open-source installation
.. code-block:: sh
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
python3 mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
.. tab:: pip installation
.. code-block:: sh
mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
@endsphinxdirective
The sub-graph corresponding to the originally non-supported one is depicted in the image below:
![Converted sub-graph](img/converted_subgraph.png)
> **NOTE:** Model Optimizer performed conversion of the model from NHWC to NCHW layout that is why the dimension with
> **NOTE**: Model Optimizer performed conversion of the model from NHWC to NCHW layout that is why the dimension with
> the value 2 moved to another position.
### Inference Engine Extension Implementation
@@ -350,7 +373,7 @@ python3 mri_reconstruction_demo.py \
## Converting Models:
- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex:transformation]

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [complex_abs:transformation]

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# ! [fft_ext:extractor]

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [fft:operation]

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#! [mri_demo:demo]

View File

@@ -10,10 +10,14 @@ The sections below contain detailed list of changes made to the Inference Engine
### Deprecated API
**InferenceEngine::Parameter**
* InferenceEngine::Parameter(const std::shared_ptr<ngraph::Variant>&)
* InferenceEngine::Parameter(std::shared_ptr<ngraph::Variant>& var)
* std::shared_ptr<ngraph::Variant> InferenceEngine::Parameter::asVariant() const
* InferenceEngine::Parameter::operator std::shared_ptr<ngraph::Variant>() const
**GPU plugin configuration keys**
* KEY_CLDNN_NV12_TWO_INPUTS GPU plugin option. Use KEY_GPU_NV12_TWO_INPUTS instead
* KEY_CLDNN_PLUGIN_PRIORITY GPU plugin option. Use KEY_GPU_PLUGIN_PRIORITY instead
* KEY_CLDNN_PLUGIN_THROTTLE GPU plugin option. Use KEY_GPU_PLUGIN_THROTTLE instead
@@ -24,6 +28,38 @@ The sections below contain detailed list of changes made to the Inference Engine
* KEY_TUNING_MODE GPU plugin option
* KEY_TUNING_FILE GPU plugin option
**InferenceEngine::IInferRequest**
* IInferRequest interface is deprecated, use InferRequest wrapper:
* Constructor for InferRequest from IInferRequest::Ptr is deprecated
* Cast operator for InferRequest to IInferRequest shared pointer is deprecated
**InferenceEngine::ICNNNetwork**
* ICNNNetwork interface is deprecated by means of deprecation of all its methods, use CNNNetwork wrapper
* CNNNetwork methods working with ICNNNetwork are deprecated:
* Cast to ICNNNetwork shared pointer
* Cast to reference to ICNNNetwork interface
* Constructor from ICNNNetwork shared pointer
**InferenceEngine::IExecutableNetwork**
* IExecutableNetwork is deprecated, use ExecutableNetwork wrappers:
* Constructor of ExecutableNetwork from IExecutableNetwork shared pointer is deprecated
* The following ExecutableNetwork methods are deprecated:
* ExecutableNetwork::reset
* Cast operator to IExecutableNetwork shared pointer
* ExecutableNetwork::CreateInferRequestPtr - use ExecutableNetwork::CreateInferRequest instead
**Extensions API**
* InferenceEngine::make_so_pointer which is used to create Extensions library is replaced by std::make_shared<Extension>(..)
* InferenceEngine::IExtension::Release is deprecated with no replacement
* Use the IE_DEFINE_EXTENSION_CREATE_FUNCTION helper macro instead of an explicit declaration of the CreateExtension function, which creates the extension.
**Other changes**
* Version::ApiVersion structure is deprecated, Inference Engine does not have API version anymore
* LowLatency - use lowLatency2 instead
* CONFIG_KEY(DUMP_EXEC_GRAPH_AS_DOT) - use InferenceEngine::ExecutableNetwork::GetExecGraphInfo::serialize() instead
* Core::ImportNetwork with no device - pass device name explicitly.
* details::InferenceEngineException - use InferenceEngine::Exception and its derivatives instead.
## 2021.3
### New API

View File

@@ -1,50 +1,57 @@
# Bfloat16 Inference {#openvino_docs_IE_DG_Bfloat16Inference}
## Disclaimer
## Bfloat16 Inference Usage (C++)
Inference Engine with the bfloat16 inference implemented on CPU must support the native `avx512_bf16` instruction and therefore the bfloat16 data format.
It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native `avx512_bf16` instruction usage.
@sphinxdirective
.. raw:: html
## Introduction
<div id="switcher-cpp" class="switcher-anchor">C++</div>
@endsphinxdirective
### Disclaimer
To run bfloat16 inference with the Inference Engine natively, the CPU must support the *avx512_bf16* instruction and therefore the bfloat16 data format. It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native *avx512_bf16* instruction usage.
### Introduction
Bfloat16 computations (referred to as BF16) is the Brain Floating-Point format with 16 bits. This is a truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format FP32. BF16 preserves 8 exponent bits as FP32 but reduces precision of the sign and mantissa from 24 bits to 8 bits.
![bf16_format]
Preserving the exponent bits keeps BF16 to the same range as the FP32 (~1e-38 to ~3e38). This simplifies conversion between two data types: you just need to skip or flush to zero 16 low bits.
Truncated mantissa leads to occasionally less precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range.
Another useful feature of BF16 is possibility to encode INT8 in BF16 without loss of accuracy, because INT8 range completely fits in BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly without intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
Preserving the exponent bits keeps BF16 in the same range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: you just need to skip or flush to zero the 16 low bits. The truncated mantissa occasionally leads to lower precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than to the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range. Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range completely fits in the BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly, without an intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
See the ["BFLOAT16 Hardware Numerics Definition" white paper"](https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.
See the [BFLOAT16 Hardware Numerics Definition white paper](https://software.intel.com/content/dam/develop/external/us/en/documents/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.
There are two ways to check if CPU device can support bfloat16 computations for models:
1. Query the instruction set via system `lscpu | grep avx512_bf16` or `cat /proc/cpuinfo | grep avx512_bf16`.
2. Use [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:
1. Query the instruction set using one of these system commands:
* `lscpu | grep avx512_bf16`
* `cat /proc/cpuinfo | grep avx512_bf16`
2. Use the [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:
@snippet snippets/Bfloat16Inference0.cpp part0
Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the significant number of layers in BF16 computation mode.
The current Inference Engine solution for bfloat16 inference uses the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of a significant number of layers in BF16 computation mode.
## Lowering Inference Precision
### Lowering Inference Precision
Lowering precision to increase performance is [widely used](https://software.intel.com/content/www/us/en/develop/articles/lower-numerical-precision-deep-learning-inference-and-training.html) for optimization of inference. The bfloat16 data type usage on CPU for the first time opens the possibility of default optimization approach.
The embodiment of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while maintaining the accuracy of calculations within the acceptable range.
Lowering precision to increase performance is [widely used](https://software.intel.com/content/www/us/en/develop/articles/lower-numerical-precision-deep-learning-inference-and-training.html) for optimization of inference. The bfloat16 data type usage on CPU for the first time opens the possibility of a default optimization approach. The embodiment of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while maintaining the accuracy of calculations within an acceptable range.
Using Bfloat16 precision provides the following performance benefits:
Bfloat16 data usage provides the following benefits that increase performance:
1. Faster multiplication of two BF16 numbers because of shorter mantissa of bfloat16 data.
2. No need to support denormals and handling exceptions as this is a performance optimization.
3. Fast conversion of float32 to bfloat16 and vice versa.
4. Reduced size of data in memory, as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred, as a result, reduced data transition time.
For default optimization on CPU, source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:
For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES` in the `PluginConfigParams` for `GetConfig()`. The code below demonstrates how to check if the key is set:
@snippet snippets/Bfloat16Inference1.cpp part1
To disable BF16 internal transformations, set the `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is without modifications with precisions that were set on each layer edge.
To disable BF16 internal transformations in C++ API, set the `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is without modifications with precisions that were set on each layer edge.
@snippet snippets/Bfloat16Inference2.cpp part2
To disable BF16 in C API:
```
@@ -52,15 +59,16 @@ ie_config_t config = { "ENFORCE_BF16", "NO", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```
An exception with message `Platform doesn't support BF16 format` is formed in case of setting `KEY_ENFORCE_BF16` to `YES` on CPU without native BF16 support or BF16 simulation mode.
An exception with the message `Platform doesn't support BF16 format` is thrown if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support or BF16 simulation mode.
Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.
Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.
## Bfloat16 Simulation Mode
### Bfloat16 Simulation Mode
Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee an adequate performance.
To enable Bfloat16 simulator:
* In [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee good performance. Note that the CPU must still support the AVX-512 extensions.
To enable the simulation of Bfloat16:
* In the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
* In C++ API, set `KEY_ENFORCE_BF16` to `YES`
* In C API:
```
@@ -68,25 +76,139 @@ ie_config_t config = { "ENFORCE_BF16", "YES", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```
## Performance Counters
### Performance Counters
Information about layer precision is stored in the performance counters that are available from the Inference Engine API. The layers have the following marks:
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. The layers have the following marks:
* Suffix `BF16` for layers that had bfloat16 data type input and were computed in BF16 precision
* Suffix `FP32` for layers computed in 32-bit precision
For example, the performance counters table for the Inception model can look as follows:
```
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
```
The `execType` column of the table includes inference primitives with specific suffixes.
The **execType** column of the table includes inference primitives with specific suffixes.
## Bfloat16 Inference Usage (Python)
@sphinxdirective
.. raw:: html
<div id="switcher-python" class="switcher-anchor">Python</div>
@endsphinxdirective
### Disclaimer
To run bfloat16 inference with the Inference Engine natively, the CPU must support the *avx512_bf16* instruction and therefore the bfloat16 data format. It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native *avx512_bf16* instruction usage.
### Introduction
Bfloat16 computations (referred to as BF16) is the Brain Floating-Point format with 16 bits. This is a truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format FP32. BF16 preserves 8 exponent bits as FP32 but reduces precision of the sign and mantissa from 24 bits to 8 bits.
![bf16_format]
Preserving the exponent bits keeps BF16 in the same range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: you just need to skip or flush to zero the 16 low bits. The truncated mantissa occasionally leads to lower precision, but according to investigations, neural networks are more sensitive to the size of the exponent than to the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range. Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range completely fits in the BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly, without an intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
See the [BFLOAT16 Hardware Numerics Definition white paper](https://software.intel.com/content/dam/develop/external/us/en/documents/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.
There are two ways to check if CPU device can support bfloat16 computations for models:
1. Query the instruction set using one of these system commands:
* `lscpu | grep avx512_bf16`
* `cat /proc/cpuinfo | grep avx512_bf16`
2. Use the Query API with METRIC_KEY(OPTIMIZATION_CAPABILITIES), which should return BF16 in the list of CPU optimization options:
```python
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(path_to_xml_file)
cpu_caps = ie.get_metric(metric_name="OPTIMIZATION_CAPABILITIES", device_name="CPU")
```
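Building on the query above, a minimal, hypothetical sketch of acting on the returned capabilities list (reusing `ie`, `net`, and `cpu_caps` from the previous snippet, with `path_to_xml_file` as a placeholder) might look like this:
```python
if "BF16" in cpu_caps:
    # The CPU reports native bfloat16 support, so request BF16 execution explicitly
    exec_net = ie.load_network(network=net, device_name="CPU",
                               config={"ENFORCE_BF16": "YES"})
else:
    # Fall back to the default FP32 path
    exec_net = ie.load_network(network=net, device_name="CPU")
```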
The current Inference Engine solution for bfloat16 inference uses the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of a significant number of layers in BF16 computation mode.
### Lowering Inference Precision
Lowering precision to increase performance is widely used for optimization of inference. The bfloat16 data type usage on CPU for the first time opens the possibility of a default optimization approach. The embodiment of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while maintaining the accuracy of calculations within an acceptable range.
Using Bfloat16 precision provides the following performance benefits:
1. Faster multiplication of two BF16 numbers because of shorter mantissa of bfloat16 data.
2. No need to support denormals and handling exceptions as this is a performance optimization.
3. Fast conversion of float32 to bfloat16 and vice versa.
4. Reduced size of data in memory, as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred, as a result, reduced data transition time.
For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, ENFORCE_BF16 is set to YES. The code below demonstrates how to check if the key is set:
```python
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(path_to_xml_file)
exec_net = ie.load_network(network=net, device_name="CPU")
exec_net.get_config("ENFORCE_BF16")
```
To enable BF16 internal transformations, set the key "ENFORCE_BF16" to "YES" in the ExecutableNetwork configuration.
```python
bf16_config = {"ENFORCE_BF16" : "YES"}
exec_net = ie.load_network(network=net, device_name="CPU", config = bf16_config)
```
To disable BF16 internal transformations, set the key "ENFORCE_BF16" to "NO". In this case, the model infers as is without modifications with precisions that were set on each layer edge.
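For completeness, a minimal sketch of the disabling call (mirroring the enabling example above and reusing `ie` and `net`) could look like:
```python
# Explicitly turn the FP32/FP16 -> BF16 conversion off for this network
bf16_config = {"ENFORCE_BF16": "NO"}
exec_net = ie.load_network(network=net, device_name="CPU", config=bf16_config)
```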
An exception with the message `Platform doesn't support BF16 format` is thrown if "ENFORCE_BF16" is set to "YES" on a CPU without native BF16 support or BF16 simulation mode.
Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.
### Bfloat16 Simulation Mode
Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native avx512_bf16 instruction. The simulator does not guarantee good performance. Note that the CPU must still support the AVX-512 extensions.
#### To Enable the Simulation of Bfloat16
* In the Benchmark App, add the `-enforcebf16=true` option
* In Python, use the following code as an example:
```python
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(path_to_xml_file)
bf16_config = {"ENFORCE_BF16" : "YES"}
exec_net = ie.load_network(network=net, device_name="CPU", config=bf16_config)
```
### Performance Counters
Information about layer precision is stored in the performance counters that are available from the Inference Engine API. The layers have the following marks:
* Suffix *BF16* for layers that had bfloat16 data type input and were computed in BF16 precision
* Suffix *FP32* for layers computed in 32-bit precision
For example, the performance counters table for the Inception model can look as follows:
```
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
```
The **execType** column of the table includes inference primitives with specific suffixes.
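The counters themselves can be read through the Python API. The following is a minimal, illustrative sketch, assuming an `exec_net` loaded as in the examples above, an `inputs` dictionary of input blobs as a placeholder, and that `InferRequest.get_perf_counts()` returns per-layer statistics as in other Inference Engine Python samples:
```python
# Run one inference so that the per-layer counters are populated
request = exec_net.requests[0]
request.infer(inputs)  # 'inputs' is a placeholder dictionary of input blobs

# get_perf_counts() returns a dictionary: layer name -> execution statistics
for layer_name, stats in request.get_perf_counts().items():
    print("{:<30} {:<10} layerType: {:<16} realTime: {:<8} execType: {}".format(
        layer_name, stats["status"], stats["layer_type"],
        stats["real_time"], stats["exec_type"]))
```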
[bf16_format]: img/bf16_format.png

View File

@@ -1,121 +1,52 @@
# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
> **NOTE:** [Intel® System Studio](https://software.intel.com/content/www/us/en/develop/tools/oneapi/commercial-base-iot.html) (click "Intel® System Studio Users" tab) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
@sphinxdirective
This Guide provides an overview of the Inference Engine describing the typical workflow for performing inference of a pre-trained and optimized deep learning model and a set of sample applications.
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_IE_DG_Integrate_with_customer_application_new_API
openvino_docs_deployment_optimization_guide_dldt_optimization_guide
openvino_docs_IE_DG_Device_Plugins
Direct ONNX Format Support <openvino_docs_IE_DG_ONNX_Support>
openvino_docs_IE_DG_Int8Inference
openvino_docs_IE_DG_Bfloat16Inference
openvino_docs_IE_DG_DynamicBatching
openvino_docs_IE_DG_ShapeInference
openvino_docs_IE_DG_Model_caching_overview
openvino_docs_IE_DG_Extensibility_DG_Intro
openvino_docs_IE_DG_Memory_primitives
openvino_docs_IE_DG_network_state_intro
openvino_docs_IE_DG_API_Changes
openvino_docs_IE_DG_Known_Issues_Limitations
openvino_docs_IE_DG_Glossary
@endsphinxdirective
> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in runtime using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).
## Introduction
Inference Engine is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the Inference Engine API to read the Intermediate Representation (IR), ONNX and execute the model on devices.
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.
Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries are the primary implementation, C libraries and Python bindings are also available.
![](img/BASIC_FLOW_IE_C.svg)
For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.
\\* _nGraph_ is the internal graph representation in the OpenVINO™ toolkit. Use it to [build a model from source code](https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_build_function.html).
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
## Video
For complete API Reference, see the [Inference Engine API References](./api_references.html) section.
@sphinxdirective
Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
.. list-table::
## Modules in the Inference Engine component
### Core Inference Engine Libraries
* - .. raw:: html
Your application must link to the core Inference Engine libraries:
* Linux* OS:
- `libinference_engine.so`, which depends on `libinference_engine_transformations.so`, `libtbb.so`, `libtbbmalloc.so` and `libngraph.so`
* Windows* OS:
- `inference_engine.dll`, which depends on `inference_engine_transformations.dll`, `tbb.dll`, `tbbmalloc.dll` and `ngraph.dll`
* macOS*:
- `libinference_engine.dylib`, which depends on `libinference_engine_transformations.dylib`, `libtbb.dylib`, `libtbbmalloc.dylib` and `libngraph.dylib`
The required C++ header files are located in the `include` directory.
This library contains the classes to:
* Create Inference Engine Core object to work with devices and read network (InferenceEngine::Core)
* Manipulate network information (InferenceEngine::CNNNetwork)
* Execute and pass inputs and outputs (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest)
### Plugin Libraries to Read a Network Object
Starting from the 2020.4 release, Inference Engine introduced a concept of `CNNNetwork` reader plugins. Such plugins can be loaded automatically and dynamically by Inference Engine at runtime, depending on the file format:
* Linux* OS:
- `libinference_engine_ir_reader.so` to read a network from IR
- `libinference_engine_onnx_reader.so` to read a network from ONNX model format
* Windows* OS:
- `inference_engine_ir_reader.dll` to read a network from IR
- `inference_engine_onnx_reader.dll` to read a network from ONNX model format
### Device-Specific Plugin Libraries
For each supported target device, Inference Engine provides a plugin — a DLL/shared library that contains complete implementation for inference on this particular device. The following plugins are available:
| Plugin | Device Type |
| ------- | ----------------------------- |
|CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
|GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
|MYRIAD | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
|GNA | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, Intel&reg; Core&trade; i3-1000G4 Processor |
|HETERO | Automatic splitting of a network inference between several devices (for example if a device doesn't support certain layers)|
|MULTI | Simultaneous inference of the same network on several devices in parallel|
The table below shows the plugin libraries and additional dependencies for Linux, Windows and macOS platforms.
| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows | Library name for macOS | Dependency libraries for macOS |
|--------|-----------------------------|-------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|------------------------------|---------------------------------------------|
| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` | `libMKLDNNPlugin.so` | `inference_engine_lp_transformations.dylib` |
| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` | Is not supported | - |
| MYRIAD | `libmyriadPlugin.so` | `libusb.so`, | `myriadPlugin.dll` | `usb.dll` | `libmyriadPlugin.so` | `libusb.dylib` |
| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - |
| GNA | `libGNAPlugin.so` | `libgna.so`, | `GNAPlugin.dll` | `gna.dll` | Is not supported | - |
| HETERO | `libHeteroPlugin.so` | Same as for selected plugins | `HeteroPlugin.dll` | Same as for selected plugins | `libHeteroPlugin.so` | Same as for selected plugins |
| MULTI | `libMultiDevicePlugin.so` | Same as for selected plugins | `MultiDevicePlugin.dll` | Same as for selected plugins | `libMultiDevicePlugin.so` | Same as for selected plugins |
> **NOTE**: All plugin libraries also depend on core Inference Engine libraries.
Make sure those libraries are in your computer's path or in the place you pointed to in the plugin loader. Make sure each plugin's related dependencies are in the:
* Linux: `LD_LIBRARY_PATH`
* Windows: `PATH`
* macOS: `DYLD_LIBRARY_PATH`
On Linux and macOS, use the script `bin/setupvars.sh` to set the environment variables.
On Windows, run the `bin\setupvars.bat` batch file to set the environment variables.
To learn more about supported devices and corresponding plugins, see the [Supported Devices](supported_plugins/Supported_Devices.md) chapter.
## Common Workflow for Using the Inference Engine API
The common workflow contains the following steps:
1. **Create Inference Engine Core object** - Create an `InferenceEngine::Core` object to work with different devices; all device plugins are managed internally by the `Core` object. Register extensions with custom nGraph operations (`InferenceEngine::Core::AddExtension`).
2. **Read the Intermediate Representation** - Using the `InferenceEngine::Core` class, read an Intermediate Representation file into an object of the `InferenceEngine::CNNNetwork` class. This class represents the network in the host memory.
3. **Prepare input and output formats** - After loading the network, specify the input and output precision and layout on the network. For this specification, use `InferenceEngine::CNNNetwork::getInputsInfo()` and `InferenceEngine::CNNNetwork::getOutputsInfo()`.
4. Pass per-device loading configurations specific to the target device (`InferenceEngine::Core::SetConfig`), and register extensions for this device (`InferenceEngine::Core::AddExtension`).
5. **Compile and Load Network to device** - Use the `InferenceEngine::Core::LoadNetwork()` method with specific device (e.g. `CPU`, `GPU`, etc.) to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation.
6. **Set input data** - With the network loaded, you have an `InferenceEngine::ExecutableNetwork` object. Use this object to create an `InferenceEngine::InferRequest` in which you signal the input buffers to use for input and output. Specify a device-allocated memory and copy it into the device memory directly, or tell the device to use your application memory to save a copy.
7. **Execute** - With the input and output memory now defined, choose your execution mode:
* Synchronously - `InferenceEngine::InferRequest::Infer()` method. Blocks until inference is completed.
* Asynchronously - `InferenceEngine::InferRequest::StartAsync()` method. Check status with the `InferenceEngine::InferRequest::Wait()` method (0 timeout), wait, or specify a completion callback.
8. **Get the output** - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the `InferenceEngine::IInferRequest::GetBlob()` method. A minimal Python sketch of these steps is shown below.
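The same workflow can also be expressed with the Python bindings. The following is a minimal, illustrative sketch only: the model path, device, and input data are placeholders, and the optional step 4 (device configuration and extensions) is omitted.
```python
import numpy as np
from openvino.inference_engine import IECore

# 1. Create the Core object; device plugins are managed internally
ie = IECore()

# 2. Read the Intermediate Representation (or an ONNX model)
net = ie.read_network(model="model.xml", weights="model.bin")

# 3. Inspect input and output info; precision and layout can be adjusted here
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# 5. Compile and load the network on a specific device
exec_net = ie.load_network(network=net, device_name="CPU")

# 6./7. Set input data and execute synchronously
input_shape = net.input_info[input_name].input_data.shape
dummy_input = np.zeros(input_shape, dtype=np.float32)
results = exec_net.infer(inputs={input_name: dummy_input})

# 8. Get the output blob
output = results[output_name]
```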
## Video: Inference Engine Concept
[![](https://img.youtube.com/vi/e6R13V8nbak/0.jpg)](https://www.youtube.com/watch?v=e6R13V8nbak)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/e6R13V8nbak" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Further Reading
For more details on the Inference Engine API, refer to the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="100%"
src="https://www.youtube.com/embed/e6R13V8nbak">
</iframe>
* - **Inference Engine Concept**. Duration: 3:43
@endsphinxdirective

View File

@@ -1,52 +1,106 @@
Using Dynamic Batching {#openvino_docs_IE_DG_DynamicBatching}
======================
# Using Dynamic Batching {#openvino_docs_IE_DG_DynamicBatching}
Dynamic Batching feature allows you to dynamically change batch size for inference calls
within preset batch size limit.
This feature might be useful when batch size is unknown beforehand, and using extra large batch size is
undesired or impossible due to resource limitations.
For example, face detection with person age, gender, or mood recognition is a typical usage scenario.
## Using Dynamic Batching (C++)
@sphinxdirective
.. raw:: html
<div id="switcher-cpp" class="switcher-anchor">C++</div>
@endsphinxdirective
The Dynamic Batching feature allows you to dynamically change batch size for inference calls
within a preset batch size limit. This feature might be useful when batch size is unknown beforehand and using an extra-large batch size is undesirable or impossible due to resource limitations. For example, when applying face detection and then mood labeling to a video, you won't know in advance how many frames will contain a face when you pass inferencing results to a secondary model.
## Usage
You can activate Dynamic Batching by setting <code>KEY_DYN_BATCH_ENABLED</code> flag to <code>YES</code> in a configuration map that is
You can activate Dynamic Batching by setting `KEY_DYN_BATCH_ENABLED` flag to `YES` in a configuration map that is
passed to the plugin while loading a network.
This configuration creates an <code>ExecutableNetwork</code> object that will allow setting batch size
dynamically in all of its infer requests using <code>SetBatch()</code> method.
The batch size that was set in passed <code>CNNNetwork</code> object will be used as a maximum batch size limit.
This configuration creates an `ExecutableNetwork` object that will allow setting batch size
dynamically in all of its infer requests using `SetBatch()` method.
The batch size that was set in the passed `CNNNetwork` object will be used as a maximum batch size limit.
Here is a code example:
@snippet snippets/DynamicBatching.cpp part0
## Limitations
### Limitations
Currently, certain limitations for using Dynamic Batching exist:
Currently, there are certain limitations for the use of Dynamic Batching:
* Use Dynamic Batching with CPU and GPU plugins only.
* Use Dynamic Batching on topologies that consist of certain layers only:
* Convolution
* Deconvolution
* Activation
* LRN
* Pooling
* FullyConnected
* SoftMax
* Split
* Concatenation
* Power
* Eltwise
* Crop
* BatchNormalization
* Copy
* Convolution
* Deconvolution
* Activation
* LRN
* Pooling
* FullyConnected
* SoftMax
* Split
* Concatenation
* Power
* Eltwise
* Crop
* BatchNormalization
* Copy
Do not use layers that might arbitrary change tensor shape (such as Flatten, Permute, Reshape),
layers specific to object detection topologies (ROIPooling, ProirBox, DetectionOutput), and
custom layers.
Topology analysis is performed during the process of loading a network into plugin, and if topology is
not applicable, an exception is generated.
The following types of layers are not supported:
* Layers that might arbitrarily change tensor shape (such as Flatten, Permute, Reshape)
* Layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput)
* Custom layers
Topology analysis is performed during the process of loading a network into plugin, and if the topology is not supported, an exception is generated.
## Using Dynamic Batching (Python)
@sphinxdirective
.. raw:: html
<div id="switcher-python" class="switcher-anchor">Python</div>
@endsphinxdirective
Dynamic Batching is a feature that allows you to dynamically change batch size for inference calls within a preset batch size limit. This feature might be useful when batch size is unknown beforehand, and using an extra-large batch size is undesirable or impossible due to resource limitations. For example, face detection with person age, gender, or mood recognition is a typical usage scenario.
You can activate Dynamic Batching by setting the "DYN_BATCH_ENABLED" flag to "YES" in a configuration map that is passed to the plugin while loading a network. This configuration creates an `ExecutableNetwork` object that will allow setting batch size dynamically in all of its infer requests using the [ie_api.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) method. The batch size that was set in the passed CNNNetwork object will be used as a maximum batch size limit.
```python
from openvino.inference_engine import IECore
ie = IECore()
dyn_config = {"DYN_BATCH_ENABLED": "YES"}
ie.set_config(config=dyn_config, device_name=device)
# Read a network in IR or ONNX format
net = ie.read_network(path_to_model)
net.batch_size = 32 # set the maximum batch size to 32
exec_net = ie.load_network(network=net, device_name=device)
```
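Building on the snippet above, a hedged sketch of shrinking the batch for an individual request at runtime (assuming the `InferRequest.set_batch()` method is available, and reusing `net` and `exec_net` from the previous example) could look like:
```python
import numpy as np

# Use the first infer request created for the loaded network
request = exec_net.requests[0]

# Only the first 4 items of this call are valid, so shrink the effective batch
request.set_batch(4)

input_name = next(iter(net.input_info))
data = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)

# Only the first 4 elements of the batch dimension are processed
request.infer({input_name: data})
```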
### Limitations
Currently, certain limitations for the use of Dynamic Batching exist:
* Use Dynamic Batching with CPU and GPU plugins only.
* Use Dynamic Batching on topologies that consist of certain layers only:
* Convolution
* Deconvolution
* Activation
* LRN
* Pooling
* FullyConnected
* SoftMax
* Split
* Concatenation
* Power
* Eltwise
* Crop
* BatchNormalization
* Copy
The following types of layers are not supported:
* Layers that might arbitrarily change tensor shape (such as Flatten, Permute, Reshape)
* Layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput)
* Custom layers
Topology analysis is performed during the process of loading a network into plugin, and if the topology is not supported, an exception is generated.

View File

@@ -1,7 +1,9 @@
# Custom nGraph Operation {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}
# Custom nGraph Operations {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}
Inference Engine Extension API allows you to register operation sets (opsets) with custom nGraph operations to support models with operations which OpenVINO™ does not support out-of-the-box.
Besides creating custom nGraph operations, to [support custom operations](../../HOWTO/Custom_Layers_Guide.md) in your model you must also create a Model Optimizer extension for the custom operations and an Inference Engine device plugin extension for the device you will use for inference.
## Operation Class
To add your custom nGraph operation, create a new class that extends `ngraph::Op`, which is in turn derived from `ngraph::Node`, the base class for all graph operations in nGraph. Follow the steps below to add a custom nGraph operation:
@@ -26,8 +28,8 @@ Based on that, declaration of an operation class can look as follows:
The provided implementation has several fields:
* `add` of type `int64_t` is an attribute of a custom operation.
* `type_info` of type `ngraph::NodeTypeInfo` defines the type and version of an operation.
* `add` of type `int64_t` is an attribute of a custom operation
* `type_info` of type `ngraph::NodeTypeInfo` defines type and version of an operation
### Operation Constructors
@@ -67,14 +69,13 @@ To add custom operations to the [Extension](Extension.md) class, create an opera
@snippet template_extension/extension.cpp extension:getOpSets
This method returns a map of opsets that exist in the extension library.
nGraph provides an opset mechanism to group operations into clusters. S. Different opsets distinguish between different versions of one operation.
This method returns a map of opsets that exist in the [extension library](Extension.md).
nGraph provides an opset mechanism to group operations into clusters. Different opsets distinguish between different versions of one operation.
When specifying opset names, follow the rules below:
* Use unique opset names.
* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opset2`, `opset3`, ... , `opsetN`.
* Make sure that the Model Optimizer and your extension use the same opset names.
* [Make sure that the Model Optimizer](../../HOWTO/Custom_Layers_Guide.md) and your extension use the same opset names.
* IR v10 operations have the mandatory `version` attribute specifying the opset.
Operations from the default opset cannot be redefined.

View File

@@ -2,13 +2,13 @@
Inference Engine build infrastructure provides the Inference Engine Package for application development.
To build an extension library, use the following CMake script:
To configure the build of your extension library, use the following CMake script:
@snippet template_extension/CMakeLists.txt cmake:extension
This CMake script finds the Inference Engine and nGraph using the `find_package` CMake command.
To build an extension library, run the commands below:
To build the extension library, run the commands below:
```sh
$ cd template_extension

View File

@@ -1,6 +1,8 @@
# How to Implement Custom CPU Operations {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}
# CPU Kernel Custom Operations {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for the Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the CPU device.
The primary means of performance for the CPU codepath in the Inference Engine is the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and new CPU kernels extend the Inference Engine plugin for Intel MKL-DNN. Implementing the InferenceEngine::ILayerExecImpl interface defines a general CPU-side extension; there are no Intel MKL-DNN specifics in the way you need to implement a kernel.
## Implementation Class
@@ -20,31 +22,32 @@ The provided implementation has several fields:
### Constructor of Implementation
An implementation constructor checks parameters of an nGraph operation, stores required attributes, and stores an error message in case of an error.
@snippet template_extension/cpu_kernel.cpp cpu_implementation:ctor
### `getSupportedConfigurations`
The InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify formats of data, use InferenceEngine::TensorDesc. Refer to the [Memory Primitives](../Memory_primitives.md) section for instructions.
@snippet template_extension/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
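For reference, an InferenceEngine::TensorDesc describing, for example, a planar FP32 tensor in NCHW layout can be constructed as follows (the shape is illustrative):

```cpp
// Example only: a 1x3x224x224 FP32 tensor in NCHW layout.
InferenceEngine::TensorDesc desc(InferenceEngine::Precision::FP32,
                                 {1, 3, 224, 224},
                                 InferenceEngine::Layout::NCHW);
```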
### `init`
The InferenceEngine::ILayerExecImpl::init method gets a runtime-selected configuration from a vector that is populated from the `getSupportedConfigurations` method and checks the parameters:
@snippet template_extension/cpu_kernel.cpp cpu_implementation:init
### `execute`
The InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:
@snippet template_extension/cpu_kernel.cpp cpu_implementation:execute
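Outside the template code, a bare-bones `execute` for a single-input, single-output element-wise kernel might be sketched as follows (error handling and layout/offset handling are omitted, and the class name `OpImplementation` is illustrative):

```cpp
// Illustrative sketch; not the template_extension implementation.
InferenceEngine::StatusCode OpImplementation::execute(
        std::vector<InferenceEngine::Blob::Ptr>& inputs,
        std::vector<InferenceEngine::Blob::Ptr>& outputs,
        InferenceEngine::ResponseDesc* resp) noexcept {
    const float* src = inputs[0]->cbuffer().as<const float*>();
    float* dst = outputs[0]->buffer().as<float*>();
    for (size_t i = 0; i < inputs[0]->size(); ++i) {
        dst[i] = src[i];  // placeholder for the actual per-element computation
    }
    return InferenceEngine::OK;
}
```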
## Register Implementation in `Extension` Class
To register a custom kernel implementation in the [Extension](Extension.md) class, implement the following methods:
* <a href="#getImpTypes">getImplTypes</a>
* <a href="#getImplementation">getImplementation</a>
@@ -66,4 +69,3 @@ InferenceEngine::IExtension::getImplementation returns the kernel implementation
Use the `AddExtension` method of the general plugin interface to load your primitives:
@snippet snippets/CPU_Kernel.cpp part0


@@ -1,7 +1,7 @@
# Custom ONNX* Operators {#openvino_docs_IE_DG_Extensibility_DG_Custom_ONNX_Ops}
The ONNX\* importer provides a mechanism to register custom ONNX operators based on predefined or custom nGraph operations.
The function responsible for registering a new operator is called `ngraph::onnx_import::register_operator` and is defined in [`onnx_import/onnx_utils.hpp`](https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/onnx__utils_8hpp_source.html).
## Register Custom ONNX Operator Based on Predefined nGraph Operations
@@ -14,18 +14,22 @@ x < 0 => f(x) = x * beta
where `alpha` and `beta` are float constants.
1. Include headers:
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:headers
2. Register the CustomRelu operator in the ONNX importer:
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:register_operator
The `register_operator` function takes four arguments: the operator type (`op_type`), opset version, domain, and a function object.
The function object is a user-defined function that takes an `ngraph::onnx_import::Node` as input and, based on it, returns a graph with nGraph operations.
The `ngraph::onnx_import::Node` class represents a node in an ONNX model. It provides functions to fetch input node(s) using `get_ng_inputs`, attribute value using `get_attribute_value`, and many more. See [`onnx_import/core/node.hpp`](https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/core_2include_2ngraph_2node_8hpp_source.html) for full class declaration.
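As a sketch of what such a registration can look like (the domain `"custom.op"`, the attribute names, and the use of `opset5` operations are assumptions for illustration; the actual code is in the snippet above):

```cpp
// Illustrative sketch of registering CustomRelu; domain and attribute names are assumptions.
ngraph::onnx_import::register_operator(
    "CustomRelu", 1, "custom.op",
    [](const ngraph::onnx_import::Node& node) -> ngraph::OutputVector {
        ngraph::OutputVector inputs = node.get_ng_inputs();
        auto x = inputs.at(0);
        const float alpha = node.get_attribute_value<float>("alpha", 1.0f);
        const float beta  = node.get_attribute_value<float>("beta", 1.0f);
        auto alpha_c = ngraph::opset5::Constant::create(ngraph::element::f32, {}, {alpha});
        auto beta_c  = ngraph::opset5::Constant::create(ngraph::element::f32, {}, {beta});
        auto zero    = ngraph::opset5::Constant::create(ngraph::element::f32, {}, {0.0f});
        // x >= 0 -> x * alpha; x < 0 -> x * beta
        auto pos = std::make_shared<ngraph::opset5::Maximum>(x, zero);
        auto neg = std::make_shared<ngraph::opset5::Minimum>(x, zero);
        auto out = std::make_shared<ngraph::opset5::Add>(
            std::make_shared<ngraph::opset5::Multiply>(pos, alpha_c),
            std::make_shared<ngraph::opset5::Multiply>(neg, beta_c));
        return {out};
    });
```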
New operator registration must happen before an ONNX model is read. For example, if a model uses the `CustomRelu` operator, call `register_operator("CustomRelu", ...)` before InferenceEngine::Core::ReadNetwork.
Reregistering ONNX operators within the same process is supported. If you register an existing operator, you get a warning.
The example below demonstrates a model that requires the previously created `CustomRelu` operator:
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model
@@ -33,27 +37,30 @@ To create a graph with nGraph operations, visit [Custom nGraph Operations](Addin
For a complete list of predefined nGraph operators, visit [Available Operations Sets](../../ops/opset.md).
If you do not need an operator anymore, unregister it by calling `unregister_operator`. The function takes three arguments: `op_type`, `version`, and `domain`.
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:unregister_operator
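For example, assuming the `CustomRelu` operator was registered with version `1` in the hypothetical `"custom.op"` domain:

```cpp
// Remove the previously registered operator: (op_type, version, domain).
ngraph::onnx_import::unregister_operator("CustomRelu", 1, "custom.op");
```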
## Register Custom ONNX Operator Based on Custom nGraph Operations
The same principles apply when registering a custom ONNX operator based on custom nGraph operations.
This example shows how to register a custom ONNX operator based on `Operation` presented in [this tutorial](AddingNGraphOps.md), which is used in [TemplateExtension](Extension.md):
@snippet template_extension/extension.cpp extension:ctor
Here, the `register_operator` function is called in the constructor of Extension. The constructor makes sure that the function is called before InferenceEngine::Core::ReadNetwork, because InferenceEngine::Core::AddExtension must be called before a model with a custom operator is read.
The example below demonstrates how to unregister an operator from the destructor of Extension:
@snippet template_extension/extension.cpp extension:dtor
> **REQUIRED**: It is mandatory to unregister a custom ONNX operator if it is defined in a dynamic shared library.
## Requirements for Building with CMake
A program that uses the `register_operator` functionality requires the `ngraph::ngraph` and `ngraph::onnx_ngraph_frontend` libraries in addition to the Inference Engine.
The `onnx_ngraph_frontend` is a component of the `ngraph` package, so `find_package(ngraph REQUIRED COMPONENTS onnx_ngraph_frontend)` can find both.
Those libraries need to be passed to the `target_link_libraries` command in the CMakeLists.txt file.
See CMakeLists.txt below for reference:
@snippet onnx_custom_op/CMakeLists.txt cmake:onnx_custom_op


@@ -1,15 +1,17 @@
# How to Implement Custom GPU Operations {#openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL\*. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels in the global automatically-loaded `cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file, which is hosted in the `<INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/{Debug/Release}` folder
* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@snippet snippets/GPU_Kernel.cpp part0
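A minimal sketch of that call, assuming an illustrative configuration file path and model path, is shown below:

```cpp
// Illustrative only: register a custom kernel configuration file with the GPU plugin
// before reading and loading the network.
InferenceEngine::Core core;
core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE,
                 "<path_to_config>/custom_kernels.xml"}}, "GPU");
auto network = core.ReadNetwork("<path_to_model>/model.xml");
auto exec_network = core.LoadNetwork(network, "GPU");
```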
All Inference Engine samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU -c <path_to_config_file>
```
@@ -132,8 +134,8 @@ queuing an OpenCL program for execution.
## Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
    <!-- Kernel source, buffer bindings, compiler options, and work sizes are defined here. -->
</CustomLayer>
```
@@ -208,12 +210,12 @@ __kernel void example_relu_kernel(
```cl
}
```
> **NOTE**: As described in the previous section, all items like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> the Inference Engine for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
> **NOTE**: Several GPU-targeted kernels are also added to the binaries upon compilation of samples
> so that the sample application can easily load them.
> Refer to the `cldnn_global_custom_kernels` folder in the GPU plugin installation directory.
@@ -221,10 +223,11 @@ __kernel void example_relu_kernel(
* **Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, you can use `printf` in your kernels.
However, be careful not to output excessively, which
could generate too much data. The `printf` output is buffered, so
your output can be truncated to fit the buffer. Also, because of
buffering, you actually get an entire buffer of output when the
execution ends.<br>
For more information, refer to the [printf
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).


@@ -1,27 +1,39 @@
# Inference Engine Extensibility Mechanism {#openvino_docs_IE_DG_Extensibility_DG_Intro}
The Inference Engine Extensibility API enables you to add support for custom operations to the Inference Engine.
An extension should contain operation sets with custom operations and execution kernels for those operations.
Physically, an extension library can be represented as a dynamic library exporting a single `CreateExtension` function
that creates a new extension instance.
@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps
   openvino_docs_IE_DG_Extensibility_DG_Custom_ONNX_Ops
   CPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel>
   GPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel>
   VPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel>
   openvino_docs_IE_DG_Extensibility_DG_Extension
   openvino_docs_IE_DG_Extensibility_DG_Building

@endsphinxdirective
If your model contains operations not normally supported by OpenVINO, the Inference Engine Extensibility API lets you add support for those custom operations in a library containing custom nGraph operation sets, corresponding extensions to the Model Optimizer, and a device plugin extension. See the overview in the [Custom Operations Guide](../../HOWTO/Custom_Layers_Guide.md) to learn how these work together.
To load the Extensibility library to the `InferenceEngine::Core` object, use the `InferenceEngine::Core::AddExtension` method.
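For illustration, assuming the extension was built as `libtemplate_extension.so`, loading it might look like this:

```cpp
// Illustrative sketch: load the extension library before reading a model that uses it.
InferenceEngine::Core core;
core.AddExtension(std::make_shared<InferenceEngine::Extension>("<path_to>/libtemplate_extension.so"));
auto network = core.ReadNetwork("<path_to_model>/model_with_custom_op.xml");
```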
## Inference Engine Extension Library
An Inference Engine Extension dynamic library contains the following components:
* [Extension Library](Extension.md):
  - Contains custom operation sets.
  - Provides CPU implementations for custom operations.
* [Custom nGraph Operation](AddingNGraphOps.md):
  - Enables the use of `InferenceEngine::Core::ReadNetwork` to read Intermediate Representation (IR) with unsupported operations.
  - Enables the creation of `ngraph::Function` with unsupported operations.
  - Provides a shape inference mechanism for custom operations.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension), which demonstrates extension development details. You can review the complete code, which is fully compilable and up-to-date, to see how it works.
## Execution Kernels
