Compare commits

...

24 Commits

Author SHA1 Message Date
Ilya Lavrenov
a6d0a3c71b Added version generation as in CI (#12521) (#12831)
* Added version generation as in CI (#12521)

* Allow CI_BUILD_NUMBER to define only build number
2022-08-31 10:52:14 +04:00
Xiake Sun
d8fc645466 port adding patchelf dependency to 2022.1.1 release (#12624)
* port adding patchelf dependency to 2022.1.1 release

* Remove unused gflags and zlib dependency
2022-08-30 21:16:30 +04:00
Anastasia Popova
7eb0dfd08c Updated date for update message. (#12405)
* Updated date for update message.

* Year update.
2022-08-03 10:00:30 +00:00
Tomasz Dołbniak
39aba80957 OpenCV build switched off by default [2022/1.1] (#12358)
* OCV build off by default

* OCV on in the Linux Azure pipeline

* Copy of OCV in the linux pipeline
2022-08-02 20:01:26 +02:00
Evgenya Stepyreva
c62251c89a Auto Batch: if disabled during cmake (#12382) 2022-08-02 11:26:51 +04:00
Tomasz Dołbniak
e1865fd8e0 Update of zlib to 1.2.12 (#12357) 2022-08-01 10:10:41 +04:00
Ekaterina Aidova
fe4cfc1b43 [OMZ]: include fix for onnx version to release (#12360) 2022-07-30 09:27:52 +00:00
Alina Kladieva
f45fb8f7c8 Update patch version for 22.1.1 (#12347) 2022-07-28 20:21:30 +00:00
Tomasz Dołbniak
7a6df77198 Dependencies update (#12340) 2022-07-28 15:45:00 +02:00
Ilya Churaev
d1b48740cd Tbb fixes (#12321)
* Property to force terminate tbb threads

When inference is done, TBB threads are not closed on their own, which causes memory leaks and lingering threads at unload.
Sometimes the TBB threads need to be terminated to free resources (memory, threads)

This PR contains:
1. Add a new property to control whether TBB threads are forcibly terminated.
2. The property key is "FORCE_TBB_TERMINATE"; the default value is false.
3. Explicitly terminate the TBB task scheduler during unload of the OpenVINO DLL if this property is set to true,
    e.g.: core.set_property(device, ov::force_tbb_terminate(true));
4. If FORCE_TBB_TERMINATE is not set, no additional TBB operations are performed.
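
As an illustration only, here is a minimal C++ sketch of opting into this behavior (device name and model path are hypothetical; assumes the property landed as ov::force_tbb_terminate, matching the example above):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // Ask OpenVINO to explicitly terminate the TBB task scheduler when
        // the library is unloaded; the default (false) leaves TBB alone.
        core.set_property("CPU", ov::force_tbb_terminate(true));

        auto model = core.read_model("model.xml");  // hypothetical model path
        auto compiled = core.compile_model(model, "CPU");
        // ... run inference as usual ...
        return 0;  // TBB worker threads are torn down at library unload
    }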

Change-Id: I32dc0ba122bb19a9dbf3ba12fdd596aad9ac54b4

* Fix executorManager test case

The executorManager was changed from static to dynamic; the test case must follow this change.

* Fix race condition between executor and executorManager

* Add test case for tbb property

1. Add a basic test case for the ov::force_tbb_terminate property
2. Set ov::force_tbb_terminate to false

* Avoid terminating TBB when no TBB thread has been created

* change tbb blocking_terminate to terminate

Calling TBB blocking_terminate causes segmentation faults when running some particular models.
The likely reason is that blocking_terminate blocks the current thread to wait for TBB to exit,
but cannot handle some resource dependencies.
After adopting terminate(), the dependencies are resolved and the segmentation faults are gone.

Change-Id: I0b920630a25cd3fd2747c57ec71ca749ba35573b

* Disable dynamic lib test case in static library compilation version

As described in CVS-68982, we should disable the test cases that load a
dynamic library in the OpenVINO static-library build.

* Address reviewer's comments

* Fix coverity issue in executorManager

1. fix coverity issue
2. avoid a oneTBB build error caused by API differences from TBB

Change-Id: I0339446e33186e0ce57de07aa8492186f2f6e369

* oneTBB: support terminating TBB threads

Change-Id: Iea618b72db193bd48bfbf0dba3586dcdb139c43f

* Add FORCE_TBB_TERMINATE to legacy API

* Put this config into the proper place

* fix issue in property test

* Add some description for this config

* Xiaoxia/onetbb old version (#12303)

* support older oneTBB versions

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

Co-authored-by: River,Li <river.li@intel.com>
Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-27 14:17:36 +00:00
Ilya Churaev
9287ae5d93 Port cc fix (#12296)
* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue so that it works

* Fix matcher name missing issue

* Fixed build

Co-authored-by: River,Li <river.li@intel.com>
2022-07-27 07:22:17 +00:00
Ilya Lavrenov
b2200941ba Ported multiple fixes for 2022.1.1 release (#12249)
* Update for get started samples (#10975) (#11020)

* Update for get started samples

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* formatting

* rewording

* fix links

* fix formatting

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* replace squeezenet1.1 with googlenet-v1

* GoogleNet v1 Caffe* model

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
(cherry picked from commit 412f2190d1)

* [DOCS] update HETERO execution (#11003)

the PR has been reviewed and accepted for master already, now updating 22.1

* Incremental improvement of MO user guide. (#11010) (#11028)

* Incremental improvement of MO user guide.

* Apply feedback

* POT documentation updates (#10578) (#11024)

* POT changes

* change install

* change img size

* remove cli option

* Documentation fixes (#11044)

* Benchmark app usage

* Fixed link to the devices

* More fixes

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed several hardcoded links

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed obsolete code snippets (#11061)

* Removed obsolete code snippets

* NCC style

* Fixed NCC for BA

* fix a reference link (#11048)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* AUTO and MULTI Doc update for release 2022.1 (#11066)

* Update Auto plugin docs (#10623)

* Update Auto plugin docs

Revise auto plugin and auto plugin debugging articles. Include necessary image files.

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update AutoPlugin_Debugging.md

* include review corrections

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* [AUTOPLUGIN] update multi plugin document for ov2.0 (#10688)

* update multi document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update snippets ov::enableProfile

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use Anymap in snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix format and set property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix test document issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* removed NEW IE-CENTRIC API and updated set_property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update ov::optimal_number_of_infer_requests

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* Updated multi code snippets (#11037)

* [Auto PLUGIN] update Auto docs (#10889)

* update Auto docs

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove vpu, fix a mistake in python code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update MYRIAD device full name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update API name

the old API uses the name Inference Engine API
the new API uses the name OpenVINO Runtime API 2.0

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update tab name, and code format

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix AUTO4 format issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update set_property code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* mv code into .cpp and .py

modify the devicelist part according to the review

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove priority list in code and document

modify the beginning of the document
remove performance data
remove old API
use compile_model instead of set_property
add an image about CPU acceleration

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix a misprint and code that did not match the document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix doc build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update sh scripts with ```sh```

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

* [CPU] CPU plugin docs refactoring backport to the release branch (#11039)

* CPU device documentation refresh

* Bfloat16 inference page aligned with the new API

* Bfloat16 inference section moved to CPU main

* First review comments applied

* Second review step comments applied

* OneDNN reference changed to the GitHub page

* AvgPool added to the oneDNN ops list

* Updated note about latency, added note about mem usage with dynamic shapes

* DOCS: API Reference (#11063)

* Renamed API reference

* Try to fix API reference for new API

* Fixes after self-review

* Reworked OpenVINO Plugin dev guide structure

* Properties

* Try to fix links

* Mark properties for MYRIAD & HDDL

* Extensibility guide with FE extensions and remove OV_FRAMEWORK_MAP from docs

* Rework of Extensibility Intro, adopted examples to missing OPENVINO_FRAMEWORK_MAP

* Removed OPENVINO_FRAMEWORK_MAP reference

* Frontend extension detailed documentation

* Fixed distributed snippets

* Fixed snippet inclusion in FE extension document and chapter headers

* Fixed wrong name in a snippet reference

* Fixed test for template extension due to changed number of loaded extensions

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Minor fixes in extension snippets

* Small grammar fix

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Update Benchmark guides (#11076)

* - Update Benchmark Tool usage message

- Remove paths that no longer exist
- Fix examples

* remove reference on FPGA

* Added groups for core headers (#11068)

* DOCS: transition banner (#10973)

* transition banner

* minor fix

* update transition banner

* updates

* update custom.js

* updates

* updates

* Add a troubleshooting issue for PRC installation (#11074)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

* add a troubleshooting issue

* update

* update

* fix CVS-71846

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* DOC Removed indentation before snippets (#11111)

* Removed indentation

* Fixed code style

* Added more information about tensor names (#11070)

* Added more information about tensor names

* Fixed comment and added documentation for extensions

* Fixed code style

* Fixed typo

* Added group for transformation passes (#11101)

* Added group for transformation passes

* Try to fix CI

* Docs: update AC info in API 2.0 migration guide (#11106)

* Docs: update AC info in API 2.0 migration guide

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update headings and some wordings for Transition Guide (#11065)

* updates

* update

* merge from releases/22/1

* update heading

* update headings and some wordings

* Feature/azaytsev/cherry pick pr11110 (#11115)

* Minor fixes

* Feature/azaytsev/img updates (#11110)

* Updated images

* Updated images

* DOCS: doxy sphinxtabs (#11027)

* initial implementation of doxy sphinxtabs

* fixes

* fixes

* fixes

* fixes

* fixes

* Reshape documentation (#10901) (#11108)

* Reshape documentation

* Converting Model: reshape mentioned; Supported Devices: no mention of shape inference

* demos removed

* Added deployment guide (#11060)

* Added deployment guide

* Added local distribution

* Updates

* Fixed more indentations

* update edit on github branches (#11129)

* DOCS: fixed hardcoded links  (#11100)

* Fixes

* Use links

* Updated documentation for compile_tool (#11049)

* Benchmarks 2022 1 (#11130)

* Minor fixes

* Updates for 2022.1

* Edits according to the review

* Edits according to review comments

* Edits according to review comments

* Edits according to review comments

* Fixed table

* Edits according to review comments

* Removed config for Intel® Core™ i7-11850HE

* Removed forward-tacotron-duration-prediction-241 graph

* Added resnet-18-pytorch

* [80085] New images for docs (#11114)

* change doc structure

* fix manager tools

* fix manager tools 3 step

* fix manager tools 3 step

* new img

* new img for OV Runtime

* fix steps

* steps

* fix intendents

* change list

* fix space

* fix space

* code snippets fix

* change display

* fix screenshot (#11140)

* applying reviewers comments to the Opt Guide (#11093)

* applying reviewers' comments

* fixed refs, more structuring (bold, bullets, etc)

* refactoring tput/latency sections

* next iteration (mostly latency), also brushed the auto-batching and other sections

* updates sync/async images

* common opts brushed

* WIP tput redesigned

* minor brushing of common and auto-batching

* Tput fully refactored

* fixed doc name in the link

* moved int8 perf counters to the right section

* fixed links

* fixed broken quotes

* fixed more links

* add ref to the internals to the TOC

* Added a note on the batch size

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Add info about Docker images in Deployment guide (#11136)

* [DOCS]transition_guide_intro_language (#11134) (#11142)

a few language suggestions and grammar issues
# Conflicts:
#	docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* [DOCS]autodevice_table_fix (#11141)

* Update release version in readme (#11146)

* [AUTO] Fix mess table in doc (#11149)

* update AUTO Debug doc with snippets (#11153)

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* Update ShapeInference.md (#11168)

* Benchmarks 2022 1 updates (#11180)

* Updated graphs

* Quick fix for TODO in Dynamic Shapes article

* Anchor link fixes

* [Docs][IE Samples] fix hard links (#11144) (#11186)

* fix hard links

* change encoding

* fix TM

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

* More conservative recommendations on dynamic shapes usage in docs (#11161)

* More conservative recommendations about using dynamic shapes

* Duplicated statement from C++ part to Python part of reshape doc (no semantical changes)

* Added software tab for Linux installer (#11159)

* Added software tab for Linux installer

* Added information for apt and yum

* Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* [docs] python snippets for devices (#11174)

* Update CPU docs

* update GPU docs

* update with sphinxtab

* Fix docs

* Add preprocessig snippet

* Fix path

* Fixed DM config (#11199)

* Renamed user guides (#11137)

* [Python API] Fix documentation for Core API -- release (#11200)

* [Python API] Fix documentation for Core API

* fix style

* [OMZ]: port bugfix to 2022/1 branch (#11204)

* a bunch of doc fixes (#11230)

* Missing backslashes right after mo (#11252)

* Revert vpu custom kernel (#11226)

* Added original VPU custom kernel doc

* Moved to new API

* Added links from introduction

* Fixed intro

* DOCS-InstallGuide_review (#11217)

language adjustment

* Docs labels adjustment (#11227)

* Adjusted documentation labels

* Renamed images

* fix doc tests

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

* cvs-80083 (#11280)

* fix wildcard sphinxdirective (#11263)

* [docs] python snippets for migration pages (#11224)

* save work

* Add common snipp

* update ie pipeline with python snippets

* ov_common_snippet

* Python snippets for graph construction

* Fix docs

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>

* [Python API][Docs] Fix references for several classes (#11260)

* next iteration after discussion with Yuri (#11197)

* next iteration after discussion with Yuri

* WIP tput

* Basic/Advanced Flow

* brushing/links

* wording, testing the failing link

* refactored levels, added hash

* added advanced tput to the TOC (required by sphinx)

* changed wording of the title to be more pro-active

* minor misprint, etc

* emphasized the flow names

* Update two paragraphs in performance hints docs

(cherry picked from commit 61415fd91f417b70eae595cc15976dec7af0865b)

* minor brushing

* e2e flow in the app design

* no separate hints doc

* minor brushing

* final, neat-picking brushing

Co-authored-by: Helena <helena.kloosterman@intel.com>

* [docs] add missed old python api snippets (#11233)

* Add missed old api snippets

* Fix names

* Fix markers

* Fix methods call

* Model optimizataion documentation update (#11072)

* Fixed Model Optimization Guide and NNCF docs

* Fixed the link to Optimum

* Updated installation guide

* Changed API description

* Changed quantization documents

* Fixed links in the relevant components

* Fixed API description

* Revised CLI document

* Fixed formatting bugs in the main document

* Fixed formatting bugs in the main document

* Changed the structure. Added Default quantization usage via API

* Fixed E2E CLI example

* Added AccuracyAware usage description

* Revised structure and examples

* Fixed a link to POT intro

* Changed the structure for algorithms

* Fixed links

* Additional fixed of the links

* Revised Ranger documentation

* Some fixes

* Revised Best Practices

* Fixed descriptions

* Fixed section names

* Changed the workflow one more time

* Additional fixes to the model structure

* Fixed AA usage

* Added DefaultQuantization flow image

* Fixed many issues

* Fixed many issues

* Applied many comments

* Additional fixes

* Fixed examples and provided links to them

* Changed DataLoader Example. Fixed FAQ

* Changed the main README for GitHub

* Fixed E2E CLI example

* Fixed links and code of DataLoader

* Fixed build issues

* Fixed more links

* Fixed one more documentation build issue

* Fixed more links

* Fixed code example

* Add multiple data loaders

* Add audio example

* Minor fixes in the code of sample loaders

* Add descriptions of dataloaders. Changed the behaviour of text loader

* Fixed typos

* Added a new item into the FAQ

* Apply wording corrections

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Fixed comments

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* [DOCS]continue_language_review-transitionguide (#11148)

* [DOCS]-continue_language_review-transitionguide

the overview has been merged, the remaining articles are reviewed here

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/graph_construction.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/configure_devices.md

* Configurable OpenCL usage in BA (#11344) (#11363)

* Feature/azaytsev/doc fixes 2022 1 1 (#11388)

* Removed a redundant image

* Fixed ops specifications and other issues

* converted html links to anchor links

* converted html links to anchor links

* Fixed a link

* Fixed a link

* Changed anchor links according to dev review

* [DOCS] polish autodevice article (#11171)

the article has changed a lot and its language suffered in the process. Here are some corrections.

* sphinx google search (#11439)

* sphinx google search

* fixes

* fixes

* fix version tabs

* Fixed operation names (#11447)

* DOCS-transitionguide_name_correction (#11449)

OpenVINO™ 2.0 => OpenVINO™ API 2.0

* Azure CI: Update branch for contrib and testdata repos (#11473)

* review GPU language changes (#11343)

As per ticket #CVS-80053
* int8 link removed

* DOCS-benchmarktool_python_correction (#11479)

add info on tool installation

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* DOCS-cpu_language_review (#11526)

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_TensorFlow.md (#11425)

* [OMZ]: update submodule (#11286)

* Support config option for time_tests suite (#11628)

* Add links to MO installation and ONNX examples (#11617)

These edits help make it easier for a new user to find more information on how to convert ONNX models.

* Docs: Add links to specific examples (#11618)

* Update docs/OV_Runtime_UG/integrate_with_your_application.md
* Add links to specific examples

This edit adds links to more example applications, making it easier for users to discover how to build an OpenVINO application around their specific model.

* Fix failure of pytest in timetest (#11647)

* Update installing-openvino-windows-header.md (#11221) (#11592)

* Update installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-windows-header.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update Yocto documentation for 2022.1 (#11655)

* installing-openvino-yocto.md: fix install instructions (#10785)

Change _ to : as per the new override syntax.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

* installing-openvino-yocto: update for 2022.1

Update the branch to be used for 2022.1 and remove reference to
-staticdev package which isn't generated anymore.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

* DOCS-hetero_alignment_changes (#11643)

Align the HETERO article with the AUTO and MULTI template

* Fix CI on Windows (#11659)

- fix pip requirements in OMZ
- fix cpuFuncTests on AlderLake

* Docs multiplugin page-wide tabs merge (#11461)

* Update multi_device.md

* second round

* third round

11

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

* correct post review

* align the property table

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Fix compilation error in docs snippets (#11675)

* plugin api separate config (#11109)

* Revert "plugin api separate config (#11109)" (#11705)

This reverts commit 3249e61bfb.

* Fix a heading in Auto (#11743)

* fix the heading

* fix headings

* Docs: Add source code links to OpenVINO Samples (#11803)

* Docs: Add links to Samples source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Update docs/OV_Runtime_UG/Samples_Overview.md

* Update samples/c/hello_classification/README.md

* Update samples/c/hello_nv12_input_classification/README.md

* Update samples/cpp/classification_sample_async/README.md

* Update samples/cpp/hello_classification/README.md

* Update samples/cpp/hello_nv12_input_classification/README.md

* Update samples/python/classification_sample_async/README.md

* Update samples/python/hello_classification/README.md

* Update samples/python/hello_query_device/README.md

* Update samples/python/hello_reshape_ssd/README.md

* Update samples/python/speech_sample/README.md

* Update samples/cpp/hello_query_device/README.md

* Update samples/cpp/speech_sample/README.md

* Update samples/cpp/hello_reshape_ssd/README.md

* Update samples/cpp/model_creation_sample/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add links to specific object detection examples (#11820)

* Docs: Add links to object detection examples

* Docs: Add links to specific examples

* Docs: Add links to specific examples

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add that ONNX models are compatible with OpenVINO (#11821)

* Docs: Add that ONNX models are compatible with OpenVINO

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add links to info on benchmark application (#11822)

* Docs: Add link to benchmark_app

* Docs: Add link to benchmark_app

* Docs: Add link to benchmark_app

* DOCS-add supported PdPd models_port (#11804) (#11827)

* fix formatting (#11904)

* DOCS-nncf_rephrasing-port #11997 (#12007)

* Puts page switch parameters in alphabetic order to support S3 (#11960) (#11966)

* Puts page switch parameters in alphabetic order to support S3 (#11960)

Signed-off-by: intelkevinputnam <intelkevinputnam@github.com>

Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>

* DOCS-restore_gsearch_comma (#11980)

Co-authored-by: Kevin Putnam <kevin.putnam@intel.com>
Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>

* Install only proper GNA library files (#11243)

* If CMAKE_BUILD_TYPE is not set - set it to 'Release' by default (#11026)

This behavior is already used by default because ONNX is enabled by default and thirdparty/onnx/onnx/CMakeLists.txt forces CMAKE_BUILD_TYPE to Release if it is not set

It fixes the following issues:
- When the ONNX frontend is disabled, the source is built as Debug, which is very unexpected compared to Release with the ONNX frontend enabled
- When the ONNX frontend is disabled, even libopenvino.so could not be built due to issues in the generated makefiles

It is set to 'Release' (not to 'Debug') to comply with the default behavior when ONNX is enabled (the default option, which works for most users)

* Build with system TBB (#11244)

* Build with system TBB

* Fixes

* Check whether system TBB is available

* Try to fix ONNX Runtime build with system TBB

* Test

* Fixed compilation of threading.cpp

* Fixed unset of cache dirs

* Limit search paths of TBB

* Try to enable pip packages with custom TBB

* Fix for TBB 2021.2

* Install only needed TBB libraries

* Install TBB from system to pip package

* Reverted usage of TBBROOT

* Fixed oneTBB case

* Try to fix Android

* Escape some paths

* Added samples path

* Fixed TBBBind usage for case of system TBB

* Disabled TBBBind usage for oneTBB (#11386)

* Tbb 2018 and older usage (#11411)

* fixed TBB

* Fixed compilation with old TBBs

* Fixed installation for custom provided TBB

* Fixed detection of sample type c / cpp (#11444)

* Tbb: download only if system libraries are not found (#11415)

* Download custom TBB on demand

* Download TBBBind on demand

* Fixed install steps

* Fixes

* Don't use system TBB

* Fixed Windows backslash paths

* Revert "Install only proper GNA library files (#11243)"

This reverts commit 8a1a6e8b1a.

* Limit ONNX version (#11949)

OV does not currently support opset 17, introduced in the onnx 1.12 release.

* setupvars.sh: Removing extra semicolon, which breaks glibc build (#11849)

This extra semicolon produces output like the example below. The extra
'::' is equivalent to adding '.' to LD_LIBRARY_PATH. This
breaks the glibc build, and very often creates weird issues when launching
commands from a different path.

...inference_engine/external/tbb/lib::/opt/intel/openvino_2021/...

We also noticed that :${parameter:+:$parameter} is widely used in
this file. Please review the code and fix as needed.

* Updated setupvars scripts

* Install external / user provided TBB as well

* Remove protobuf requirements in python bindings (#11886)

* Fixes for cases when TBB_DIR env var is set

* Disable loading of v7 reader for new IR versions (#12252)

* Disable loading of v7 reader for new IR versions

* Try to fix CI

* Fixed PDPD frontend

* Fixed error message creation

* Fixed newAPI for case if core was removed (#12207)

* Fixed newAPI for case if core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

* Updated build_samples.sh not to call make command

* Fixes

* Don't use make in build_samples.sh script

* Limit protobuf version

* Fix for Include dirs

* [PyOV] Fix bugbear's B023 (#12040)

* Sync .github/workflows/py_checks.yml with master

* Revert "Sync .github/workflows/py_checks.yml with master"

This reverts commit 9ae2dd9f46.

* Change quotes

* Revert "Sync .github/workflows/py_checks.yml with master"

This reverts commit 9ae2dd9f46.

* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- do not use static FEM in ie network reader
- add main for gtest which can use manifest file to filter tests

* Move library pointers map to manager impl
- add a method to manager impl to create a frontend from a loaded plugin

* Add shutdown function to ov namespace
it cleans up the static resources
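
A minimal C++ sketch of the intended usage (assuming the function is exposed as ov::shutdown(), per the renaming bullet below; model path is hypothetical):

    #include <openvino/openvino.hpp>

    int main() {
        {
            ov::Core core;
            auto model = core.read_model("model.xml");  // hypothetical path
            auto compiled = core.compile_model(model, "CPU");
            // ... inference ...
        }  // Core and compiled model go out of scope here.
        // Release the library's remaining static-duration resources
        // (e.g. loaded frontend plugins) before process exit.
        ov::shutdown();
        return 0;
    }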

* Revert changes related to linking main for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

* Added C bindings

* Remove redundant files

* Fixed code style

* Cpp fix of python segfault, reverted pybind workaround (#10749)

* test fix of segfault

* styles applied

* added keep_alive to pybind

* remove redundant code

* fix json tests

* review remarks

* introduced correct path to dlls in CI

* removing passing path via env variable

* introduced cpp solution

* remove keep alive

* review remarks

* remove explicit removing model

* removed shared_objects from ir frontend

* core test updated

* unified approach to handle extensions by frontends

* added nullptr check

* Revert "added nullptr check"

This reverts commit 666f5e4489.

* Revert "unified approach to handle extensions by frontends"

This reverts commit bf85ac24a6.

* m_extensions declaration in Frontend

* added assert

* Revert "Disable loading of v7 reader for new IR versions (#12252)"

This reverts commit 60ee201d93.

* Removed old headers from OV 2.0 API

* Fixed clang

* [OMZ]: update submodule

* Fixed samples build

* Fixed tests build

* Fixed docs compilation

* Disable ARM plugin build

* Disable MO

* Revert "FIxed clang"

This reverts commit 8ebc86935c.

* Revert "Removed old headers from OV 2.0 API"

This reverts commit 4e64eb22a1.

* Revert "Disable ARM plugin build"

This reverts commit 54f805c28b.

* Removed lib_close tests

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Yuan Hu <yuan2.hu@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Ilya Naumov <ilya.naumov@intel.com>
Co-authored-by: Alexey Suhov <alexey.suhov@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
Co-authored-by: Helena <helena.kloosterman@intel.com>
Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: FanJiangIntel <fan.jiang@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Anuj Mittal <anuj.mittal@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Kevin Putnam <kevin.putnam@intel.com>
Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>
Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: stephenli2000 <stephen@aotu.ai>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: p-wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Raasz, Pawel <pawel.raasz@intel.com>
Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
2022-07-26 22:12:33 +00:00
Alexander Zhogov
9996a58fc6 Azure CI: Update branch for contrib and testdata repos (#11474) 2022-04-05 22:23:36 +03:00
Ilya Churaev
4192d8879d Port visibility hidden for get_type_info methods (#11320)
Co-authored-by: vurusovs <vitaliy.urusovskij@intel.com>
2022-03-31 09:47:29 +03:00
Nikolay Tyukaev
cdb9bec721 DOCS: Increase content width (#10995)
* fixes

* fix
2022-03-17 16:38:08 +03:00
Liubov Talamanova
baf4b23d9a Add configs to pypi pkg (#11008) 2022-03-17 16:02:21 +03:00
Yuan Xu
43fa3183dc Fix issues and integrate comments (#10980)
* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
2022-03-17 15:55:37 +03:00
Artyom Anokhov
63ca94179e Fix Deployment Manager configs for MacOS and Win-HDDL target (#10998)
* DM configs: Updated path for MacOS. Removed MovidiusDriver for HDDL target for Windows

* DM config MacOS: Updated name for libov_runtime
2022-03-17 12:44:52 +03:00
Mikhail Nosov
8723d1cc7e Fix coverity warnings in caching snippets (#11006) 2022-03-17 12:43:29 +03:00
Maxim Shevtsov
cbfb8a1678 Perf Hints docs and General Opt Guide refactoring (#10815)
* Brushed the general optimization page

* Opt GUIDE, WIP

* perf hints doc placeholder

* WIP

* WIP2

* WIP 3

* added streams and a few other details

* fixed titles, misprints etc

* Perf hints

* moving the runtime optimizations intro

* fixed link

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* some details on the FIL and other means when pure inference time is not the only factor

* shuffled according to general->use-case->device-specifics flow, minor brushing

* next iter

* section on optimizing for tput and latency

* couple of links to the features support matrix

* Links, brushing, dedicated subsections for Latency/FIL/Tput

* had to make the link less specific (otherwise docs compilation fails)

* removing the Temp/Should be moved to the Opt Guide

* shuffled the tput/latency/etc. info into separate documents. Also, the following docs moved from the temp section into the specific feature, general product description, or corresponding plugin pages

-   openvino_docs_IE_DG_Model_caching_overview
-   openvino_docs_IE_DG_Int8Inference
-   openvino_docs_IE_DG_Bfloat16Inference
-   openvino_docs_OV_UG_NoDynamicShapes

* fixed toc for ov_dynamic_shapes.md

* referring to openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors

* fixed main product TOC, removed ref from the second-level items

* reviewers remarks

* reverted the openvino_docs_OV_UG_NoDynamicShapes

* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference

* "No dynamic shapes" to the "Dynamic shapes" as TOC

* removed duplication

* minor brushing

* Caching to the next level in TOC

* brushing

* more on the perf counters ( for latency and dynamic cases)

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2022-03-17 11:09:13 +03:00
Yegor Kruglov
1ed828982e [Release] Cascade RCNN res101 document model support (#10904)
* cascade rcnn model support

* fix typo

* specify model directory

* resolved comments
2022-03-16 18:04:46 +03:00
Alexander Zhogov
c670e4cc2b Azure CI: Enable IB again 2022-03-16 15:01:20 +03:00
Nikolay Tyukaev
e124d4f5df add ote repo (#10979) 2022-03-16 14:53:51 +03:00
Mikhail Nosov
09462af266 Docs: model caching page update according to OpenVINO API 2.0 (#10977)
* Docs: model caching page update according to OpenVINO API 2.0

* Fix assert
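
For reference, a minimal C++ sketch of what the updated page covers, i.e. model caching via OpenVINO API 2.0 (cache directory and model path are hypothetical):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // Enable the compiled-model cache; later compile_model() calls for
        // the same model and device load the cached blob instead of
        // recompiling from scratch.
        core.set_property(ov::cache_dir("model_cache"));
        auto compiled = core.compile_model("model.xml", "CPU");
        return 0;
    }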
2022-03-16 12:35:01 +03:00
850 changed files with 15163 additions and 8875 deletions

View File

@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: android_arm64
@@ -109,7 +110,6 @@ jobs:
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_OPENCV=OFF
-DENABLE_TESTS=ON
-DENABLE_SAMPLES=ON
-DENABLE_INTEL_MYRIAD=OFF

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Lin
@@ -155,6 +157,7 @@ jobs:
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_REQUIREMENTS_INSTALL=OFF
-DENABLE_OPENCV=ON
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
@@ -252,6 +255,7 @@ jobs:
export MO_ROOT=$(INSTALL_DIR)/tools/mo
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_DIR)/tests/mo/unit_tests --junitxml=TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
condition: false
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
@@ -336,6 +340,7 @@ jobs:
- script: |
export PATH=$HOME/.local/bin:$PATH
export IE_APP_PATH=$(INSTALL_DIR)/samples_bin
export LD_LIBRARY_PATH=$IE_APP_PATH:$LD_LIBRARY_PATH
export IE_APP_PYTHON_PATH=$(INSTALL_DIR)/samples/python/
export SHARE=$(INSTALL_DIR)/tests/smoke_tests/samples_smoke_tests_data/
export WORKSPACE=$(INSTALL_DIR)

View File

@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: linux_arm64
@@ -34,13 +35,13 @@ jobs:
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_OPENCV: $(WORK_DIR)/build_opencv
BUILD_OPENVINO: $(WORK_DIR)/build
BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
BUILD_OPEN_MODEL_ZOO: $(WORK_DIR)/build_open_model_zoo
INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
INSTALL_OPEN_MODEL_ZOO: $(INSTALL_OPENVINO)/extras/open_model_zoo
WORK_DIR: $(Pipeline.Workspace)/_w
@@ -125,20 +126,19 @@ jobs:
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_OPENCV=OFF
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
-DPYTHON_MODULE_EXTENSION=".so"
-DENABLE_TESTS=ON
-DENABLE_FUNCTIONAL_TESTS=ON
-DENABLE_GAPI_TESTS=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DTHREADING=SEQ -DENABLE_LTO=ON
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
-DPYTHON_MODULE_EXTENSION=".so"
-DENABLE_TESTS=ON
-DENABLE_FUNCTIONAL_TESTS=ON
-DENABLE_GAPI_TESTS=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DTHREADING=SEQ -DENABLE_LTO=ON
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_SAMPLES=ON
-DBUILD_java_api=OFF
@@ -173,19 +173,19 @@ jobs:
cmakeArgs: >
-GNinja
-DInferenceEngineDeveloperPackage_DIR=$(BUILD_OPENVINO)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=$(INSTALL_PYTHON)/bin/python3.8
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=$(INSTALL_PYTHON)/bin/python3.8
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARIES=$(INSTALL_PYTHON)/lib
-DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.8/site-packages/numpy/core/include
-DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.8/site-packages/numpy/core/include
-DPYTHON_MODULE_EXTENSION=".so"
-DPYBIND11_FINDPYTHON=OFF
-DPYBIND11_NOPYTHON=OFF
-DPYTHONLIBS_FOUND=TRUE
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
@@ -211,15 +211,15 @@ jobs:
inputs:
cmakeArgs: >
-GNinja
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/local/bin/python3.8
-DPYTHON_INCLUDE_DIR=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_INCLUDE_DIR=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DOpenVINO_DIR=$(BUILD_OPENVINO)
-DInferenceEngine_DIR=$(BUILD_OPENVINO)
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-Dngraph_DIR=$(BUILD_OPENVINO)
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPEN_MODEL_ZOO)

View File

@@ -4,6 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: Lin

View File

@@ -95,7 +95,6 @@ jobs:
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_INTEL_MYRIAD_COMMON=OFF
-DENABLE_INTEL_GNA=OFF
-DENABLE_OPENCV=OFF
-DENABLE_CPPLINT=OFF
-DENABLE_TESTS=OFF
-DENABLE_INTEL_CPU=ON

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Mac
@@ -143,7 +145,6 @@ jobs:
set -e
mkdir -p $(INSTALL_DIR)/opencv/
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
cp -R $(REPO_DIR)/temp/opencv_4.5.2_osx/opencv/* $(INSTALL_DIR)/opencv/
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'

View File

@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Win
@@ -87,7 +89,6 @@ jobs:
call install_ib_console.bat
workingDirectory: $(WORK_DIR)
displayName: 'Install IncrediBuild'
enabled: false
- checkout: self
clean: true
@@ -143,9 +144,9 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) --build . --config Release
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="$(CMAKE_CMD) --build . --config Release"
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
displayName: 'Build Win - IB'
- script: dir $(REPO_DIR)\bin\ /s
displayName: 'List bin files'
@@ -168,7 +169,7 @@ jobs:
workingDirectory: $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Install Samples Tests'
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'

View File

@@ -59,7 +59,6 @@ RUN cmake .. \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DENABLE_INTEL_MYRIAD_COMMON=OFF \
-DENABLE_INTEL_GNA=OFF \
-DENABLE_OPENCV=OFF \
-DENABLE_CPPLINT=OFF \
-DENABLE_NCC_STYLE=OFF \
-DENABLE_TESTS=OFF \

.gitignore
View File

@@ -1,5 +1,8 @@
# build/artifact dirs
_*
[Bb]uild*/
cmake-build*
# but ensure we don't skip __init__.py and __main__.py
!__init__.py
!__main__.py

View File

@@ -4,7 +4,7 @@
if(DEFINED BUILD_SHARED_LIBS AND NOT BUILD_SHARED_LIBS)
# 'target_link_libraries' does not work correctly when called from
# different directly where 'add_library' is called: CMake generates
# different directory where 'add_library' is called: CMake generates
# incorrect OpenVINOConfig.cmake in this case
cmake_minimum_required(VERSION 3.17)
else()
@@ -13,7 +13,9 @@ endif()
project(OpenVINO DESCRIPTION "OpenVINO toolkit")
set(IE_MAIN_SOURCE_DIR ${OpenVINO_SOURCE_DIR}/inference-engine)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
endif()
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"

View File

@@ -1,5 +1,5 @@
# OpenVINO™ Toolkit
[![Stable release](https://img.shields.io/badge/version-2021.4.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.4.2)
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
@@ -42,9 +42,9 @@ Please report questions, issues and suggestions using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_Runtime_User_Guide.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_README.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino

View File

@@ -23,9 +23,7 @@ message(STATUS "MODELS_PATH=" ${MODELS_PATH})
fetch_models_and_validation_set()
if(COMMAND get_linux_name)
get_linux_name(LINUX_OS_NAME)
endif()
get_linux_name(LINUX_OS_NAME)
if(CMAKE_CROSSCOMPILING AND CMAKE_HOST_SYSTEM_NAME MATCHES Linux AND CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(protoc_version "3.18.2")
@@ -93,7 +91,19 @@ if(THREADING STREQUAL "OMP")
endif()
## TBB package
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
unset(_ov_download_tbb_done CACHE)
#
# The function downloads prebuilt TBB package
# NOTE: the function should be used if system TBB is not found
# or ENABLE_SYSTEM_TBB is OFF
#
function(ov_download_tbb)
if(_ov_download_tbb_done OR NOT THREADING MATCHES "^(TBB|TBB_AUTO)$")
return()
endif()
set(_ov_download_tbb_done ON CACHE BOOL "Whether prebuilt TBB is already downloaded")
reset_deps_cache(TBBROOT TBB_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
@@ -109,16 +119,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f1c9b9e2861efdaa01552bd25312ccbc5feeb45551e5f91ae61e29221c5c1479")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(ANDROID) # Should be before LINUX due LINUX is detected as well
RESOLVE_DEPENDENCY(TBB
ARCHIVE_ANDROID "tbb2020_20200404_android.tgz"
@@ -131,16 +131,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "95b2f3b0b70c7376a0c7de351a355c2c514b42c4966e77e3e34271a599501008")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(LINUX AND AARCH64)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "keembay/tbb2020_38404_kmb_lic.tgz"
@@ -160,18 +150,71 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
update_deps_cache(TBBROOT "${TBB}" "Path to TBB root folder")
if(EXISTS "${TBBROOT}/lib/cmake/TBB/TBBConfig.cmake")
# oneTBB case
update_deps_cache(TBB_DIR "${TBB}/lib/cmake/TBB" "Path to TBB cmake folder")
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib/cmake/tbb/TBBConfig.cmake")
# oneTBB release package version less than 2021.6.0
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/tbb" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib64/cmake/TBB/TBBConfig.cmake")
# 64-bits oneTBB case
update_deps_cache(TBB_DIR "${TBBROOT}/lib64/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/cmake/TBBConfig.cmake")
# custom downloaded or user provided TBB
update_deps_cache(TBB_DIR "${TBBROOT}/cmake" "Path to TBB cmake folder")
else()
update_deps_cache(TBB_DIR "${TBB}/cmake" "Path to TBB cmake folder")
message(WARNING "Failed to find TBBConfig.cmake in ${TBBROOT} tree. Custom TBBConfig.cmake will be used")
endif()
debug_message(STATUS "tbb=" ${TBB})
debug_message(STATUS "tbb_dir=" ${TBB_DIR})
debug_message(STATUS "tbbroot=" ${TBBROOT})
set(TBB "${TBB}" PARENT_SCOPE)
endfunction()
## TBBBind_2_5 package
unset(_ov_download_tbbbind_2_5_done CACHE)
#
# The function downloads static prebuilt TBBBind_2_5 package
# NOTE: the function should be called only we have TBB with version less 2021
#
function(ov_download_tbbbind_2_5)
if(_ov_download_tbbbind_2_5_done OR NOT ENABLE_TBBBIND_2_5)
return()
endif()
set(_ov_download_tbbbind_2_5_done ON CACHE BOOL "Whether prebuilt TBBBind_2_5 is already downloaded")
reset_deps_cache(TBBBIND_2_5_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
set(IE_PATH_TO_DEPS "$ENV{THIRDPARTY_SERVER_PATH}")
elseif(DEFINED THIRDPARTY_SERVER_PATH)
set(IE_PATH_TO_DEPS "${THIRDPARTY_SERVER_PATH}")
endif()
if(WIN32 AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
elseif(ANDROID)
# don't have TBBBIND_2_5
elseif(LINUX AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
update_deps_cache(TBBBIND_2_5_DIR "${TBBBIND_2_5}/cmake" "Path to TBBBIND_2_5 cmake folder")
debug_message(STATUS "tbb=" ${TBB})
if(DEFINED IE_PATH_TO_DEPS)
unset(IE_PATH_TO_DEPS)
endif()
endif()
set(TBBBIND_2_5 "${TBBBIND_2_5}" PARENT_SCOPE)
endfunction()
## OpenCV
if(ENABLE_OPENCV)
@@ -265,8 +308,6 @@ else()
reset_deps_cache(OpenCV_DIR)
endif()
include(${OpenVINO_SOURCE_DIR}/src/cmake/ie_parallel.cmake)
if(ENABLE_INTEL_GNA)
reset_deps_cache(
GNA_EXT_DIR

View File

@@ -23,13 +23,13 @@ else()
unset(IE_OWN_TBB_CONFIG)
endif()
unset(TBB_DIR)
unset(TBB_DIR CACHE)
find_package(TBB
CONFIG
PATHS ${TBBROOT}/cmake
${TBBROOT}/lib/cmake/TBB # oneTBB case
${IEDevScripts_DIR}/${IE_OWN_TBB_CONFIG}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH
)
NO_DEFAULT_PATH)
find_package_handle_standard_args(TBB CONFIG_MODE)

View File

@@ -7,7 +7,7 @@ include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING; CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
@@ -79,6 +79,4 @@ if(ENABLE_AVX512F)
endif()
endif()
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()
set(CMAKE_VERBOSE_MAKEFILE ${VERBOSE_BUILD} CACHE BOOL "" FORCE)

View File

@@ -31,4 +31,8 @@ if (LINUX)
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()

View File

@@ -23,7 +23,7 @@ execute_process(
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
@@ -107,8 +107,11 @@ function(ov_ncc_naming_style)
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it, sources with the same name from different directories would map to the same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
foreach(source IN LISTS sources)
set(output_file "${ncc_style_bin_dir}/${source}.ncc_style")
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
add_custom_command(

View File

@@ -1,5 +1,5 @@
# custom OpenVINO values
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN|OPENVINO_OP)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'

View File

@@ -19,7 +19,7 @@ function (commitHash VAR)
message(FATAL_ERROR "repo_root is not defined")
endif()
execute_process(
COMMAND git rev-parse HEAD
COMMAND git rev-parse --short=11 HEAD
WORKING_DIRECTORY ${repo_root}
OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
@@ -33,6 +33,13 @@ macro(ie_parse_ci_build_number)
set(IE_VERSION_MINOR ${CMAKE_MATCH_2})
set(IE_VERSION_PATCH ${CMAKE_MATCH_3})
set(IE_VERSION_BUILD ${CMAKE_MATCH_4})
set(the_whole_version_is_defined_by_ci ON)
elseif(CI_BUILD_NUMBER MATCHES "^[0-9]+$")
set(IE_VERSION_BUILD ${CI_BUILD_NUMBER})
# only build number is defined by CI
set(the_whole_version_is_defined_by_ci OFF)
elseif(CI_BUILD_NUMBER)
message(FATAL_ERROR "Failed to parse CI_BUILD_NUMBER which is ${CI_BUILD_NUMBER}")
endif()
if(NOT DEFINED repo_root)
@@ -91,22 +98,34 @@ macro(ie_parse_ci_build_number)
endif()
set(IE_VERSION "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${IE_VERSION}")
message(STATUS "OpenVINO version is ${IE_VERSION} (Build ${IE_VERSION_BUILD})")
if(NOT the_whole_version_is_defined_by_ci)
# create CI_BUILD_NUMBER
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
if(NOT GIT_BRANCH STREQUAL "master")
set(GIT_BRANCH_POSTFIX "-${GIT_BRANCH}")
endif()
set(CI_BUILD_NUMBER "${IE_VERSION}-${IE_VERSION_BUILD}-${GIT_COMMIT_HASH}${GIT_BRANCH_POSTFIX}")
unset(GIT_BRANCH_POSTFIX)
unset(GIT_BRANCH)
unset(GIT_COMMIT_HASH)
else()
unset(the_whole_version_is_defined_by_ci)
endif()
endmacro()
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
else()
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
set(custom_build "custom_${GIT_BRANCH}_${GIT_COMMIT_HASH}")
set(CI_BUILD_NUMBER "${custom_build}")
endif()
# provides Inference Engine version
# 1. If CI_BUILD_NUMBER is defined, parses this information
# 2. Otherwise, parses ie_version.hpp
# 2. Otherwise, parses openvino/core/version.hpp
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
endif()
ie_parse_ci_build_number()
macro (addVersionDefines FILE)

View File

@@ -82,7 +82,7 @@ else()
set(ENABLE_TBBBIND_2_5_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ON "ENABLE_TBBBIND_2_5_DEFAULT" OFF)
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for inference engine" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
@@ -126,7 +126,7 @@ ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS
ie_dependent_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON "NOT MINGW" OFF)
ie_option (ENABLE_OPENCV "enables OpenCV" ON)
ie_option (ENABLE_OPENCV "enables OpenCV" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
@@ -136,6 +136,8 @@ ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are link
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" OFF "THREADING MATCHES TBB;LINUX" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)

View File

@@ -12,6 +12,7 @@
# * `Runtime`: OpenVINO C++ and C Core & Inference Runtime, frontend common
# * `ONNX`: OpenVINO ONNX frontend
# * `Paddle`: OpenVINO Paddle frontend
# * `TensorFlow`: OpenVINO TensorFlow frontend
#
# If no components are specified, `Runtime` component is provided:
#
@@ -146,14 +147,34 @@ set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
if(NOT enable_system_tbb)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
if(DEFINED ENV{TBBROOT})
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
file(TO_CMAKE_PATH $ENV{TBBROOT} ENV_TBBROOT)
endif()
set(find_package_tbb_extra_args
CONFIG
PATHS
# oneTBB case exposed via export TBBROOT=<custom TBB root>
"${ENV_TBBROOT}/lib64/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/tbb"
# "$ENV{TBB_DIR}"
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
"${TBBROOT}/cmake"
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(_tbb_dir)
endif()
unset(enable_system_tbb)
_ov_find_dependency(TBB
COMPONENTS tbb tbbmalloc
CONFIG
PATHS ${TBBROOT}/cmake
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
${find_package_tbb_extra_args})
set(install_tbbbind "@install_tbbbind@")
if(install_tbbbind)
@@ -164,6 +185,7 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
endif()
unset(install_tbbbind)
endif()
_ov_find_dependency(Threads)
@@ -175,7 +197,7 @@ if(ENABLE_INTEL_GNA AND NOT ENABLE_INTEL_GNA_SHARED AND NOT libGNA_FOUND)
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS ${CMAKE_CURRENT_LIST_DIR}
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
endif()

View File

@@ -46,6 +46,7 @@ endif()
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check dir.")
set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(OTE_DOCS_DIR "" CACHE PATH "Path to training_extensions documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")
@@ -159,6 +160,15 @@ function(build_docs)
--output_dir=${DOCS_BUILD_DIR}/workbench)
endif()
# ote doc files
if(EXISTS "${OTE_DOCS_DIR}")
get_filename_component(OTE_DOCS_DIR "${OTE_DOCS_DIR}" ABSOLUTE)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OTE_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/ote)
endif()
# ovms doc files
if(EXISTS "${OVMS_DOCS_DIR}")
get_filename_component(OVMS_DOCS_DIR "${OVMS_DOCS_DIR}" ABSOLUTE)

View File

@@ -264,6 +264,10 @@ TAB_SIZE = 4
ALIASES = "ref_ie{1}=@ref InferenceEngine::\1 \"\1\""
ALIASES += sphinxdirective="\n\xmlonly<sphinxdirective>"
ALIASES += endsphinxdirective="</sphinxdirective>\endxmlonly"
ALIASES += sphinxtabset="\n\xmlonly<sphinxtabset></sphinxtabset>\endxmlonly\n"
ALIASES += endsphinxtabset="\n\xmlonly<endsphinxtabset></endsphinxtabset>\endxmlonly\n"
ALIASES += sphinxtab{1}="\n\xmlonly<sphinxtab>\1</sphinxtab>\endxmlonly\n"
ALIASES += endsphinxtab="\n\xmlonly<endsphinxtab></endsphinxtab>\endxmlonly\n"
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For

View File

@@ -1,15 +1,25 @@
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
To enable operations not supported by OpenVINO out of the box, you may need an extension for OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
To enable operations not supported by OpenVINO out of the box, you may need an extension for an OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL\*. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@snippet snippets/gpu/custom_kernels_api.cpp part0
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/gpu/custom_kernels_api.cpp part0
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/gpu/custom_kernels_api.py part0
@endsphinxtab
@endsphinxtabset
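For illustration, a minimal sketch of what the C++ variant amounts to; the device name, file paths, and model are assumptions:
```cpp
#include <openvino/runtime/core.hpp>

int main() {
    ov::Core core;
    // Point the GPU plugin at the custom kernel configuration file
    // before compiling the model that uses the custom operations.
    core.set_property("GPU", {{"CONFIG_FILE", "custom_kernels.xml"}});
    auto model = core.read_model("model.xml");
    auto compiled_model = core.compile_model(model, "GPU");
    return 0;
}
```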
All OpenVINO samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
@@ -21,7 +31,7 @@ $ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validati
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the type `CustomLayer` for every custom operation you provide.
with a node of the `CustomLayer` type for every custom operation you provide.
The definitions described in the sections below use the following notations:
@@ -212,7 +222,7 @@ __kernel void example_relu_kernel(
> **NOTE**: As described in the previous section, all items like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> the OpenVINO for efficiency reasons. See [Debugging
> OpenVINO for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
## Debugging Tips<a name="debugging-tips"></a>

View File

@@ -1,4 +1,4 @@
# OpenVINO Extensibility Mechanism {#openvino_docs_Extensibility_UG_Intro}
# OpenVINO Extensibility Mechanism {#openvino_docs_Extensibility_UG_Intro}
@sphinxdirective
@@ -7,41 +7,68 @@
:hidden:
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
@endsphinxdirective
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations (layers) is different for
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, that is those not included in the list, are not recognized by OpenVINO™ out-of-the-box. Therefore, creating Intermediate Representation (IR) for a model using them requires additional steps. This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to plug in your own implementation for existing or completely new operations.
Custom operations, that is, those not included in the list, are not recognized by OpenVINO™ out-of-the-box. The need for a custom operation may appear in two main cases:
If your model contains operations not normally supported by OpenVINO™, the OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
1. A regular framework operation that is new or rarely used, which is why it hasn't been implemented in OpenVINO yet.
There are two steps to support inference of a model with custom operation(s):
1. Add support for a [custom operation in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) so
the Model Optimizer can generate the IR with the operation.
2. Create a custom operation in it as described in the [Custom Operation](add_openvino_ops.md).
2. A new user operation that was created for some specific model topology by a model author using framework extension capabilities.
## OpenVINO™ Extensions
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
OpenVINO™ provides extensions for:
Defining a new custom operation basically consists of two parts:
* [Custom OpenVINO™ Operation](add_openvino_ops.md):
- Enables the creation of unsupported operations
- Enables the use of `ov::Core::read_model` to read models with unsupported operations
- Provides a shape inference mechanism for custom operations
- Provides an evaluate method that allows you to support the operation on CPU or perform constant folding
* [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md):
- Enables support of new operations to generate IR
- Enables support of custom transformations to replace sub-graphs for performance optimization
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). How to implement execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details. You can review the complete code, which is fully compilable and up-to-date, to see how it works.
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
## Load extensions to OpenVINO™ Runtime
The first part is required for inference; the second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part; the next sections describe them in detail.
## Definition of Operation Semantics
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such a decomposition gives the desired performance, then a low-level operation implementation is not required. When deciding on the feasibility of such a decomposition, refer to the latest OpenVINO operation set. You can use any valid combination of existing operations. How to map a custom operation is described in the next section of this document.
If such a decomposition is not possible, or appears too bulky with many constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the [Custom Operation Guide](add_openvino_ops.md).
Prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above, and only after verifying correctness of inference and the resulting performance, optionally invest in a bare-metal C++ implementation.
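For orientation, a minimal sketch of such an operation class, loosely following the Template extension pattern; all names here are hypothetical, and the full required interface is covered in the [Custom Operation Guide](add_openvino_ops.md):
```cpp
#include <openvino/op/op.hpp>

// Hypothetical custom operation; 'MyCustomOp' is an assumption, not part of the guide.
class MyCustomOp : public ov::op::Op {
public:
    OPENVINO_OP("MyCustomOp");

    MyCustomOp() = default;
    explicit MyCustomOp(const ov::Output<ov::Node>& arg) : Op({arg}) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        // This sketch simply mirrors the input element type and shape.
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override {
        return std::make_shared<MyCustomOp>(new_args.at(0));
    }

    bool visit_attributes(ov::AttributeVisitor& visitor) override {
        return true;  // no attributes in this sketch
    }
};
```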
## Mapping from Framework Operation
Depending on the model format used for import, the mapping of a custom operation is implemented differently; choose one of the following:
1. If the model is represented in the ONNX (including models exported from PyTorch to ONNX) or PaddlePaddle format, use one of the classes from the [Frontend Extension API](frontend_extensions.md). It consists of several classes available in C++, which can be used with the Model Optimizer `--extensions` option or when the model is imported directly into the OpenVINO runtime using the `read_model` method. A Python API is also available for runtime model importing.
2. If the model is represented in the TensorFlow, Caffe, Kaldi, or MXNet format, use [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md). This approach is available for model conversion in Model Optimizer only.
The existence of the two approaches side by side is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi, and MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for the new ONNX or PaddlePaddle frontends and plan to use the Model Optimizer `--extensions` option for model conversion, then the extensions should be:
1. Implemented in C++ only.
2. Compiled as a separate shared library (see details on how to do that later in this guide).
You cannot write new frontend extensions using Python API if you plan to use them with Model Optimizer.
The remaining part of this guide uses the Frontend Extension API, which is applicable to new frontends.
## Registering Extensions
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates the details of extension development using the minimalistic `Identity` operation, a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
To load the extensions to the `ov::Core` object, use the `ov::Core::add_extension` method. This method allows you to load a library with extensions or to load extensions defined in code.
@@ -49,27 +76,50 @@ To load the extensions to the `ov::Core` object, use the `ov::Core::add_extensio
Extensions can be loaded from code with `ov::Core::add_extension` method:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_extensions.cpp add_extension
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_extensions.py add_extension
@endsphinxtab
@endsphinxtabset
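For reference, a minimal sketch of what the C++ snippet amounts to, reusing the hypothetical `MyCustomOp` class sketched earlier; the library path is also an assumption:
```cpp
#include <openvino/runtime/core.hpp>

int main() {
    ov::Core core;
    // Register the custom operation type directly from code
    // (MyCustomOp is the hypothetical class sketched earlier in this guide)
    core.add_extension<MyCustomOp>();
    // Alternatively, load extensions from a prebuilt shared library
    core.add_extension("path/to/libmy_extensions.so");
    return 0;
}
```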
`Identity` is the custom operation class defined in the [Custom Operation Guide](add_openvino_ops.md). This is enough to enable reading an IR that uses the `Identity` extension operation emitted by Model Optimizer. To load the original model directly to the runtime, you also need to add a mapping extension:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: add_extension
:fragment: add_frontend_extension
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: add_extension
:fragment: add_frontend_extension
@endsphinxdirective
When the Python API is used, there is no way to implement a custom OpenVINO operation. Moreover, even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. Use the C++ shared library approach to implement both the operation semantics and the framework mapping in this case.
You can still use Python for operation mapping and decomposition if only operations from the standard OpenVINO operation set are used.
### Create library with extensions
You need to create extension library in following cases:
- Load extensions to Model Optimizer
- Load extensions to Python application
You need to create an extension library in the following cases:
- Converting a model with custom operations in Model Optimizer
- Loading a model with custom operations in a Python application (applicable to both framework models and IR)
- Loading models with custom operations in tools that support loading extensions from a library, for example `benchmark_app`
If you want to create an extension library, for example in order to load these extensions into Model Optimizer, take the following steps:
Create an entry point for the extension library. OpenVINO™ provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows you to define an entry point to a library with OpenVINO™ Extensions.
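A minimal sketch of such an entry point, again assuming the hypothetical `MyCustomOp` class from the earlier sketch:
```cpp
#include <memory>
#include <vector>

#include <openvino/core/extension.hpp>
#include <openvino/core/op_extension.hpp>

// Defines the single entry point the runtime looks for when loading the library.
OPENVINO_CREATE_EXTENSIONS(
    std::vector<ov::Extension::Ptr>({
        std::make_shared<ov::OpExtension<MyCustomOp>>()
    }));
```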
@@ -97,24 +147,25 @@ $ cmake --build .
After the build you can use path to your extension library to load your extensions to OpenVINO™ Runtime:
@sphinxdirective
@sphinxtabset
.. tab:: C++
@sphinxtab{C++}
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: add_extension_lib
@snippet docs/snippets/ov_extensions.cpp add_extension_lib
.. tab:: Python
@endsphinxtab
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: add_extension_lib
@sphinxtab{Python}
@endsphinxdirective
@snippet docs/snippets/ov_extensions.py add_extension_lib
@endsphinxtab
@endsphinxtabset
## See Also
* [OpenVINO Transformations](./ov_transformations.md)
* [Using Inference Engine Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)

View File

@@ -0,0 +1,619 @@
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one of the VPUs, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTES:**
> * OpenCL\* custom layer support is available in preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, carry out the tasks described on this page:
1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
2. Write a configuration file to bind the OpenCL kernel to the topology file (`.xml`) of the model IR.
3. Pass the configuration file to the OpenVINO™ Runtime with the model IR.
## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)
> **NOTE**: The OpenCL compiler, targeting the Intel® Neural Compute Stick 2 SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta* and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written in accordance with OpenCL version 1.2. Custom layers also support the half-float extension and are optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
* `SHAVE_MA2X8XLIBS_DIR=<INSTALL_DIR>/tools/cl_compiler/lib/`
* `SHAVE_LDSCRIPT_DIR=<INSTALL_DIR>/tools/cl_compiler/ldscripts/`
* `SHAVE_MYRIAD_LD_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
* `SHAVE_MOVIASM_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
2. Run the compilation with the command below. You should use `--strip-binary-header` to make an OpenCL runtime-agnostic binary runnable with the OpenVINO™ Runtime.
```bash
cd <INSTALL_DIR>/tools/cl_compiler/bin
./clc --strip-binary-header custom_layer.cl -o custom_layer.bin
```
## Write a Configuration File
To bind the kernel to the topology IR for the layer you customize, prepare a configuration file so that the OpenVINO™ Runtime can find the parameters for your kernel and the description of the execution work grid.
For example, consider the following OpenCL kernel signature:
```cpp
__kernel void reorg_nhwc(__global const half *src, __global half *out, int w, int h, int c, int stride);
```
A configuration file for this kernel might be the following:
```xml
<CustomLayer name="ReorgYolo" type="MVCL" version="1">
<Kernel entry="reorg_nhwc">
<Source filename="reorg.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BYXF"/>
<Tensor arg-name="out" type="output" port-index="0" format="BYXF"/>
<Scalar arg-name="w" type="int" port-index="0" source="I.X" />
<Scalar arg-name="h" type="int" port-index="0" source="I.Y" />
<Scalar arg-name="c" type="int" port-index="0" source="I.F" />
<Scalar arg-name="stride" type="int" source="stride" />
</Parameters>
<WorkSizes dim="input,0" global="(Y+7)/8*8,1,1" local="8,1,1"/>
</CustomLayer>
```
Each custom layer is described with the `CustomLayer` node. It has the following nodes and attributes:
- Root node `CustomLayer` contains the following attributes:
- `name` (Required) The name of the OpenVINO™ Runtime layer to bind the kernel with.
- `type` and `version` (Required) Reserved for future use. Set them to `MVCL` and `1` respectively.
- `max-shaves` (Optional) The maximum number of SHAVE cores that should be dedicated to the layer. It is useful for debugging concurrency issues, or for saving resources when a memory-bound kernel does not scale well with the number of cores, so more resources can be left for the rest of the topology.
- Sub-node `Kernel` must contain the following attributes:
- `entry` The name of your kernel function as you defined it in a source file. In the example above, it is `reorg_nhwc`.
- Node `Source` must contain the following attributes:
- `filename` The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes the local and global work group sizes and the source for dimension deduction as a `direction,port` pair. In the example above, the work group is described relative to the dimension of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expression with +,-,\*,/, and () over `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows you to customize bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding XML.
A parameter description supports `Tensor` nodes of one of the tensor types (`input`, `output`, `input_buffer`, `output_buffer`, or `data`), `Scalar` nodes, or `Data` nodes, and has the following format:
- Each `Tensor` node of `input` or `output` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input` or `output` as specified in the IR.
- `port-index` A number of input/output ports as specified in the IR.
- `format` The channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers. `BFXY`, `BYXF`, and `ANY` formats are supported currently.
- Each `Tensor` node of `input_buffer` or `output_buffer` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` The number of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants, and might be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
<CustomLayer name="MVN" stage="0" type="MVCL" version="1">
<Kernel entry="reduction_mean">
<Source filename="mvn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="mean" type="output_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="variance" type="output_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
<CustomLayer name="MVN" stage="1" type="MVCL" version="1">
<Kernel entry="mvn_scale">
<Source filename="mvn_scale_changed_orded.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Tensor arg-name="mean_part" type="input_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="power_mean" type="input_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the type `data` must contain the following attributes:
- `source` The name of the blob as it appears in the IR. A typical example is `weights` for convolution.
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with the formats of neighboring layers.
```xml
<CustomLayer name="BinaryConvolution" type="MVCL" version="1">
<Kernel entry="binary_convolution">
<Source filename="binary_layers.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Data arg-name="weights_data" type="data" source="weights" format="ANY"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="X,Y,F" local="1,1,1"/>
</CustomLayer>
```
- Each `Scalar` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` `int` or `float` value. It is used for correct argument extraction from IR parameters.
- `source` Contains the name of the parameter in the IR file or input/output (`I`/`O`, `In`/`On`, where `n` is a port number)
followed by dimension `B`(batch), `Y`(height), `X`(width), or `F`(channels).
- Each `Data` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines a buffer allocated in fast local on-chip memory. This memory is limited to 100KB for all `__local` and
`__private` arrays defined inside the kernel, as well as all `__local` parameters passed to the kernel. Note that a manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` The number of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants, and may be extended in the future.
The example binding below illustrates a kernel with two local buffers passed to the kernel.
```xml
<CustomLayer name="GRN" type="MVCL" version="1">
<Kernel entry="grn_NCHW">
<Source filename="grn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Data arg-name="src" type="local_data" dim="input,0" size="X*F*2" />
<Data arg-name="dst" type="local_data" dim="input,0" size="X*F*2" />
<Scalar arg-name="C" type="int" port-index="0" source="I.F" />
<Scalar arg-name="bias" type="float" source="bias" />
</Parameters>
<WorkSizes dim="input,0" global="X,Y,1" local="X,1,1"/>
</CustomLayer>
```
## Pass Configuration File to OpenVINO™ Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Provide a separate configuration file and load it using the `ov::Core::set_property()` method with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@snippet docs/snippets/vpu/custom_op.cpp part0
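A minimal sketch of the same call for the MYRIAD plugin; the device name and file paths are assumptions:
```cpp
#include <openvino/runtime/core.hpp>

int main() {
    ov::Core core;
    // Load the custom layer configuration before compiling the model for MYRIAD
    core.set_property("MYRIAD", {{"CONFIG_FILE", "custom_layer_config.xml"}});
    auto compiled_model = core.compile_model(core.read_model("model.xml"), "MYRIAD");
    return 0;
}
```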
## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL
programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
| Device code | Executed on SHAVE cores |
| Private memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Local memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
Note that, by the OpenCL specification, the work group execution order is not defined. This means that it is your
responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime splits the
work grid evenly among the available compute resources and executes the work groups in an arbitrary order. This static scheduling approach works best if the load is spread evenly across work groups, which is a typical case for deep learning kernels. The following guidelines are recommended for work group partitioning:
1. Split work evenly across work groups.
2. Adjust work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of the topology. It is also useful if the kernel scalability has reached its limits, which may happen while optimizing memory-bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel if it improves work group partitioning or data access patterns.
Consider not just the boost for a specific layer, but the full topology performance, because data conversion layers are inserted automatically
as appropriate.
The offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bias)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism
(SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code
patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help the auto-vectorizer by ensuring non-aliasing pointers for kernel parameters: put `restrict` where possible.
- This can give a performance boost, especially for kernels with unrolling, like `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized code. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the most optimal one, which combines unrolling and `restrict`.
2. Put `#pragma unroll N` in your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to
annotate the code with pragmas as appropriate. The `ocl_grn` version with `#pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay
attention to unrolling such cases first. The unrolling factor is loop-dependent: choose the smallest number that
still improves performance, as an optimum between kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unroll factor of 4 is the optimum. For Intel® Neural Compute Stick 2, unrolling is conjugated with automatic software pipelining for the load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
int x = get_global_id(0);
int W = get_global_size(0);
int y = get_global_id(1);
int H = get_global_size(1);
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);
variance = 1.f / native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, you can compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
int y = get_global_id(1);
int H = get_global_size(1);
for (int x = 0; x < W/8; x++)
{
float8 variance = (float8)(bias+1e-9f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
half8 sh = src_line[x];
variance += convert_float8(sh*sh);
}
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
__global half8* restrict dst_line = ((__global half8 * restrict)(dst_data + c*H*W + y*W));
dst_line[x] = convert_half8(convert_float8(src_line[x])*variance);
}
}
for (int x = W/8*8; x < W; x++)
{
float variance = bias+1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x]*src_data[c*H*W + y*W + x]);
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (float)src_data[c*H*W + y*W + x]*variance;
}
}
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, you can also use the `reqd_work_group_size` kernel attribute to ask the compiler
to unroll the code up to the local size of the work group (see the sketch below). Note that if the kernel is actually executed with a
different work group configuration, the result is undefined.
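A minimal sketch, assuming the kernel is always launched with an 8x1x1 local size:
```cpp
// The attribute promises the compiler a fixed work group shape so it can
// unroll up to the local size; launching with any other shape is undefined.
__attribute__((reqd_work_group_size(8, 1, 1)))
__kernel void scale_row(__global const half* restrict src,
                        __global half* restrict dst,
                        float scale)
{
    int idx = get_global_id(1) * get_global_size(0) + get_global_id(0);
    dst[idx] = convert_half((float)src[idx] * scale);
}
```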
4. Prefer the `half` compute type if it keeps reasonable accuracy. The 16-bit float is a native type for Intel® Neural Compute Stick 2, and most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` functions for the rest of the types.
5. Prefer the `convert_half` function over `vstore_half` if conversion from 32-bit float is required. `convert_half` is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bias);` is eight times slower than the code with `vstore_half`.
6. Mind early exits. An early exit can be extremely costly for the current version of the `clc` compiler due to conflicts with the
auto-vectorizer. The generic advice is to set the local size in the `x` dimension equal to the input and/or output width.
If it is impossible to define a work grid that exactly matches inputs and/or outputs to eliminate checks, for example,
`if (get_global_id(0) >= width) return`, use a line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int stride)
{
int w = get_global_id(0);
int W = get_global_size(0);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This `reorg` kernel is auto-vectorizable, but the input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (`8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare the performance of the auto-vectorized and scalar versions of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about a 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width, for example 32, and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimensions, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
{
int w = get_global_id(0);
w = min(w, W-1);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (scalar) due to branching overhead. If you replace the min/max expression `w = min(w, W-1);` with `if (w >= W) return;`, the runtime increases up to 2x compared to the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c = get_global_id(1);
int C = get_global_size(1);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
for (int w = 0; w < W; ++w)
{
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
}
```
This decreases the execution time by up to 40% compared to the best performing vectorized kernel without early exits (initial version).
7. Reuse computations among work items by using line-based kernels or by sharing values through `__local` memory.
8. Improve data access locality. Most custom kernels are memory bound, while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c2 = get_global_id(1);
int C2 = get_global_size(1);
int C = C2*stride*stride;
int H2 = H*stride;
int W2 = W*stride;
for (int stride_y = 0; stride_y < stride; stride_y++)
for (int stride_x = 0; stride_x < stride; stride_x++)
for (int w2 = 0, w = 0; w < W; w2 += stride, w++)
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops by up to 45% compared to the line-wise version.
9. Copy data from `__global` to `__local` or `__private` memory if the data is accessed more than once. Access to
`__global` memory is orders of magnitude slower than access to `__local`/`__private` memory due to the statically scheduled pipeline, which
stalls completely on memory access without any prefetch. The same recommendation applies to scalar load/store
from/to a `__global` pointer, since work-group copying can be done in a vector fashion (see the sketch below).
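A minimal sketch, assuming one row per work group, a local size of `(W, 1, 1)`, and `W <= 512` so the staged row fits the CMX budget:
```cpp
__kernel void smooth_row(__global const half* restrict src,
                         __global half* restrict dst,
                         int W)
{
    // Stage the row once into fast __local memory; every work item reads
    // three neighboring elements, so the copy pays for itself.
    __local half row[512];
    event_t e = async_work_group_copy(row, src + get_group_id(1) * W, W, 0);
    wait_group_events(1, &e);

    int x = get_local_id(0);
    if (x > 0 && x < W - 1)
        dst[get_group_id(1) * W + x] =
            (half)(((float)row[x - 1] + (float)row[x] + (float)row[x + 1]) / 3.0f);
}
```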
10. Use the manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting from OpenVINO™ 2020.1, VPU OpenCL features a manual-DMA kernel extension to copy the sub-tensor used by a work group into local memory and to perform the computation without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size is (width of the input tensor, 1, 1), to define a work group large enough to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
{
float val = (float) src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
dst_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)]
= src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)] * hvariance;
}
}
```
This kernel can be rewritten to introduce the special data binding intrinsics `__dma_preload` and `__dma_postwrite`. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. You can define either of these functions, which are intended to copy data between `__global` and `__local` memory. The syntax requires an exact function signature match. The example below illustrates how to prepare your kernel for manual DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the required piece of the src tensor into local_src
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the computed piece of local_dst back into dst
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
// same as the example above
}
```
The GRN kernel operates on channel-major tensors to compute the average over the full channel range and then normalizes the input elements to produce the output.
As a part of the manual DMA extension, a group of work group copy functions is introduced in addition to `async_work_group_copy`, which is also mapped to a DMA call.
Here is the list of supported functions:
```cpp
// 2D sub-tensor copy
event_t WorkGroupDmaCreateStrideTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreateStrideTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
// 3D sub-tensor copy
event_t WorkGroupDmaCreate3DTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreate3DTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
Modified version of the GRN kernel could be the following:
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
src + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // src
local_src, // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_global_size(0) * sizeof(half), // src stride
get_local_size(0) * sizeof(half), // dst stride
C, // num planes
get_global_size(0) * get_global_size(1) * sizeof(half), // src plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
local_dst, // src
dst + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_local_size(0) * sizeof(half), // src stride
get_global_size(0) * sizeof(half), // dst stride
C, // num planes
get_local_size(0) * get_local_size(1) * sizeof(half), // src plane stride
get_global_size(0) * get_global_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 8
for (int c = 0; c < C; c++)
{
float val = (float) src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 8
for (int c = 0; c < C; c++)
{
dst[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)]
= src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)] * hvariance;
}
}
```
Note the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for this kernel on the enet-curbs setup, because it was previously completely memory-bound.
An alternative to DMA is the work-item copy extension. These functions are executed inside a kernel and require the work group to consist of a single work item.
Here is the list of supported work-item functions (a usage sketch follows below):
```cpp
item_dma_event_t WorkItemDmaCreateTransaction(
const global T *src,
private T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateTransaction(
const private T *src,
global T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
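For illustration, here is a minimal usage sketch of the work-item copy extension. The kernel body is hypothetical: the `ROW` macro and the `WaitWorkItemDmaEvents` completion call are assumptions (the excerpt above does not show the wait primitive), so adapt them to what your toolchain documents.
```cpp
#define ROW 256  // illustrative line length, in elements

__kernel void scale_row(__global const half* restrict src,
                        __global half* restrict dst,
                        half factor)
{
    // Requires the kernel to be enqueued with one work item per work group.
    __private half row[ROW];

    // DMA the row from global memory into fast private memory.
    item_dma_event_t e = WorkItemDmaCreateTransaction(
        src + get_global_id(0) * ROW, row, ROW * sizeof(half), 0);
    WaitWorkItemDmaEvents(1, &e);  // assumed completion primitive

    for (int i = 0; i < ROW; i++)
        row[i] *= factor;

    // DMA the processed row back to global memory.
    e = WorkItemDmaCreateTransaction(
        row, dst + get_global_id(0) * ROW, ROW * sizeof(half), 0);
    WaitWorkItemDmaEvents(1, &e);  // assumed completion primitive
}
```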

View File

@@ -1,4 +1,4 @@
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box.
@@ -20,14 +20,10 @@ Follow the steps below to add a custom operation:
5. Override the `visit_attributes` method, which enables serialization and deserialization of operation attributes. An `AttributeVisitor` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware `on_attribute` helper. Helpers are already implemented for standard C++ types like `int64_t`, `float`, `bool`, `vector`, and for existing OpenVINO defined types.
6. Override `evaluate`, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains `evaluate` method you also need to override the `has_evaluate` method, this method allow to get information about availability of `evaluate` method for the operation.
7. Add the `OPENVINO_FRAMEWORK_MAP` macro if you want to map custom operation to framework operation with the same name. It is an optional macro which can be used for one to one mapping. In order to use this macro please include frontend specific headers:
@snippet template_extension/new/identity.hpp op:frontend_include
6. Override `evaluate`, an optional method that enables some devices to fall back to this implementation and allows constant folding to be applied when a custom operation sits on a constant branch. If your operation contains an `evaluate` method, you also need to override the `has_evaluate` method, which reports whether `evaluate` is available for the operation.
Based on that, declaration of an operation class can look as follows:
@snippet template_extension/new/identity.hpp op:header
### Operation Constructors
@@ -55,8 +51,9 @@ OpenVINO™ operation contains two constructors:
@snippet template_extension/new/identity.cpp op:visit_attributes
### `evaluate()` and `has_evaluate()`
### evaluate() and has_evaluate()
The `ov::Node::evaluate` method enables you to apply constant folding to an operation.
@snippet template_extension/new/identity.cpp op:evaluate
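For orientation, a minimal sketch of what such an `evaluate`/`has_evaluate` pair can look like for an identity-style operation is shown below; it assumes the `ov::Tensor`-based signatures, and the authoritative implementation is the snippet above.
```cpp
#include <cstring>

bool TemplateExtension::Identity::evaluate(ov::TensorVector& outputs,
                                           const ov::TensorVector& inputs) const {
    // Identity simply copies the input tensor into the output tensor.
    const auto& in = inputs[0];
    auto& out = outputs[0];
    out.set_shape(in.get_shape());
    std::memcpy(out.data(), in.data(), in.get_byte_size());
    return true;
}

bool TemplateExtension::Identity::has_evaluate() const {
    // Report that the fallback implementation above is available.
    return true;
}
```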

View File

@@ -0,0 +1,105 @@
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
The goal of this chapter is to explain how to use Frontend extension classes to facilitate the mapping of custom operations from the framework model representation to the OpenVINO representation. Refer to [Introduction to OpenVINO Extension](Intro.md) to understand the entire flow.
This API is applicable only to the new frontends, which currently exist for ONNX and PaddlePaddle. If a different model format is used, follow the legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on a minimalistic `Identity` operation that serves as a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
## Single Operation Mapping with OpExtension
This section covers the case when a single operation in the framework representation is mapped to a single operation in the OpenVINO representation. This is called *one-to-one mapping*. The `OpExtension` class works well if all of the following conditions are satisfied:
1. The number of inputs to the operation is the same in the framework and OpenVINO representations.
2. The number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order, e.g. the input with index 0 in the framework representation maps to the input with index 0 in the OpenVINO representation, and so on.
4. The same holds for outputs.
5. Each attribute of the OpenVINO operation can be initialized either from one of the attributes of the original operation or from a predefined constant value. The values of copied attributes cannot contain expressions; a value is accepted as-is, so its type should be compatible.
> **NOTE**: The `OpExtension` class is currently available for the ONNX frontend only. The PaddlePaddle frontend has named (not indexed) inputs and outputs for operations, therefore `OpExtension` mapping is not applicable in this case.
The next example maps the ONNX operation of type [“Identity”](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) to the OpenVINO template extension `Identity` class.
@snippet ov_extensions.cpp frontend_extension_Identity_header
@snippet ov_extensions.cpp frontend_extension_Identity
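For orientation, the registration referenced by these snippets boils down to something of the following shape (a sketch; the authoritative code is the `ov_extensions.cpp` snippet, the include paths are indicative, and the model file name is hypothetical):
```cpp
#include <openvino/frontend/extension.hpp>
#include <openvino/runtime/core.hpp>

#include "identity.hpp"  // TemplateExtension::Identity

int main() {
    ov::Core core;
    // Map the ONNX "Identity" operation type to the custom OpenVINO operation class.
    core.add_extension(ov::frontend::OpExtension<TemplateExtension::Identity>("Identity"));
    // Hypothetical model containing the custom operation.
    auto model = core.read_model("model_with_identity.onnx");
    return 0;
}
```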
The mapping doesn't involve any attributes, as the Identity operation doesn't have any.
Extension objects, such as the just-constructed `extension`, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
@snippet ov_extensions.cpp frontend_extension_read_model
Alternatively, extensions can be built into a separately compiled shared library, which can then be used in Model Optimizer or `benchmark_app`. Read how to build and load such a library in the “Create library with extensions” chapter of [Introduction to OpenVINO Extension](Intro.md).
If an operation has multiple inputs and/or outputs, they are mapped in order. The element types of input/output tensors should match the types expected by the surrounding operations. For example, if a custom operation produces the `f32` data type, the operation that consumes this output should also support `f32`. Otherwise, model conversion fails with an error; no automatic type conversion happens.
### Converting to Standard OpenVINO Operation
The `OpExtension` class can also be used when you need to map to one of the operations from the standard OpenVINO operation set and no class like `TemplateExtension::Identity` is implemented.
Here is an example for a custom framework operation “MyRelu”. Suppose it is mathematically equivalent to the standard `Relu` from the OpenVINO operation set but, for some reason, has the type name “MyRelu”. In this case you can directly specify that the “MyRelu” -> `Relu` mapping should be used:
@snippet ov_extensions.cpp frontend_extension_MyRelu
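The mapping referenced by the snippet is essentially a one-liner of the following shape (a sketch extending the registration example above; the `OpExtension<>` form without a template argument targets the standard operation set):
```cpp
// "MyRelu" in the framework model becomes the standard Relu from the latest opset.
core.add_extension(ov::frontend::OpExtension<>("Relu", "MyRelu"));
```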
In the resulting converted OpenVINO model, the “MyRelu” operation is replaced by the standard `Relu` operation from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified with just a type string (“Relu”) instead of the `ov::opset8::Relu` class name as a template parameter of `OpExtension`. This method is available for operations from the standard operation set only. For a user's custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as was demonstrated with `TemplateExtension::Identity`.
### Attributes Mapping
As described above, `OpExtension` is useful when attributes can be mapped one by one or initialized by a constant. If the sets of attributes in the framework and OpenVINO representations completely match by name and type, nothing needs to be specified in the `OpExtension` constructor parameters. The attributes are discovered and mapped automatically based on the `visit_attributes` method, which should be defined for any OpenVINO operation.
Imagine you have a `CustomOperation` class implementation with two attributes, named `attr1` and `attr2`:
@snippet ov_extensions.cpp frontend_extension_CustomOperation
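For context, such a class would expose its attributes through `visit_attributes`, roughly as in this sketch inside the class body (the attribute types are illustrative):
```cpp
bool visit_attributes(ov::AttributeVisitor& visitor) override {
    // Registering attributes by name is what makes automatic mapping possible.
    visitor.on_attribute("attr1", attr1);  // e.g. std::string attr1;
    visitor.on_attribute("attr2", attr2);  // e.g. int64_t attr2;
    return true;
}
```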
And the original model in framework representation also has an operation named “CustomOperation” with the same `attr1` and `attr2` attributes. Then, with the following code:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_as_is
both `attr1` and `attr2` are copied from the framework representation to the OpenVINO representation automatically. If for some reason the attribute names differ but the values can still be copied “as-is”, you can pass an attribute name mapping in the `OpExtension` constructor:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename
where `fw_attr1` and `fw_attr2` are the names of the corresponding attributes in the framework operation representation.
If copying an attribute is not what you need, `OpExtension` can also set an attribute to a predefined constant value. For the same `CustomOperation`, imagine you want to set `attr2` to the value 5 instead of copying it from `fw_attr2`. To achieve that, do the following:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename_set
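Put together, the two constructor forms referenced by these snippets look roughly as follows (a sketch extending the registration example above; the exact code is in `ov_extensions.cpp`):
```cpp
// Names differ between the representations: map fw_attr1 -> attr1, fw_attr2 -> attr2.
core.add_extension(ov::frontend::OpExtension<CustomOperation>(
    {{"attr1", "fw_attr1"}, {"attr2", "fw_attr2"}},
    {}));

// Copy attr1 from fw_attr1, but pin attr2 to the constant value 5.
core.add_extension(ov::frontend::OpExtension<CustomOperation>(
    {{"attr1", "fw_attr1"}},
    {{"attr2", 5}}));
```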
In conclusion, each attribute of the target OpenVINO operation should be initialized in one of three ways:
1. Automatically, by attribute name matching
2. By an explicit attribute name mapping
3. By a constant value
This is achieved by specifying maps as arguments for the `OpExtension` constructor.
## Mapping to Multiple Operations with ConversionExtension
The previous sections cover the case when a single operation is mapped to a single operation, with optional adjustments to names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case, the framework representation and the OpenVINO representation of the operation are under your control, and inputs/outputs/attributes can be aligned to make `OpExtension` usable.
If one-to-one mapping is not possible, *decomposition into multiple operations* should be considered. This is achieved with the more verbose and less automated `ConversionExtension` class, which enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO operations, forming a dependency graph of any complexity.
`ConversionExtension` maps a single operation to a function that builds a graph using OpenVINO operation classes. Follow the chapter [Build a Model in OpenVINO Runtime](@ref ov_ug_build_model) to learn how to use OpenVINO operation classes to build a model fragment for replacement.
The next example illustrates using `ConversionExtension` to convert “ThresholdedRelu” from ONNX according to the formula: `ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))`.
> **NOTE**: `ThresholdedRelu` is one of the standard ONNX operators, supported by the ONNX frontend natively out-of-the-box. Here we re-implement it to illustrate how you can add similar support for your own custom operation instead of `ThresholdedRelu`.
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU_header
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU
To access the attribute values and inputs of the original framework operation, the `node` object of type `NodeContext` is used. It has two main methods:
* `NodeContext::get_input` to get an input with a given index,
* `NodeContext::get_attribute` to get an attribute value by name.
The conversion function should return a vector of node outputs that map to the corresponding outputs of the original framework operation, in the same order.
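For orientation, the conversion referenced by the snippets can be sketched as follows (a fragment extending the registration example above, assuming `ov::opset8` operation classes; the authoritative code is in `ov_extensions.cpp`):
```cpp
#include <openvino/frontend/extension.hpp>
#include <openvino/opsets/opset8.hpp>

core.add_extension(ov::frontend::ConversionExtension(
    "ThresholdedRelu",
    [](const ov::frontend::NodeContext& node) {
        // Greater(x, alpha): a boolean mask of elements above the threshold.
        auto greater = std::make_shared<ov::opset8::Greater>(
            node.get_input(0),
            ov::opset8::Constant::create(ov::element::f32, {},
                                         {node.get_attribute<float>("alpha")}));
        // Convert the mask to f32 and multiply it element-wise with x.
        auto casted = std::make_shared<ov::opset8::Convert>(greater, ov::element::f32);
        return ov::OutputVector{
            std::make_shared<ov::opset8::Multiply>(node.get_input(0), casted)};
    }));
```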

View File

@@ -37,7 +37,7 @@ The implementation `CompileNetwork` is fully device-specific.
The function accepts a const shared pointer to an `ngraph::Function` object and performs the following steps (a sketch follows the list):
1. Applies ngraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_IE_DG_lpt) guide.
1. Applies ngraph passes using the `TransformNetwork` function, which defines the plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find out how to use and configure Low Precision Transformations in the [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend-specific graph representation (for example, to an MKLDNN graph for Intel CPU).
3. Allocates and fills memory for graph weights, backend-specific memory handles, and so on.
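A minimal sketch of step 1, assuming the standard `ngraph::pass::Manager` machinery (the registered passes are illustrative, not the template plugin's exact pipeline):
```cpp
#include <ngraph/ngraph.hpp>
#include <ngraph/pass/manager.hpp>
#include <transformations/common_optimizations/common_optimizations.hpp>

std::shared_ptr<ngraph::Function> TransformNetwork(
        const std::shared_ptr<const ngraph::Function>& function) {
    // Clone so the original function stays untouched.
    auto transformed = ngraph::clone_function(*function);
    ngraph::pass::Manager manager;
    // Common transformations first; plugin-specific (and optionally LPT) passes follow.
    manager.register_pass<ngraph::pass::CommonOptimizations>();
    manager.run_passes(transformed);
    return transformed;
}
```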

View File

@@ -9,11 +9,12 @@
Implement Plugin Functionality <openvino_docs_ie_plugin_dg_plugin>
Implement Executable Network Functionality <openvino_docs_ie_plugin_dg_executable_network>
openvino_docs_ie_plugin_dg_quantized_networks
Implement Synchronous Inference Request <openvino_docs_ie_plugin_dg_infer_request>
Implement Asynchronous Inference Request <openvino_docs_ie_plugin_dg_async_infer_request>
openvino_docs_ie_plugin_dg_plugin_build
openvino_docs_ie_plugin_dg_plugin_testing
openvino_docs_ie_plugin_detailed_guides
openvino_docs_ie_plugin_api_references
@endsphinxdirective
@@ -55,11 +56,11 @@ Detailed guides
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake\*
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_IE_DG_lpt) guide
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
API References
-----------------------
* [Inference Engine Plugin API](groupie_dev_api.html)
* [Inference Engine Transformation API](groupie_transformation_api.html)
* [Inference Engine Plugin API](@ref ie_dev_api)
* [Inference Engine Transformation API](@ref ie_transformation_api)

View File

@@ -9,7 +9,7 @@ For more details about low-precision model representation please refer to this [
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations.
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.
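As a reference point, the element-wise FakeQuantize semantics can be sketched as the following scalar function, which follows the operation's documented formula (a sketch, not OpenVINO code; it assumes `in_low <= in_high`):
```cpp
#include <cmath>

// Scalar reference for FakeQuantize: clamps x to [in_low, in_high] and maps it
// onto one of `levels` evenly spaced values in [out_low, out_high].
float fake_quantize(float x, float in_low, float in_high,
                    float out_low, float out_high, int levels) {
    if (x <= in_low) return out_low;
    if (x > in_high) return out_high;
    const float q = std::round((x - in_low) / (in_high - in_low) * (levels - 1));
    return q / (levels - 1) * (out_high - out_low) + out_low;
}
```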

View File

@@ -0,0 +1,18 @@
# Advanced Topics {#openvino_docs_ie_plugin_detailed_guides}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_ie_plugin_dg_quantized_networks
openvino_docs_OV_UG_lpt
@endsphinxdirective
The guides below provide extra information about specific OpenVINO features that you need to understand for OpenVINO plugin development:
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide

View File

@@ -0,0 +1,17 @@
# Plugin API Reference {#openvino_docs_ie_plugin_api_references}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
../groupie_dev_api
../groupie_transformation_api
@endsphinxdirective
The guides below provide extra API references needed for OpenVINO plugin development:
* [OpenVINO Plugin API](@ref ie_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)

View File

@@ -5,74 +5,74 @@
<tab type="usergroup" url="index.html" title="Developer Guide for Inference Engine Plugin Library">
<tab type="user" url="@ref plugin" visibile="yes" title="Implement Plugin Functionality"/>
<tab type="user" url="@ref executable_network" visibile="yes" title="Implement Executable Network Functionality">
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_IE_DG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_IE_DG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_IE_DG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_IE_DG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_IE_DG_lpt_QuantizationAlignment"/>
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_OV_UG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_OV_UG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_OV_UG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_OV_UG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_OV_UG_lpt_QuantizationAlignment"/>
</tab>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_IE_DG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization"/>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_OV_UG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization"/>
</tab>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_IE_DG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_IE_DG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_IE_DG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_IE_DG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_IE_DG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved"/>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_OV_UG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_OV_UG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_OV_UG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_OV_UG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_OV_UG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved"/>
</tab>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_IE_DG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_IE_DG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_IE_DG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_IE_DG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_IE_DG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_IE_DG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_IE_DG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_IE_DG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_IE_DG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_IE_DG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_IE_DG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_IE_DG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_IE_DG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation"/>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_OV_UG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_OV_UG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_OV_UG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_OV_UG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_OV_UG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_OV_UG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_OV_UG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_OV_UG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_OV_UG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_OV_UG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_OV_UG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_OV_UG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_OV_UG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation"/>
</tab>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_IE_DG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation"/>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_OV_UG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation"/>
</tab>
</tab>
</tab>

View File

@@ -1,17 +0,0 @@
# Plugin Transformation Pipeline {#openvino_docs_IE_DG_plugin_transformation_pipeline}
@sphinxdirective
.. toctree::
:maxdepth: 1
:caption: Executable Network
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
@endsphinxdirective
Typical plugin transformation pipeline includes steps:
1. Common transformations
2. [Low precision transformations](@ref openvino_docs_IE_DG_lpt)
3. Plugin specific transformations

View File

@@ -1,4 +1,4 @@
# AvgPoolPrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved}
# AvgPoolPrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
ngraph::AvgPoolPrecisionPreservedAttribute class represents the `AvgPoolPrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# IntervalsAlignment attribute {#openvino_docs_IE_DG_lpt_IntervalsAlignment}
# IntervalsAlignment attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
ngraph::IntervalsAlignmentAttribute class represents the `IntervalsAlignment` attribute.

View File

@@ -1,4 +1,4 @@
# PerTensorQuantization attribute {#openvino_docs_IE_DG_lpt_PerTensorQuantization}
# PerTensorQuantization attribute {#openvino_docs_OV_UG_lpt_PerTensorQuantization}
ngraph::PerTensorQuantizationAttribute class represents the `PerTensorQuantization` attribute.

View File

@@ -1,4 +1,4 @@
# PrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_PrecisionPreserved}
# PrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
ngraph::PrecisionPreservedAttribute class represents the `PrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# Precisions attribute {#openvino_docs_IE_DG_lpt_Precisions}
# Precisions attribute {#openvino_docs_OV_UG_lpt_Precisions}
ngraph::PrecisionsAttribute class represents the `Precisions` attribute.

View File

@@ -1,4 +1,4 @@
# QuantizationAlignment attribute {#openvino_docs_IE_DG_lpt_QuantizationAlignment}
# QuantizationAlignment attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
ngraph::QuantizationAlignmentAttribute class represents the `QuantizationAlignment` attribute.

View File

@@ -1,4 +1,4 @@
# OpenVINO™ Low Precision Transformations {#openvino_docs_IE_DG_lpt}
# OpenVINO™ Low Precision Transformations {#openvino_docs_OV_UG_lpt}
@sphinxdirective
@@ -7,13 +7,13 @@
:caption: Low Precision Transformations
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
Low Precision Transformations <openvino_docs_OV_UG_lpt>
Attributes <openvino_docs_IE_DG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_IE_DG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_IE_DG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_IE_DG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_IE_DG_lpt_step4_cleanup>
Attributes <openvino_docs_OV_UG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_OV_UG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_OV_UG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_OV_UG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>
@endsphinxdirective
@@ -72,11 +72,7 @@ For example, if you would like to infer a model with `Convolution` operation in
> There are several supported quantization approaches, on activations and on weights. All supported approaches are described in the [Quantization approaches](#quantization-approaches) section below. The demonstrated model uses the [FakeQuantize operation quantization](#fakequantize-operation) approach.
### Low precision tools
There are two tools to quantize a model:
1. [Post-Training Optimization Toolkit](@ref pot_docs_LowPrecisionOptimizationGuide) (POT)
2. [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf) (NNCF)
Additionally, low precision transformations can handle ONNX quantized models.
For more details on how to get a quantized model, refer to [Model Optimization](@ref openvino_docs_model_optimization_guide) document.
## Quantization approaches
LPT transformations support two quantization approaches:
@@ -115,63 +111,63 @@ Inside each step LPT transformations handle input model operation by operation,
As a result, all operations are usually inferred by the plugin in low precision. If the plugin does not support inference of an operation in low precision, the corresponding LPT transformation can be disabled, and the input tensor precisions for that operation will not be changed. In this case the operation is inferred in the original precision.
The low precision transformations pipeline includes four steps:
* [Step #1: Prerequisites](@ref openvino_docs_IE_DG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup)
* [Step #1: Prerequisites](@ref openvino_docs_OV_UG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup)
### Step 1. Prerequisites
This step fuses and propagates some operations in the model to prepare for the next step. It is required for OpenVINO plugins. Transformations:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)
The model on this step is changed. There are more details in developer guide [Prerequisites transformations](@ref openvino_docs_IE_DG_lpt_step1_prerequisites).
The model is changed in this step. More details are available in the [Prerequisites transformations](@ref openvino_docs_OV_UG_lpt_step1_prerequisites) developer guide.
### Step 2. Markup
This step creates runtime attributes for operations. These attributes will be used in the next step. Transformations:
* [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
* [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
The model on this step is changed: only new attributes are added to some operations. There are more details in developer guide [Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup).
The model is changed in this step: only new attributes are added to some operations. More details are available in the [Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup) developer guide.
### Step 3. Main transformations, FakeQuantize decomposition and dequantization operations handling
This step has the most transformations. These transformations can be separated in two groups: decomposition transformation and dequantization operations handling. There are more details in developer guide [Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main). Transformations:
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
This step contains the most transformations. They can be separated into two groups: decomposition transformations and dequantization operations handling. More details are available in the [Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main) developer guide. Transformations:
* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
#### Decomposition transformations
Decomposition transformations decompose the `FakeQuantize` operation into: quantize (`FakeQuantize` with low-precision output) and dequantization operations (the opposite of quantize, with low-precision input and the original precision output). For dequantization, LPT uses three operations: `Convert`, `Subtract` and `Multiply`. The element-wise operations `Subtract` and `Multiply` have constants on their second branches. If dequantization operations are not handled at the end of the LPT pipeline, they will be fused back into the `FakeQuantize`.
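Schematically, with illustrative names, the resulting pair of sub-graphs looks like this:
```
x_q = FakeQuantize(x)                                 // quantize: low-precision output
y   = Multiply(Subtract(Convert(x_q), shift), scale)  // dequantize: Convert -> Subtract -> Multiply,
                                                      // with shift and scale as constants
```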
@@ -197,14 +193,14 @@ Original `Convolution` operation in FP32 with dequantization operations before:
### Step 4: Cleanup of the result model
LPT cleanup transformations are the final stage in the LPT pipeline. In this step, LPT transformations clean up the resulting model to avoid unhandled dequantization operations: they fuse dequantization operations into other model operations where possible (fusing at least the `Convert` operations if not). Transformations:
* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)
There are more details in developer guide [Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup).
More details are available in the [Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup) developer guide.
A `FakeQuantize` operation with unhandled dequantization operations:
![TODO: FakeQuantize operation with dequantization operations before LPT](quantization/img/fq.transformed.png)
@@ -236,11 +232,11 @@ This step is optional. It modifies the nGraph function to a device-specific oper
Let's explore the quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model. Use the [Model Downloader](@ref omz_tools_downloader) tool to download the `fp16` model from the [OpenVINO™ Toolkit - Open Model Zoo repository](https://github.com/openvinotoolkit/open_model_zoo):
```sh
./downloader.py --name resnet-50-tf --precisions FP16-INT8
omz_downloader --name resnet-50-tf --precisions FP16-INT8
```
After that, quantize the model with the [Model Quantizer](@ref omz_tools_downloader) tool.
```sh
./quantizer.py --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
```
### Inference
@@ -259,7 +255,7 @@ Result model depends on different factors:
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |

View File

@@ -1,4 +1,4 @@
# Attributes {#openvino_docs_IE_DG_lpt_attributes}
# Attributes {#openvino_docs_OV_UG_lpt_attributes}
@sphinxdirective
@@ -7,12 +7,12 @@
:caption: Attributes
:hidden:
AvgPoolPrecisionPreserved <openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_IE_DG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_IE_DG_lpt_PerTensorQuantization>
PrecisionPreserved <openvino_docs_IE_DG_lpt_PrecisionPreserved>
Precisions <openvino_docs_IE_DG_lpt_Precisions>
QuantizationAlignment <openvino_docs_IE_DG_lpt_QuantizationAlignment>
AvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_OV_UG_lpt_PerTensorQuantization>
PrecisionPreserved <openvino_docs_OV_UG_lpt_PrecisionPreserved>
Precisions <openvino_docs_OV_UG_lpt_Precisions>
QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>
@endsphinxdirective
@@ -20,12 +20,12 @@
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_IE_DG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_IE_DG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_IE_DG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_IE_DG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
| [AvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_OV_UG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_OV_UG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_OV_UG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_OV_UG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
> The `Target` attribute group defines attribute usage during model transformation for the best performance:
> - `Precision` - the attribute defines the most optimal output port precision.

View File

@@ -1,6 +1,6 @@
# Step 1. Prerequisites Transformations {#openvino_docs_IE_DG_lpt_step1_prerequisites}
# Step 1. Prerequisites Transformations {#openvino_docs_OV_UG_lpt_step1_prerequisites}
Prerequisites transformations are optional. The transformations prepare a model before running other low precision transformations. The transformations do not operate with dequantization operations or update precisions. Prerequisites transformations include:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)

View File

@@ -1,14 +1,14 @@
# Step 2. Markup Transformations {#openvino_docs_IE_DG_lpt_step2_markup}
# Step 2. Markup Transformations {#openvino_docs_OV_UG_lpt_step2_markup}
This step defines the optimal `FakeQuantize` decomposition precisions for the best inference performance by marking up operations with runtime attribute instances. Attributes are created for input and output ports and for operations. Transformations do not change the operation output port precisions. The model markup low-precision logic is decomposed into and implemented by the following common markup transformations. The order of the transformations is important:
1. [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
1. [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
The table of transformations and used attributes:
@@ -25,11 +25,11 @@ The table of transformations and used attributes:
> **Note:** the same type of attribute instance can be created in different transformations. This approach is a result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in both the `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation differ.
Common markup transformations can be decomposed into simpler utility markup transformations. The order of the markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_IE_DG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_IE_DG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved)
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_OV_UG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved)
Let's explore all transformations and their relations in detail, using the same model throughout:

View File

@@ -1,36 +1,36 @@
# Step 3. Main Transformations {#openvino_docs_IE_DG_lpt_step3_main}
# Step 3. Main Transformations {#openvino_docs_OV_UG_lpt_step3_main}
Main transformations make up the majority of low precision transformations; they operate on dequantization operations. Main transformations include (a pipeline sketch follows the list):
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
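
In practice, the main transformations are rarely registered one by one: the `ngraph::pass::low_precision::LowPrecision` pass drives the prerequisites, markup, main, and cleanup steps together. A minimal sketch with default parameters, assuming the `low_precision/low_precision.hpp` header location:

```cpp
#include <memory>
#include <ngraph/function.hpp>
#include <ngraph/pass/manager.hpp>
#include <low_precision/low_precision.hpp>

// Run the whole LPT pipeline (prerequisites, markup, main, and cleanup steps)
// with default precision restrictions and transformation parameters.
void run_lpt(const std::shared_ptr<ngraph::Function>& model) {
    ngraph::pass::Manager manager;
    manager.register_pass<ngraph::pass::low_precision::LowPrecision>();
    manager.run_passes(model);
}
```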
Let's explore some of the main transformations using an example model. Original model:

View File

@@ -1,8 +1,8 @@
# Step 4. Cleanup Transformations {#openvino_docs_IE_DG_lpt_step4_cleanup}
# Step 4. Cleanup Transformations {#openvino_docs_OV_UG_lpt_step4_cleanup}
* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)
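
Like the earlier steps, the cleanup passes can be registered on a pass manager. A minimal sketch, with the header names below and the default transformation parameters assumed:

```cpp
#include <memory>
#include <ngraph/function.hpp>
#include <ngraph/pass/manager.hpp>
#include <low_precision/fold_convert.hpp>
#include <low_precision/fold_fake_quantize.hpp>
#include <low_precision/fuse_convert.hpp>
#include <low_precision/fuse_multiply_to_fake_quantize.hpp>
#include <low_precision/fuse_subtract_to_fake_quantize.hpp>
#include <low_precision/multiply_to_group_convolution.hpp>

// Cleanup passes fold and fuse the dequantization leftovers produced
// by the main transformations.
void run_cleanup(const std::shared_ptr<ngraph::Function>& model) {
    ngraph::pass::Manager cleanup;
    cleanup.register_pass<ngraph::pass::low_precision::FoldConvertTransformation>();
    cleanup.register_pass<ngraph::pass::low_precision::FoldFakeQuantizeTransformation>();
    cleanup.register_pass<ngraph::pass::low_precision::FuseConvertTransformation>();
    cleanup.register_pass<ngraph::pass::low_precision::FuseMultiplyToFakeQuantizeTransformation>();
    cleanup.register_pass<ngraph::pass::low_precision::FuseSubtractToFakeQuantizeTransformation>();
    cleanup.register_pass<ngraph::pass::low_precision::MultiplyToGroupConvolutionTransformation>();
    cleanup.run_passes(model);
}
```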

View File

@@ -1,3 +1,3 @@
# ConvertSubtractConstant transformation {#openvino_docs_IE_DG_lpt_ConvertSubtractConstant}
# ConvertSubtractConstant transformation {#openvino_docs_OV_UG_lpt_ConvertSubtractConstant}
ngraph::pass::low_precision::ConvertSubtractConstant class represents the `ConvertSubtractConstant` transformation.

View File

@@ -1,4 +1,4 @@
# LinOpSequenceFusion transformation {#openvino_docs_IE_DG_lpt_LinOpSequenceFusion}
# LinOpSequenceFusion transformation {#openvino_docs_OV_UG_lpt_LinOpSequenceFusion}
ngraph::pass::LinOpSequenceFusion class represents the `LinOpSequenceFusion` transformation.

View File

@@ -1,3 +1,3 @@
# PullReshapeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization}
# PullReshapeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization}
ngraph::pass::low_precision::PullReshapeThroughDequantization class represents the `PullReshapeThroughDequantization` transformation.

View File

@@ -1,3 +1,3 @@
# PullTransposeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization}
# PullTransposeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization}
ngraph::pass::low_precision::PullTransposeThroughDequantization class represents the `PullTransposeThroughDequantization` transformation.

View File

@@ -1,3 +1,3 @@
# AlignQuantizationIntervals transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationIntervals}
# AlignQuantizationIntervals transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationIntervals}
ngraph::pass::low_precision::AlignQuantizationIntervals class represents the `AlignQuantizationIntervals` transformation.

View File

@@ -1,3 +1,3 @@
# AlignQuantizationParameters transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationParameters}
# AlignQuantizationParameters transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationParameters}
ngraph::pass::low_precision::AlignQuantizationParameters class represents the `AlignQuantizationParameters` transformation.

View File

@@ -1,3 +1,3 @@
# CreateAttribute transformation {#openvino_docs_IE_DG_lpt_CreateAttribute}
# CreateAttribute transformation {#openvino_docs_OV_UG_lpt_CreateAttribute}
ngraph::pass::low_precision::CreateAttribute class represents the `CreateAttribute` transformation.

View File

@@ -1,3 +1,3 @@
# CreatePrecisionsDependentAttribute transformation {#openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute}
# CreatePrecisionsDependentAttribute transformation {#openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute}
ngraph::pass::low_precision::CreatePrecisionsDependentAttribute class represents the `CreatePrecisionsDependentAttribute` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved}
# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved}
ngraph::pass::low_precision::MarkupAvgPoolPrecisionPreserved class represents the `MarkupAvgPoolPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupCanBeQuantized transformation {#openvino_docs_IE_DG_lpt_MarkupCanBeQuantized}
# MarkupCanBeQuantized transformation {#openvino_docs_OV_UG_lpt_MarkupCanBeQuantized}
ngraph::pass::low_precision::MarkupCanBeQuantized class represents the `MarkupCanBeQuantized` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupPerTensorQuantization transformation {#openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization}
# MarkupPerTensorQuantization transformation {#openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization}
ngraph::pass::low_precision::MarkupPerTensorQuantization class represents the `MarkupPerTensorQuantization` transformation.

View File

@@ -1,3 +1,3 @@
# MarkupPrecisions transformation {#openvino_docs_IE_DG_lpt_MarkupPrecisions}
# MarkupPrecisions transformation {#openvino_docs_OV_UG_lpt_MarkupPrecisions}
ngraph::pass::low_precision::MarkupPrecisions class represents the `MarkupPrecisions` transformation.

View File

@@ -1,3 +1,3 @@
# PropagatePrecisions transformation {#openvino_docs_IE_DG_lpt_PropagatePrecisions}
# PropagatePrecisions transformation {#openvino_docs_OV_UG_lpt_PropagatePrecisions}
ngraph::pass::low_precision::PropagatePrecisions class represents the `PropagatePrecisions` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateSharedValue transformation {#openvino_docs_IE_DG_lpt_PropagateSharedValue}
# PropagateSharedValue transformation {#openvino_docs_OV_UG_lpt_PropagateSharedValue}
ngraph::pass::low_precision::PropagateSharedValue class represents the `PropagateSharedValue` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateThroughPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved}
# PropagateThroughPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved}
ngraph::pass::low_precision::PropagateThroughPrecisionPreserved class represents the `PropagateThroughPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# PropagateToInput transformation {#openvino_docs_IE_DG_lpt_PropagateToInput}
# PropagateToInput transformation {#openvino_docs_OV_UG_lpt_PropagateToInput}
ngraph::pass::low_precision::PropagateToInput class represents the `PropagateToInput` transformation.

View File

@@ -1,3 +1,3 @@
# UpdateSharedPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved}
# UpdateSharedPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved}
ngraph::pass::low_precision::UpdateSharedPrecisionPreserved class represents the `UpdateSharedPrecisionPreserved` transformation.

View File

@@ -1,3 +1,3 @@
# ClampTransformation transformation {#openvino_docs_IE_DG_lpt_ClampTransformation}
# ClampTransformation transformation {#openvino_docs_OV_UG_lpt_ClampTransformation}
ngraph::pass::low_precision::ClampTransformation class represents the `Clamp` operation transformation.

View File

@@ -1,3 +1,3 @@
# PReluTransformation transformation {#openvino_docs_IE_DG_lpt_PReluTransformation}
# PReluTransformation transformation {#openvino_docs_OV_UG_lpt_PReluTransformation}
ngraph::pass::low_precision::PReluTransformation class represents the `PRelu` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReluTransformation transformation {#openvino_docs_IE_DG_lpt_ReluTransformation}
# ReluTransformation transformation {#openvino_docs_OV_UG_lpt_ReluTransformation}
ngraph::pass::low_precision::ReluTransformation class represents the `Relu` operation transformation.

View File

@@ -1,4 +1,4 @@
# AddTransformation transformation {#openvino_docs_IE_DG_lpt_AddTransformation}
# AddTransformation transformation {#openvino_docs_OV_UG_lpt_AddTransformation}
ngraph::pass::low_precision::AddTransformation class represents the `Add` operation transformation.

View File

@@ -1,3 +1,3 @@
# MultiplyTransformation transformation {#openvino_docs_IE_DG_lpt_MultiplyTransformation}
# MultiplyTransformation transformation {#openvino_docs_OV_UG_lpt_MultiplyTransformation}
ngraph::pass::low_precision::MultiplyTransformation class represents the `Multiply` operation transformation.

View File

@@ -1,3 +1,3 @@
# SubtractTransformation transformation {#openvino_docs_IE_DG_lpt_SubtractTransformation}
# SubtractTransformation transformation {#openvino_docs_OV_UG_lpt_SubtractTransformation}
ngraph::pass::low_precision::SubtractTransformation class represents the `Subtract` operation transformation.

View File

@@ -1,4 +1,4 @@
# ConvolutionTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionTransformation}
# ConvolutionTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionTransformation}
ngraph::pass::low_precision::ConvolutionTransformation class represents the `Convolution` operation transformation.

View File

@@ -1,3 +1,3 @@
# ConvolutionBackpropDataTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation}
# ConvolutionBackpropDataTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation}
ngraph::pass::low_precision::ConvolutionBackpropDataTransformation class represents the `ConvolutionBackpropData` operation transformation.

View File

@@ -1,3 +1,3 @@
# GroupConvolutionTransformation transformation {#openvino_docs_IE_DG_lpt_GroupConvolutionTransformation}
# GroupConvolutionTransformation transformation {#openvino_docs_OV_UG_lpt_GroupConvolutionTransformation}
ngraph::pass::low_precision::GroupConvolutionTransformation class represents the `GroupConvolution` operation transformation.

View File

@@ -1,3 +1,3 @@
# InterpolateTransformation transformation {#openvino_docs_IE_DG_lpt_InterpolateTransformation}
# InterpolateTransformation transformation {#openvino_docs_OV_UG_lpt_InterpolateTransformation}
ngraph::pass::low_precision::InterpolateTransformation class represents the `Interpolate` operation transformation.

View File

@@ -1,3 +1,3 @@
# MatMulTransformation transformation {#openvino_docs_IE_DG_lpt_MatMulTransformation}
# MatMulTransformation transformation {#openvino_docs_OV_UG_lpt_MatMulTransformation}
ngraph::pass::low_precision::MatMulTransformation class represents the `MatMul` operation transformation.

View File

@@ -1,3 +1,3 @@
# ConcatTransformation transformation {#openvino_docs_IE_DG_lpt_ConcatTransformation}
# ConcatTransformation transformation {#openvino_docs_OV_UG_lpt_ConcatTransformation}
ngraph::pass::low_precision::ConcatTransformation class represents the `Concat` operation transformation.

View File

@@ -1,3 +1,3 @@
# DepthToSpaceTransformation transformation {#openvino_docs_IE_DG_lpt_DepthToSpaceTransformation}
# DepthToSpaceTransformation transformation {#openvino_docs_OV_UG_lpt_DepthToSpaceTransformation}
ngraph::pass::low_precision::DepthToSpaceTransformation class represents the `DepthToSpace` operation transformation.

View File

@@ -1,3 +1,3 @@
# PadTransformation transformation {#openvino_docs_IE_DG_lpt_PadTransformation}
# PadTransformation transformation {#openvino_docs_OV_UG_lpt_PadTransformation}
ngraph::pass::low_precision::PadTransformation class represents the `Pad` operation transformation.

View File

@@ -1,3 +1,3 @@
# ShuffleChannelsTransformation transformation {#openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation}
# ShuffleChannelsTransformation transformation {#openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation}
ngraph::pass::low_precision::ShuffleChannelsTransformation class represents the `ShuffleChannels` operation transformation.

View File

@@ -1,3 +1,3 @@
# SplitTransformation transformation {#openvino_docs_IE_DG_lpt_SplitTransformation}
# SplitTransformation transformation {#openvino_docs_OV_UG_lpt_SplitTransformation}
ngraph::pass::low_precision::SplitTransformation class represents the `Split` operation transformation.

View File

@@ -1,3 +1,3 @@
# StridedSliceTransformation transformation {#openvino_docs_IE_DG_lpt_StridedSliceTransformation}
# StridedSliceTransformation transformation {#openvino_docs_OV_UG_lpt_StridedSliceTransformation}
ngraph::pass::low_precision::StridedSliceTransformation class represents the `StridedSlice` operation transformation.

View File

@@ -1,3 +1,3 @@
# TransposeTransformation transformation {#openvino_docs_IE_DG_lpt_TransposeTransformation}
# TransposeTransformation transformation {#openvino_docs_OV_UG_lpt_TransposeTransformation}
ngraph::pass::low_precision::TransposeTransformation class represents the `Transpose` operation transformation.

View File

@@ -1,3 +1,3 @@
# VariadicSplitTransformation transformation {#openvino_docs_IE_DG_lpt_VariadicSplitTransformation}
# VariadicSplitTransformation transformation {#openvino_docs_OV_UG_lpt_VariadicSplitTransformation}
ngraph::pass::low_precision::VariadicSplitTransformation class represents the `VariadicSplit` operation transformation.

View File

@@ -1,3 +1,3 @@
# MVNTransformation transformation {#openvino_docs_IE_DG_lpt_MVNTransformation}
# MVNTransformation transformation {#openvino_docs_OV_UG_lpt_MVNTransformation}
ngraph::pass::low_precision::MVNTransformation class represents the `MVN` operation transformation.

View File

@@ -1,3 +1,3 @@
# NormalizeL2Transformation transformation {#openvino_docs_IE_DG_lpt_NormalizeL2Transformation}
# NormalizeL2Transformation transformation {#openvino_docs_OV_UG_lpt_NormalizeL2Transformation}
ngraph::pass::low_precision::NormalizeL2Transformation class represents the `NormalizeL2` operation transformation.

View File

@@ -1,3 +1,3 @@
# AvgPoolTransformation transformation {#openvino_docs_IE_DG_lpt_AvgPoolTransformation}
# AvgPoolTransformation transformation {#openvino_docs_OV_UG_lpt_AvgPoolTransformation}
ngraph::pass::low_precision::AvgPoolTransformation class represents the `AvgPool` operation transformation.

View File

@@ -1,3 +1,3 @@
# MaxPoolTransformation transformation {#openvino_docs_IE_DG_lpt_MaxPoolTransformation}
# MaxPoolTransformation transformation {#openvino_docs_OV_UG_lpt_MaxPoolTransformation}
ngraph::pass::low_precision::MaxPoolTransformation class represents the `MaxPool` operation transformation.

View File

@@ -1,3 +1,3 @@
# FakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FakeQuantizeTransformation}
# FakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FakeQuantizeTransformation}
ngraph::pass::low_precision::FakeQuantizeTransformation class represents the `FakeQuantize` operation transformation.

View File

@@ -1,3 +1,3 @@
# FoldFakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation}
# FoldFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation}
ngraph::pass::low_precision::FoldFakeQuantizeTransformation class represents the `FoldFakeQuantize` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReduceMaxTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMaxTransformation}
# ReduceMaxTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMaxTransformation}
ngraph::pass::low_precision::ReduceMaxTransformation class represents the `ReduceMax` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReduceMeanTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMeanTransformation}
# ReduceMeanTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMeanTransformation}
ngraph::pass::low_precision::ReduceMeanTransformation class represents the `ReduceMean` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReduceMinTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMinTransformation}
# ReduceMinTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMinTransformation}
ngraph::pass::low_precision::ReduceMinTransformation class represents the `ReduceMin` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReduceSumTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceSumTransformation}
# ReduceSumTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceSumTransformation}
ngraph::pass::low_precision::ReduceSumTransformation class represents the `ReduceSum` operation transformation.

View File

@@ -1,3 +1,3 @@
# ReshapeTransformation transformation {#openvino_docs_IE_DG_lpt_ReshapeTransformation}
# ReshapeTransformation transformation {#openvino_docs_OV_UG_lpt_ReshapeTransformation}
ngraph::pass::low_precision::ReshapeTransformation class represents the `Reshape` operation transformation.

View File

@@ -1,3 +1,3 @@
# SqueezeTransformation transformation {#openvino_docs_IE_DG_lpt_SqueezeTransformation}
# SqueezeTransformation transformation {#openvino_docs_OV_UG_lpt_SqueezeTransformation}
ngraph::pass::low_precision::SqueezeTransformation class represents the `Squeeze` operation transformation.

View File

@@ -1,3 +1,3 @@
# UnsqueezeTransformation transformation {#openvino_docs_IE_DG_lpt_UnsqueezeTransformation}
# UnsqueezeTransformation transformation {#openvino_docs_OV_UG_lpt_UnsqueezeTransformation}
ngraph::pass::low_precision::UnsqueezeTransformation class represents the `Unsqueeze` operation transformation.

View File

@@ -1,3 +1,3 @@
# FakeQuantizeDecompositionTransformation transformation {#openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation}
# FakeQuantizeDecompositionTransformation transformation {#openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation}
ngraph::pass::low_precision::FakeQuantizeDecompositionTransformation class represents the `FakeQuantizeDecompositionTransformation` transformation.

View File

@@ -1,3 +1,3 @@
# FoldConvertTransformation transformation {#openvino_docs_IE_DG_lpt_FoldConvertTransformation}
# FoldConvertTransformation transformation {#openvino_docs_OV_UG_lpt_FoldConvertTransformation}
ngraph::pass::low_precision::FoldConvertTransformation class represents the `FoldConvertTransformation` transformation.

View File

@@ -1,3 +1,3 @@
# FuseConvertTransformation transformation {#openvino_docs_IE_DG_lpt_FuseConvertTransformation}
# FuseConvertTransformation transformation {#openvino_docs_OV_UG_lpt_FuseConvertTransformation}
ngraph::pass::low_precision::FuseConvertTransformation class represents the `FuseConvertTransformation` transformation.

View File

@@ -1,3 +1,3 @@
# FuseMultiplyToFakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation}
# FuseMultiplyToFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation}
ngraph::pass::low_precision::FuseMultiplyToFakeQuantizeTransformation class represents the `FuseMultiplyToFakeQuantizeTransformation` transformation.

View File

@@ -1,3 +1,3 @@
# FuseSubtractToFakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation}
# FuseSubtractToFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation}
ngraph::pass::low_precision::FuseSubtractToFakeQuantizeTransformation class represents the `FuseSubtractToFakeQuantizeTransformation` transformation.

Some files were not shown because too many files have changed in this diff.