Compare commits

...

76 Commits

Author SHA1 Message Date
Tomasz Dołbniak
39aba80957 OpenCV build switched off by default [2022/1.1] (#12358)
* OCV build off by default

* OCV on in the Linux Azure pipeline

* Copy of OCV in the linux pipeline
2022-08-02 20:01:26 +02:00
Evgenya Stepyreva
c62251c89a Auto Batch: if disabled during cmake (#12382) 2022-08-02 11:26:51 +04:00
Tomasz Dołbniak
e1865fd8e0 Update of zlib to 1.2.12 (#12357) 2022-08-01 10:10:41 +04:00
Ekaterina Aidova
fe4cfc1b43 [OMZ]: include fix for onnx version to release (#12360) 2022-07-30 09:27:52 +00:00
Alina Kladieva
f45fb8f7c8 Update patch version for 22.1.1 (#12347) 2022-07-28 20:21:30 +00:00
Tomasz Dołbniak
7a6df77198 Dependencies update (#12340) 2022-07-28 15:45:00 +02:00
Ilya Churaev
d1b48740cd Tbb fixes (#12321)
* Property to force terminate tbb threads

After inference is done, TBB threads cannot close themselves, which causes memory leaks and lingering threads on unload.
Sometimes the TBB threads need to be terminated to release resources (memory, threads).

This PR contains:
1. Add a new property to control whether to force-terminate TBB threads.
2. The property key is "FORCE_TBB_TERMINATE"; the default value is false.
3. Explicitly terminate the TBB task scheduler while unloading the OpenVINO DLL if this property is set to true,
    e.g.: core.set_property(device, ov::force_tbb_terminate(true));
4. If FORCE_TBB_TERMINATE is not set, no additional TBB operations are performed.

Change-Id: I32dc0ba122bb19a9dbf3ba12fdd596aad9ac54b4

* Fix executorManager test case

Change executorManager from static to dynamic; the test case should reflect this change.

* Fix race condition between executor and executorManager

* Add test case for tbb property

1. Add basic test case for ov::force_tbb_terminate property
2. set ov::force_tbb_terminate to be false

* Avoid terminating TBB when no TBB thread has been created

* change tbb blocking_terminate to terminate

Calling TBB blocking_terminate causes segmentation faults when running some models.
The reason may be that blocking_terminate blocks the current thread to wait for TBB to exit,
but cannot handle some resource dependencies.
After adopting terminate(), the dependencies are resolved and there are no segmentation faults anymore.

Change-Id: I0b920630a25cd3fd2747c57ec71ca749ba35573b

* Disable dynamic lib test case in static library compilation version

As described in CVS-68982, we should disable the test cases that load
dynamic libraries in the OpenVINO static-library build.

* Address reviewer's comments

* Fix coverity issue in executorManager

1. fix coverity issue
2. avoid oneTBB build errors due to API differences between oneTBB and TBB

Change-Id: I0339446e33186e0ce57de07aa8492186f2f6e369

* oneTBB support terminate tbb thread

Change-Id: Iea618b72db193bd48bfbf0dba3586dcdb139c43f

* Add FORCE_TBB_TERMINATE to legacy API

* Put this config into proper place

* fix issue in property test

* Add some description for this config

* Xiaoxia/onetbb old version (#12303)

* support oneTBB old version

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

Co-authored-by: River,Li <river.li@intel.com>
Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-27 14:17:36 +00:00
Ilya Churaev
9287ae5d93 Port cc fix (#12296)
* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue so that it works

* Fix matcher name missing issue

* Fixed build

Co-authored-by: River,Li <river.li@intel.com>
2022-07-27 07:22:17 +00:00
Ilya Lavrenov
b2200941ba Ported multiple fixes for 2022.1.1 release (#12249)
* Update for get started samples (#10975) (#11020)

* Update for get started samples

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* formatting

* rewording

* fix links

* fix formatting

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* replace squeezenet1.1 with googlenet-v1

* GoogleNet v1 Caffe* model

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
(cherry picked from commit 412f2190d1)

* [DOCS] update HETERO execution (#11003)

the PR has been reviewed and accepted for master already, now updating 22.1

* Incremental improvement of MO user guide. (#11010) (#11028)

* Incremental improvement of MO user guide.

* Apply feedback

* POT documentation updates (#10578) (#11024)

* POT changes

* change install

* change img size

* remove cli option

* Documentation fixes (#11044)

* Benchmark app usage

* Fixed link to the devices

* More fixes

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed several hardcoded links

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed obsolete code snippets (#11061)

* Removed obsolete code snippets

* NCC style

* Fixed NCC for BA

* fix a reference link (#11048)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* AUTO and MULTI Doc update for release 2022.1 (#11066)

* Update Auto plugin docs (#10623)

* Update Auto plugin docs

Revise auto plugin and auto plugin debugging articles. Include necessary image files.

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update AutoPlugin_Debugging.md

* include review corrections

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* [AUTOPLUGIN] update multi plugin document for ov2.0 (#10688)

* update multi document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update snippets ov::enableProfile

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use Anymap in snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix format and set property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix test document issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* removed NEW IE-CENTRIC API and updated set_property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update ov::optimal_number_of_infer_requests

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* Updated multi code snippets (#11037)

* [Auto PLUGIN] update Auto docs (#10889)

* update Auto docs

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove vpu, fix a mistake in python code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update MYRIAD device full name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update API name

The old API uses the name Inference Engine API;
the new API uses the name OpenVINO Runtime API 2.0.

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update tab name, and code format

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix AUTO4 format issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update set_property code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* mv code into .cpp and .py

modify the devicelist part according to the review

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove priority list in code and document

modify the beginning of the document
remove performance data
remove old API
use compile_model instead of set_property
add an image about CPU acceleration

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix misprint and code that does not match the document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix doc build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update sh scripts with ```sh```

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

* [CPU] CPU plugin docs refactoring backport to the release branch (#11039)

* CPU device documentation refresh

* Bfloat16 inference page aligned with the new API

* Bfloat16 inference section moved to CPU main

* First review comments applied

* Second review step comments applied

* OneDNN reference changed to the GitHub page

* AvgPool added to the oneDNN ops list

* Updated note about latency, added note about mem usage with dynamic shapes

* DOCS: API Reference (#11063)

* Renamed API reference

* Try to fix API reference for new API

* Fixes after self-review

* Reworked OpenVINO Plugin dev guide structure

* Properties

* Try to fix links

* Mark properties for MYRIAD & HDDL

* Extensibility guide with FE extensions and remove OV_FRAMEWORK_MAP from docs

* Rework of Extensibility Intro, adopted examples to missing OPENVINO_FRAMEWORK_MAP

* Removed OPENVINO_FRAMEWORK_MAP reference

* Frontend extension detailed documentation

* Fixed distributed snippets

* Fixed snippet inclusion in FE extension document and chapter headers

* Fixed wrong name in a snippet reference

* Fixed test for template extension due to changed number of loaded extensions

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Minor fixes in extension snippets

* Small grammar fix

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Update Benchmark guides (#11076)

* - Update Benchmark Tool usage message

- Remove not existed paths
- Fix examples

* remove reference on FPGA

* Added groups for core headers (#11068)

* DOCS: transition banner (#10973)

* transition banner

* minor fix

* update transition banner

* updates

* update custom.js

* updates

* updates

* Add a troubleshooting issue for PRC installation (#11074)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

* add a troubleshooting issue

* update

* update

* fix CVS-71846

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* DOC Removed indentation before snippets (#11111)

* Removed indentation

* Fixed code style

* Added more information about tensor names (#11070)

* Added more information about tensor names

* Fixed comment and added documentation for extensions

* Fixed code style

* Fixed typo

* Added group for transformation passes (#11101)

* Added group for transformation passes

* Try to fix CI

* Docs: update AC info in API 2.0 migration guide (#11106)

* Docs: update AC info in API 2.0 migration guide

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update headings and some wordings for Transition Guide (#11065)

* updates

* update

* merge from releases/22/1

* update heading

* update headings and some wordings

* Feature/azaytsev/cherry pick pr11110 (#11115)

* Minor fixes

* Feature/azaytsev/img updates (#11110)

* Updated images

* Updated images

* DOCS: doxy sphinxtabs (#11027)

* initial implementation of doxy sphinxtabs

* fixes

* fixes

* fixes

* fixes

* fixes

* Reshape documentation (#10901) (#11108)

* Reshape documentation

* Converting Model: reshape mentioned; Supported Devices: no shape inference mentioned

* demos removed

* Added deployment guide (#11060)

* Added deployment guide

* Added local distribution

* Updates

* Fixed more indentations

* update edit on github branches (#11129)

* DOCS: fixed hardcoded links  (#11100)

* Fixes

* Use links

* Updated documentation for compile_tool (#11049)

* Benchmarks 2022 1 (#11130)

* Minor fixes

* Updates for 2022.1

* Edits according to the review

* Edits according to review comments

* Edits according to review comments

* Edits according to review comments

* Fixed table

* Edits according to review comments

* Removed config for Intel® Core™ i7-11850HE

* Removed forward-tacotron-duration-prediction-241 graph

* Added resnet-18-pytorch

* [80085] New images for docs (#11114)

* change doc structure

* fix manager tools

* fix manager tools 3 step

* fix manager tools 3 step

* new img

* new img for OV Runtime

* fix steps

* steps

* fix intendents

* change list

* fix space

* fix space

* code snippets fix

* change display

* fix screenshot (#11140)

* applying reviewers comments to the Opt Guide (#11093)

* applying reviewers' comments

* fixed refs, more structuring (bold, bullets, etc)

* refactoring tput/latency sections

* next iteration (mostly latency), also brushed the auto-batching and other sections

* updates sync/async images

* common opts brushed

* WIP tput redesigned

* minor brushing of common and auto-batching

* Tput fully refactored

* fixed doc name in the link

* moved int8 perf counters to the right section

* fixed links

* fixed broken quotes

* fixed more links

* add ref to the internals to the TOC

* Added a note on the batch size

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Add info about Docker images in Deployment guide (#11136)

* [DOCS]transition_guide_intro_language (#11134) (#11142)

a few language suggestions and grammar issues
# Conflicts:
#	docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* [DOCS]autodevice_table_fix (#11141)

* Update release version in readme (#11146)

* [AUTO] Fix mess table in doc (#11149)

* update AUTO Debug doc with snippets (#11153)

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* Update ShapeInference.md (#11168)

* Benchmarks 2022 1 updates (#11180)

* Updated graphs

* Quick fix for TODO in Dynamic Shapes article

* Anchor link fixes

* [Docs][IE Samples] fix hard links (#11144) (#11186)

* fix hard links

* change encoding

* fix TM

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

* More conservative recommendations on dynamic shapes usage in docs (#11161)

* More conservative recommendations about using dynamic shapes

* Duplicated statement from C++ part to Python part of reshape doc (no semantical changes)

* Added software tab for Linux installer (#11159)

* Added software tab for Linux installer

* Added information for apt and yum

* Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* [docs] python snippets for devices (#11174)

* Update CPU docs

* update GPU docs

* update with sphinxtab

* Fix docs

* Add preprocessig snippet

* Fix path

* Fixed DM config (#11199)

* Renamed user guides (#11137)

* [Python API] Fix documentation for Core API -- release (#11200)

* [Python API] Fix documentation for Core API

* fix style

* [OMZ]: port bugfix to 2022/1 branch (#11204)

* a bunch of doc fixes (#11230)

* Missing backslashes right after mo (#11252)

* Revert vpu custom kernel (#11226)

* Added original VPU custom kernel doc

* Moved to new API

* Added links from introduction

* Fixed intro

* DOCS-InstallGuide_review (#11217)

language adjustment

* Docs labels adjustment (#11227)

* Adjusted documentation labels

* Renamed images

* fix doc tests

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>

* cvs-80083 (#11280)

* fix wildcard sphinxdirective (#11263)

* [docs] python snippets for migration pages (#11224)

* save work

* Add common snipp

* update ie pipeline with python snippets

* ov_common_snippet

* Python snippets for graph construction

* Fix docs

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>

* [Python API][Docs] Fix references for several classes (#11260)

* next iteration after discussion with Yuri (#11197)

* next iteration after discussion with Yuri

* WIP tput

* Basic/Advanced Flow

* brushing/links

* wording, testing the failing link

* refactored levels, added hash

* added advanced tput to the TOC (required by sphinx)

* changed wording of the title to be more pro-active

* minor misprint, etc

* emphasized the flow names

* Update two paragraphs in performance hints docs

(cherry picked from commit 61415fd91f417b70eae595cc15976dec7af0865b)

* minor brushing

* e2e flow in the app design

* no separate hints doc

* minor brushing

* final, neat-picking brushing

Co-authored-by: Helena <helena.kloosterman@intel.com>

* [docs] add missed old python api snippets (#11233)

* Add missed old api snippets

* Fix names

* Fix markers

* Fix methods call

* Model optimizataion documentation update (#11072)

* Fixed Model Optimization Guide and NNCF docs

* Fixed the link to Optimum

* Updated installation guide

* Changed API description

* Changes quantization documents

* Fixed links in the relevant components

* Fixed API description

* Revised CLI document

* Fixed formatting bugs in the main document

* Fixed formatting bugs in the main document

* Changed the structure. Added Default quantization usage via API

* Fixed E2E CLI example

* Added AccuracyAware usage description

* Revised structure and examples

* Fixed a link to POT intro

* Changed the structure for algorithms

* Fixed links

* Additional fixed of the links

* Revised Ranger documentation

* Some fixes

* Revised Best Practices

* Fixed descriptions

* Fixed section names

* Changed the workflow one more time

* Additional fixes to the model structure

* Fixed AA usage

* Added DefaultQuantization flow image

* Fixed many issues

* Fixed many issues

* Applied many comments

* Additional fixes

* Fixed examples and provided links to them

* Changed DataLoader Example. Fixed FAQ

* Changed the main README for GitHub

* Fixed E2E CLI example

* Fixed links and code of DataLoader

* Fixed build issues

* Fixed more links

* Fixed one more documentation build issue

* Fixed more links

* Fixed code example

* Add multiple data loaders

* Add audio example

* Minor fixes in the code of sample loaders

* Add descriptions of dataloaders. Changed the behaviour of text loader

* Fixed typos

* Added a new item into the FAQ

* Apply wording corrections

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Fixed comments

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* [DOCS]continue_language_review-transitionguide (#11148)

* [DOCS]-continue_language_review-transitionguide

the overview has been merged, the remaining articles are reviewed here

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/graph_construction.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/configure_devices.md

* Configurable OpenCL usage in BA (#11344) (#11363)

* Feature/azaytsev/doc fixes 2022 1 1 (#11388)

* Removed a redundant image

* Fixed ops specifications and other issues

* converted html links to anchor links

* converted html links to anchor links

* Fixed a link

* Fixed a link

* Changed anchor links according to dev review

* [DOCS] polish autodevice article (#11171)

the article has been changed much and its language has been impacted in the process. Here are some corrections.

* sphinx google search (#11439)

* sphinx google search

* fixes

* fixes

* fix version tabs

* Fixed operation names (#11447)

* DOCS-transitionguide_name_correction (#11449)

OpenVINO™ 2.0 => OpenVINO™ API 2.0

* Azure CI: Update branch for contrib and testdata repos (#11473)

* review GPU language changes (#11343)

As per ticket #CVS-80053
* int8 link removed

* DOCS-benchmarktool_python_correction (#11479)

add info on tool installation

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* DOCS-cpu_language_review (#11526)

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_TensorFlow.md (#11425)

* [OMZ]: update submodule (#11286)

* Support config option for time_tests suite (#11628)

* Add links to MO installation and ONNX examples (#11617)

These edits help make it easier for a new user to find more information on how to convert ONNX models.

* Docs: Add links to specific examples (#11618)

* Update docs/OV_Runtime_UG/integrate_with_your_application.md
* Add links to specific examples

This edit adds links to more example applications, making it easier for users to discover how to build an OpenVINO application around their specific model.

* Fix failure of pytest in timetest (#11647)

* Update installing-openvino-windows-header.md (#11221) (#11592)

* Update installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-windows-header.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update Yocto documentation for 2022.1 (#11655)

* installing-openvino-yocto.md: fix install instructions (#10785)

Change _ to : as per the new override syntax.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
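
The "_ to :" change above refers to the BitBake override syntax introduced with Yocto Honister; a sketch of what such an edit typically looks like (the variable and recipe name are illustrative, not taken from the actual patch):

```
# old override syntax:
#   IMAGE_INSTALL_append = " openvino-inference-engine"
# new override syntax, with ':' as the override separator:
#   IMAGE_INSTALL:append = " openvino-inference-engine"
```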

* installing-openvino-yocto: update for 2022.1

Update the branch to be used for 2022.1 and remove reference to
-staticdev package which isn't generated anymore.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

* DOCS-hetero_alignment_changes (#11643)

Align the HETERO article with the AUTO and MULTI template

* Fix CI on Windows (#11659)

- fix pip requirements in OMZ
- fix cpuFuncTests on AlderLake

* Docs multiplugin page-wide tabs merge (#11461)

* Update multi_device.md

* second round

* third round

11

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/multi_device.md

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

* correct post review

* align the property table

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Fix compilation error in docs snippets (#11675)

* plugin api separate config (#11109)

* Revert "plugin api separate config (#11109)" (#11705)

This reverts commit 3249e61bfb.

* Fix a heading in Auto (#11743)

* fix the heading

* fix headings

* Docs: Add source code links to OpenVINO Samples (#11803)

* Docs: Add links to Samples source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Update docs/OV_Runtime_UG/Samples_Overview.md

* Update samples/c/hello_classification/README.md

* Update samples/c/hello_nv12_input_classification/README.md

* Update samples/cpp/classification_sample_async/README.md

* Update samples/cpp/hello_classification/README.md

* Update samples/cpp/hello_nv12_input_classification/README.md

* Update samples/python/classification_sample_async/README.md

* Update samples/python/hello_classification/README.md

* Update samples/python/hello_query_device/README.md

* Update samples/python/hello_reshape_ssd/README.md

* Update samples/python/speech_sample/README.md

* Update samples/cpp/hello_query_device/README.md

* Update samples/cpp/speech_sample/README.md

* Update samples/cpp/hello_reshape_ssd/README.md

* Update samples/cpp/model_creation_sample/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add links to specific object detection examples (#11820)

* Docs: Add links to object detection examples

* Docs: Add links to specific examples

* Docs: Add links to specific examples

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add that ONNX models are compatible with OpenVINO (#11821)

* Docs: Add that ONNX models are compatible with OpenVINO

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Docs: Add links to info on benchmark application (#11822)

* Docs: Add link to benchmark_app

* Docs: Add link to benchmark_app

* Docs: Add link to benchmark_app

* DOCS-add supported PdPd models_port (#11804) (#11827)

* fix formatting (#11904)

* DOCS-nncf_rephrasing-port #11997 (#12007)

* Puts page switch parameters in alphabetic order to support S3 (#11960) (#11966)

* Puts page switch parameters in alphabetic order to support S3 (#11960)

Signed-off-by: intelkevinputnam <intelkevinputnam@github.com>

Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>

* DOCS-restore_gsearch_comma (#11980)

Co-authored-by: Kevin Putnam <kevin.putnam@intel.com>
Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>

* Install only proper GNA library files (#11243)

* If CMAKE_BUILD_TYPE is not set - set it to 'Release' by default (#11026)

This behavior is already the default in practice because ONNX is enabled by default and thirdparty/onnx/onnx/CMakeLists.txt forces CMAKE_BUILD_TYPE to Release if it is not set.

It fixes the following issues:
- When the ONNX frontend is disabled, the source is built for Debug, which is very unexpected compared to Release with the ONNX frontend enabled
- When the ONNX frontend is disabled, even libopenvino.so could not be built due to some generated-makefile issues

It is set to 'Release' (not to 'Debug') to comply with the default behavior when ONNX is enabled (the default option, which works for most users)
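
The fallback described above is commonly written as the following CMake idiom (a sketch of the likely shape, not the exact patch):

```cmake
# If the user did not pass -DCMAKE_BUILD_TYPE=..., default to Release so a
# plain `cmake ..` build behaves the same with or without the ONNX frontend.
if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type" FORCE)
endif()
```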

* Build with system TBB (#11244)

* Build with system TBB

* Fixes

* Check whether system TBB is available

* Try to fix ONNX Runtime build with system TBB

* Test

* Fixed compilation of threading.cpp

* Fixed unset of cache dirs

* Limit search paths of TBB

* Try to enable pip packages with custom TBB

* Fix for TBB 2021.2

* Install only needed TBB libraries

* Install TBB from system to pip package

* Reverted usage of TBBROOT

* Fixed oneTBB case

* Try to fix Android

* Escape some paths

* Added samples path

* Fixed TBBBind usage for case of system TBB

* Disabled TBBBind usage for oneTBB (#11386)

* Tbb 2018 and older usage (#11411)

* fixed TBB

* Fixed compilation with old TBBs

* Fixed installation for custom provided TBB

* Fixed detection of sample type c / cpp (#11444)

* Tbb: download only if system libraries are not found (#11415)

* Download custom TBB on demand

* Download TBBBind on demand

* Fixed install steps

* Fixes

* Don't use system TBB

* Fixed Windows backslash paths

* Revert "Install only proper GNA library files (#11243)"

This reverts commit 8a1a6e8b1a.

* Limit ONNX version (#11949)

OV does not currently support opset 17, introduced in the onnx 1.12 release.
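
In packaging terms, a cap like the one described above is usually expressed as a pip version specifier (illustrative sketch; the actual requirements file is not shown here):

```
onnx<1.12
```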

* setupvars.sh: Removing extra semicolon, which breaks glibc build (#11849)

This extra semicolon produces output like the example below. The extra
'::' is equivalent to adding '.' to LD_LIBRARY_PATH. This breaks the
glibc build, and often causes odd issues when launching commands from
different paths.

...inference_engine/external/tbb/lib::/opt/intel/openvino_2021/...

We also noticed that :${parameter:+:$parameter} is widely used in
this file. Please review the code and fix as needed.
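The pattern mentioned above relies on POSIX ':+' parameter expansion, which only emits the separator when the variable is non-empty. A minimal sketch (paths and variable names are illustrative, not the actual setupvars.sh contents):

```shell
tbb_lib="/opt/intel/openvino/runtime/3rdparty/tbb/lib"  # assumed path
extra=""  # an existing path list that may be empty

# Buggy form: the unconditional ':' leaves a trailing ':' (or '::') when
# $extra is empty; the dynamic loader treats the empty entry as '.'.
buggy="$tbb_lib:$extra"

# Safe form: ':'"$extra" is appended only when $extra is non-empty.
safe="$tbb_lib${extra:+:$extra}"

echo "buggy=$buggy"
echo "safe=$safe"
```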

* Updated setupvars scripts

* Install external / user provided TBB as well

* Remove protobuf requirements in python bindings (#11886)

* Fixes for cases when TBB_DIR env var is set

* Disable loading of v7 reader for new IR versions (#12252)

* Disable loading of v7 reader for new IR versions

* Try to fix CI

* Fixed PDPD frontend

* Fixed error message creation

* Fixed new API for the case when the core was removed (#12207)

* Fixed new API for the case when the core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

* Updated build_samples.sh not to call make command

* Fixes

* Don't use make in build_samples.sh script

* Limit protobuf version

* Fix for Include dirs

* [PyOV] Fix bugbear's B023 (#12040)

* Sync .github/workflows/py_checks.yml with master

* Revert "Sync .github/workflows/py_checks.yml with master"

This reverts commit 9ae2dd9f46.

* Change quotes

* Revert "Sync .github/workflows/py_checks.yml with master"

This reverts commit 9ae2dd9f46.

* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- do not use static FEM in IE network reader
- add main for gtest which can use a manifest file to filter tests

* Move library pointers map to manager impl
- add a method to manager impl to make a frontend from a loaded plugin

* Add shutdown function to ov namespace;
it cleans up the static resources

* Revert changes related to linking main for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

* Added C bindings

* Remove redundant files

* Fixed code style

* Cpp fix of python segfault, reverted pybind workaround (#10749)

* test fix of segfault

* styles applied

* added keep_alive to pybind

* remove redundant code

* fix json tests

* review remarks

* introduced correct path to dlls in CI

* removing passing path via env variable

* introduced cpp solution

* remove keep alive

* review remarks

* remove explicit removing model

* removed shared_objects from ir frontend

* core test updated

* unified approach to handle extensions by frontends

* added nullptr check

* Revert "added nullptr check"

This reverts commit 666f5e4489.

* Revert "unified approach to handle extensions by frontends"

This reverts commit bf85ac24a6.

* m_extensions declaration in Frontend

* added assert

* Revert "Disable loading of v7 reader for new IR versions (#12252)"

This reverts commit 60ee201d93.

* Removed old headers from OV 2.0 API

* Fixed clang

* [OMZ]: update submodule

* Fixed samples build

* Fixed tests build

* Fixed docs compilation

* Disable ARM plugin build

* Disable MO

* Revert "Fixed clang"

This reverts commit 8ebc86935c.

* Revert "Removed old headers from OV 2.0 API"

This reverts commit 4e64eb22a1.

* Revert "Disable ARM plugin build"

This reverts commit 54f805c28b.

* Removed lib_close tests

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Yuan Hu <yuan2.hu@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Ilya Naumov <ilya.naumov@intel.com>
Co-authored-by: Alexey Suhov <alexey.suhov@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
Co-authored-by: Helena <helena.kloosterman@intel.com>
Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: FanJiangIntel <fan.jiang@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Anuj Mittal <anuj.mittal@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Kevin Putnam <kevin.putnam@intel.com>
Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>
Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: stephenli2000 <stephen@aotu.ai>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: p-wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Raasz, Pawel <pawel.raasz@intel.com>
Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
2022-07-26 22:12:33 +00:00
Alexander Zhogov
9996a58fc6 Azure CI: Update branch for contrib and testdata repos (#11474) 2022-04-05 22:23:36 +03:00
Ilya Churaev
4192d8879d Port visibility hidden for get_type_info methods (#11320)
Co-authored-by: vurusovs <vitaliy.urusovskij@intel.com>
2022-03-31 09:47:29 +03:00
Nikolay Tyukaev
cdb9bec721 DOCS: Increase content width (#10995)
* fixes

* fix
2022-03-17 16:38:08 +03:00
Liubov Talamanova
baf4b23d9a Add configs to pypi pkg (#11008) 2022-03-17 16:02:21 +03:00
Yuan Xu
43fa3183dc Fix issues and integrate comments (#10980)
* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
2022-03-17 15:55:37 +03:00
Artyom Anokhov
63ca94179e Fix Deployment Manager configs for MacOS and Win-HDDL target (#10998)
* DM configs: Updated path for MacOS. Removed MovidiusDriver for HDDL target for Windows

* DM config MacOS: Updated name for libov_runtime
2022-03-17 12:44:52 +03:00
Mikhail Nosov
8723d1cc7e Fix coverity warnings in caching snippets (#11006) 2022-03-17 12:43:29 +03:00
Maxim Shevtsov
cbfb8a1678 Perf Hints docs and General Opt Guide refactoring (#10815)
* Brushed the general optimization page

* Opt GUIDE, WIP

* perf hints doc placeholder

* WIP

* WIP2

* WIP 3

* added streams and few other details

* fixed titles, misprints etc

* Perf hints

* movin the runtime optimizations intro

* fixed link

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* some details on the FIL and other means when pure inference time is not the only factor

* shuffled according to general->use-case->device-specifics flow, minor brushing

* next iter

* section on optimizing for tput and latency

* couple of links to the features support matrix

* Links, brushing, dedicated subsections for Latency/FIL/Tput

* had to make the link less specific (otherwise docs compilations fails)

* removing the Temp/Should be moved to the Opt Guide

* shuffled the tput/latency/etc info into separate documents; also moved the following docs from the temp section into the specific feature, general product description, or corresponding plugin pages

-   openvino_docs_IE_DG_Model_caching_overview
-   openvino_docs_IE_DG_Int8Inference
-   openvino_docs_IE_DG_Bfloat16Inference
-   openvino_docs_OV_UG_NoDynamicShapes

* fixed toc for ov_dynamic_shapes.md

* referring the openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors

* fixed main product TOC, removed ref from the second-level items

* reviewers remarks

* reverted the openvino_docs_OV_UG_NoDynamicShapes

* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference

* "No dynamic shapes" to the "Dynamic shapes" as TOC

* removed duplication

* minor brushing

* Caching to the next level in TOC

* brushing

* more on the perf counters ( for latency and dynamic cases)

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2022-03-17 11:09:13 +03:00
Yegor Kruglov
1ed828982e [Release] Cascade RCNN res101 document model support (#10904)
* cascade rcnn model support

* fix typo

* specify model directory

* comments resolving
2022-03-16 18:04:46 +03:00
Alexander Zhogov
c670e4cc2b Azure CI: Enable IB again 2022-03-16 15:01:20 +03:00
Nikolay Tyukaev
e124d4f5df add ote repo (#10979) 2022-03-16 14:53:51 +03:00
Mikhail Nosov
09462af266 Docs: model caching page update according to OpenVINO API 2.0 (#10977)
* Docs: model caching page update according to OpenVINO API 2.0

* Fix assert
2022-03-16 12:35:01 +03:00
Mikhail Nosov
0b08b9a14c Docs. Fix link in layout overview (#10967) 2022-03-16 11:09:36 +03:00
Nadezhda Ageeva
a98059daea [GNA] small docs fixes (#10959)
* [GNA] small docs fixes

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
2022-03-16 10:28:23 +03:00
Alexander Zhogov
27b5722944 Azure CI: Disable IB 2022-03-16 08:51:20 +03:00
Nikolay Tyukaev
c1fc602c7c fix broken anchors api reference (#10976) 2022-03-16 01:00:04 +03:00
Andrey Zaytsev
e65fc4c849 Changes to the OpenVINO 2.0 Transition Guide (#10936)
* Minor fixes

* Grammar fixes
2022-03-15 21:43:45 +03:00
Ilya Lavrenov
994b06b744 Getting started improvements (#10948) 2022-03-15 18:05:54 +03:00
Aleksandr Voron
6cf81ad6a3 [DOCS] ARM CPU plugin docs (#10885)
* initial commit

ARM_CPU.md added
ARM CPU is added to the list of supported devices

* Update the list of supported properties

* Update Device_Plugins.md

* Update CODEOWNERS

* Removed quotes in limitations section

* NVIDIA and Android are added to the list of supported devices

* Added See Also section and reg sign to arm

* Added Preprocessing acceleration section

* Update the list of supported layers

* updated list of supported layers

* fix typos

* Added support disclaimer

* update trade and reg symbols

* fixed typos

* fix typos

* reg fix

* add reg symbol back

Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
2022-03-15 17:10:14 +03:00
Victoria Yashina
a7f1710edf Onnx updates (#10962)
* onnx changes

* onnx updates

* onnx updates
2022-03-15 15:16:10 +03:00
Jan Iwaszkiewicz
e20e828a1f [DOCS] Python Exclusives overview (#10951)
* Add python docs

* Small fix

* Apply comments

* Fix style
2022-03-15 14:26:18 +03:00
Sergey Lyubimtsev
5835cac31c Add description for zsh: no matches found : openvino-dev[...] issue. (#10957) 2022-03-15 13:38:20 +03:00
Vladimir Zinoviev
b4b5f3333e [LPT] Turn back checks in reshape transformation when subtract is absent (#10940) 2022-03-15 11:34:05 +03:00
Yuan Xu
a423a2b802 add python version (#10874) 2022-03-15 10:28:15 +03:00
Bartek Szmelczynski
8890e2906a [DOCS] add python snippets for automatic batching (#10918)
* add python snippets for automatic batching

* add missing bracket
2022-03-14 21:53:09 +03:00
Bartek Szmelczynski
e4fcfa74c2 add python snippets for device query page (#10920) 2022-03-14 20:44:20 +03:00
Nadezhda Ageeva
6474d2c94e [GNA] Update documentation (release) (#10873)
* [GNA] Update documentation (release)

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Apply comments

Move snippets to separate file

Add notes about POT and 2d convolutions

* Add links to GNA setup

* cleanup after rebase
2022-03-14 20:38:50 +03:00
Maxim Vafin
bf11b965e6 Update Model Optimizer User Guide (#10759) (#10934)
* Remove install prerequisites steps, order FWs, and move pre-processing details

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Introduction: examples of MO CLIs, references to parameters description pages

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Setting Input Shape section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Optimizing Preprocessing Computation page

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Revert location of Additional_Optimizations.md

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Describe layout and FP16 support in MO

* Fix docs issue

* Apply feedback

* Apply review feedback

* Clean-up Resources

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Mention FP16 compression in MO Introduction

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the first portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the second portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply review feedback

* Apply review feedback

* Apply the third portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback for FP16 compression documentation

* Apply review for FP16 page

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Address feedback about tutorials, input_shape option

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Rework Setting Input Shapes section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update "See also" list

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Correct conversion documents for each FW

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Refactor TensorFlow converting document and expand Embedding Preprocessing document

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix a link to POT

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2022-03-14 19:12:34 +03:00
Maxim Vafin
af5b31c413 Update Convert_YOLACT.md (#10943) 2022-03-14 15:39:47 +00:00
Yuan Xu
1d3fab80a8 Update Install&Deployment for migration guide to 22/1 (#10933)
* updates

* update
2022-03-14 15:39:55 +03:00
Mikhail Nosov
5891a79249 Squashed commit of the following: (#10921)
commit d37c9613e0
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Fri Mar 11 20:13:53 2022 +0300

    Fix review comments

commit b5646fa707
Merge: bc9c68d431 6fdd983750
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Fri Mar 11 19:29:06 2022 +0300

    Merge remote-tracking branch 'upstream/master' into preprocessing_docs2

commit 6fdd983750
Author: Andrey Noskov <andrey.noskov@intel.com>
Date:   Fri Mar 11 15:05:14 2022 +0300

    [GNA] Added multi crop test (#10459)

commit caaacb2db4
Author: Andrey Noskov <andrey.noskov@intel.com>
Date:   Fri Mar 11 15:03:16 2022 +0300

    [GNA] Moved single Lstm-cell test from deprecated tests  (#10472)

    * [GNA] Single lstm-cell test added

    * Added additional config for test

    * one more input and hidden shape

    * Added cell with ReLU
    Deleted deprecated test

    * test added as lstm_cell_basic

    * Enabled gna_compact_mode

    Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>

    * enabled compact_mode in all tests

    Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>

commit d93ce1e246
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Fri Mar 11 14:27:11 2022 +0300

    Added intro to transformation guide (#10894)

commit f48b233629
Author: Vladimir Dudnik <vladimir.dudnik@intel.com>
Date:   Fri Mar 11 12:34:55 2022 +0300

    update omz intel models, fix docs (#10843)

commit 9d74f5cd76
Author: Vladislav Volkov <vladislav.volkov@intel.com>
Date:   Fri Mar 11 11:10:56 2022 +0300

    Export/import fixed for param->result and const->result models (#10838)

    Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

commit 2940db0fb1
Author: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Date:   Fri Mar 11 11:10:11 2022 +0300

    benchmark legal, snippet margin bottom (#10886)

commit dd076264eb
Author: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Date:   Fri Mar 11 11:09:17 2022 +0300

    add pre-release description for wheels packages (2) (#10813)

    * add pre-release description for wheels packages

    * refactoring

    * lines

    * Revert "lines"

    This reverts commit 01a74dc168.

    * linters

    * linters

    * nightly revision of docs URL

commit 0dc2ab182b
Author: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Date:   Fri Mar 11 10:45:31 2022 +0300

    Update APT instructions according to repository configuration (#10869)

commit 97efdb5020
Author: Alexey Lebedev <alexey.lebedev@intel.com>
Date:   Fri Mar 11 08:42:33 2022 +0300

    [docs] python snippet for dynamic shapes (#10762)

    * Create snipp

    * link python snipp with doc

    * fix docs

    * Apply suggestions from code review

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

    * Fix cpp comments

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

commit 4e0a740eb3
Author: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Date:   Thu Mar 10 15:16:17 2022 +0300

    [GNA] Support of overload correction for MatMul with 2 non-constant layers (#10447)

commit 09246e2db8
Author: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Date:   Thu Mar 10 15:01:52 2022 +0300

    [GPU] GPU plugin docs (#10734)

commit a8a2640fb7
Author: Anton Pankratov <anton.pankratov@intel.com>
Date:   Thu Mar 10 14:00:42 2022 +0300

    Added callback and wait migration guide (#10775)

    * Added callback and wait migration guide

    * Added start async

    * Simplified wait

    * Added selector for sync async

    * fixed doc

    * fixed build

    * fixed doc

    * fixed doc

commit 5566b67238
Author: Irina Efode <irina.efode@intel.com>
Date:   Thu Mar 10 13:34:47 2022 +0300

    Frontend support in Subgraph dumper (#10765)

    * Init

    * Enable frontends

    * Update read_ir_compare_with_refs.cpp

    * Remove extra line

    * Update CMakeLists.txt

commit 4746d0881b
Author: Nikita Malinin <nikita.malinin@intel.com>
Date:   Thu Mar 10 10:28:47 2022 +0300

    [POT] Update BC with the Parameter nodes connection (#10848)

    * Update BC with the Parameter nodes connection

    * Update test_sanity with octave

commit d7372d678c
Author: Tatiana Savina <tatiana.savina@intel.com>
Date:   Thu Mar 10 09:10:54 2022 +0300

    [DOCS] fixes for nightly (#10842)

    * fixes for nightly

    * modify xfile

    * change launcher ref

commit 531fa9018d
Author: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Date:   Wed Mar 9 17:34:42 2022 +0100

    [DOCS] Python snippets for Hetero execution page (#10769)

    * Update docs ov hetero snippets

    * Add missing space

    * Update precision hint

    * Update hetero docs snippets with GPU profiling

commit 44ec4661a4
Author: Karol Blaszczak <karol.blaszczak@intel.com>
Date:   Wed Mar 9 16:09:37 2022 +0100

    Update Auto plugin docs (#10623)

    * Update Auto plugin docs

    Revise auto plugin and auto plugin debugging articles. Include necessary image files.

    * Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/auto_device_selection.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/auto_device_selection.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update AutoPlugin_Debugging.md

    * include review corrections

    * Update auto_device_selection.md

    * Update auto_device_selection.md

    * Update auto_device_selection.md

    * Update auto_device_selection.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

commit 948347f3dd
Author: Serhii Pavlovskyi <82883030+serhii-pavlovskyi-altran@users.noreply.github.com>
Date:   Wed Mar 9 12:42:06 2022 +0200

    ncc build fixes (#10367)

    * fix .ncc_style target names

    it was breaking configure on systems with libclang-12-dev, clang-12,
    ninja and cmake 3.17+ (ninja complains about a duplicate target).
    With a lower cmake version configure succeeds, but the build exits
    immediately with an error. Replacing ninja with make turns the error
    into a warning (still significant: make just skips duplicate rules,
    i.e. doesn't style-check some source files; the rule duplication is a
    genuine bug). Without libclang-12-dev and clang-12, ENABLE_NCC_STYLE
    is OFF and the bug is not triggered

    * silence uninitialized warning in core_integration

    probably it was always initialized before use, but compiler wasn't made
    aware of it

    * fix function spelling to unbreak code style checks in benchmark_app

    * include <thread> for std::this_thread

    existing code was relying on namespace pollution by old libstdc++

    * replace is_pod with is_standard_layout && is_trivial

    is_pod is deprecated, it breaks build on current gcc

    Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
    Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

commit d9976332b0
Author: Vladimir Dudnik <vladimir.dudnik@intel.com>
Date:   Wed Mar 9 11:48:47 2022 +0300

    upd open-model-zoo, upd docs, upd ac cfgs (#10676)

commit 702f8cf223
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Wed Mar 9 11:06:12 2022 +0300

    Fixed duplicated words (#10827)

commit 3e7e0d5651
Author: Taylor Yeonbok Lee <taylor.lee@intel.com>
Date:   Mon Mar 7 13:37:21 2022 +0900

    [DRYRUN] Fix dryrun in partial build (#10761)

    When a partial build is called for dryrun, do constant propagation too.
    In the normal case, a partial build does not do constant propagation, to save build time of the internal program.
    However, if a partial build is called with dryrun, it fails at transfer_constants due to generic nodes that have no impl.

commit de47a3b4a4
Author: Tatiana Savina <tatiana.savina@intel.com>
Date:   Sun Mar 6 09:14:39 2022 +0300

    POT documentation updates (#10578)

    * POT changes

    * change install

    * change img size

    * remove cli option

commit 41818a377f
Author: Nikita Malinin <nikita.malinin@intel.com>
Date:   Sat Mar 5 15:49:21 2022 +0300

    [POT] Update IEEngine with the Dynamic model support (#10717)

    * Update IEEngine with the Dynamic models support

    * Update with the batch

    * Method naming fix

    * Update image_loader & tests with dynamic models

    * Update test_sanity.py

    * Replace custom_mo_config from the model

commit 3b8e960b10
Author: Egor Duplensky <egor.duplenskii@intel.com>
Date:   Sat Mar 5 14:37:50 2022 +0300

    [CPU] Avoid using cache for constant inplace or multi-child edges (#10573)

commit 3b8ca9f0af
Author: Tatiana Savina <tatiana.savina@intel.com>
Date:   Sat Mar 5 13:03:46 2022 +0300

    [DOCS] Fixes for nightly (#10806)

    * add img

    * wb img for input

    * dataset added

    * add img

    * wb img for input

    * dataset added

    * ov_fix

    * more imgs

    * new img

    * new img

    * nlp

    * new img

    * delete img

commit e87ea5d611
Author: Maksim Kutakov <maksim.kutakov@intel.com>
Date:   Sat Mar 5 12:32:11 2022 +0300

    [CPU] Use raw pointer to share peer data for constants (#10744)

commit 0f8c599ce7
Author: Andrey Zaytsev <andrey.zaytsev@intel.com>
Date:   Sat Mar 5 12:31:15 2022 +0300

    Re-structure Model Optimizer User Guide and Clean-up (#10801)

    * Modified the workflow diagram

    * Moved supported topology lists to separate topics

    * Additional changes

    * Removed Supported Topologies list and Deprecated pages

    * Created the Model Conversion Tutorials section for instructions for specific models

    * Topic names alignment, removed Default_Model_Optimizer_Optimizations.md

    * Additional structural changes

    * Fixed links

    * heading fixes

commit 0c20e7a3ca
Author: Roman Kazantsev <roman.kazantsev@intel.com>
Date:   Fri Mar 4 20:50:02 2022 +0300

    [MO] Remove IR frontend from available frontend list in MO (#10798)

    * [MO] Remove IR frontend from available frontend list in MO

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Fix issue - forget to pass FEM

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Fix issue for TF with new FE and default legacy

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

commit 3b24ed032a
Author: Yuan Xu <yuan1.xu@intel.com>
Date:   Sat Mar 5 00:32:10 2022 +0800

    Yuan install guide 22/1 (#10786)

    * Add Overview page

    * Revert "Add Overview page"

    * fix errors & formatting

    * fix article usage according to the styles

    * fix errors

    * update according to PXT comments

commit cb9049076b
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Fri Mar 4 18:40:18 2022 +0300

    Enabled clang-format for cc and itt libs (#10793)

commit c28cebb2a6
Author: Dmitry Pigasin <dmitry.pigasin@intel.com>
Date:   Fri Mar 4 15:41:47 2022 +0300

    [CPP Speech Sample] Fix result saving when batch size is not 1 (#10714)

    * Fix result saving when batch size is not 1

    * Remove useless if statement

    * improved processing scores for models with more than one output

    * added checking on count of model outputs

    * improve if statements

    * split the fix for models with several outputs into a separate PR

    Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>

commit 7e8bbf4968
Author: Anuj Mittal <anuj.mittal@intel.com>
Date:   Fri Mar 4 20:41:37 2022 +0800

    installing-openvino-yocto.md: fix install instructions (#10785)

    Change _ to : as per the new override syntax.

    Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
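The override-syntax change looks like this in a conf/recipe file (variable and package names are illustrative; the ':' operator is the new BitBake override syntax required since Yocto 3.4 "honister"):

```
# Old (pre-honister) override syntax:
CORE_IMAGE_EXTRA_INSTALL_append = " openvino-inference-engine"

# New override syntax:
CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine"
```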

commit 69ad9e80e1
Author: Nikita Malinin <nikita.malinin@intel.com>
Date:   Fri Mar 4 14:50:44 2022 +0300

    [POT] Update OverflowCorrection algo for nodes without bias (#10687)

    * Update OverflowCorrection algo for nodes without bias

    * Pylint line fix

    * Update OC with the last add name

    * Pylint fix

commit 32edd596e3
Author: Irina Efode <irina.efode@intel.com>
Date:   Fri Mar 4 14:42:16 2022 +0300

    [IE TESTS] Functional test review: Part 4 (#10772)

    * [IE TESTS] Move specific import_export_tests to gna and myriad

    * add

commit ed702910bd
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Fri Mar 4 13:38:42 2022 +0300

    Enable clang for transformations (#10778)

    * Enable clang for transformations

    * Fixed code style

    * Fixed build

    * Fixed macOS

commit 082ebbcbf8
Author: Irina Efode <irina.efode@intel.com>
Date:   Fri Mar 4 12:52:58 2022 +0300

    [IE TESTS] Remove NgraphConversionTests (#10770)

commit 043a773f61
Author: Fedor Zharinov <fedor.zharinov@intel.com>
Date:   Fri Mar 4 09:49:03 2022 +0300

    [Benchmark_app]Check all I/O names (#10745)

    * Check all I/O names

    * stylefix

commit 5cee51e9c4
Author: hyunback kim <hyunback.kim@intel.com>
Date:   Fri Mar 4 14:30:07 2022 +0900

    [GPU] update to check quantize fusing condition in oneDNN (#10680)

    * [GPU] update the condition for minimize_local_reorders

    * Update to check needs reorder condition in quantize.

    Signed-off-by: hyunback <hyunback.kim@intel.com>

commit 8a2252b774
Author: yanlan song <bell.song@intel.com>
Date:   Fri Mar 4 13:13:12 2022 +0800

    fix multi infer result corrupt issue (#10704)

    * do not share blob

    Signed-off-by: fishbell <bell.song@intel.com>

    * build error

    Signed-off-by: fishbell <bell.song@intel.com>

    * remove comment codes

    Signed-off-by: fishbell <bell.song@intel.com>

commit fd18632d89
Author: Mateusz Bencer <mateusz.bencer@intel.com>
Date:   Fri Mar 4 05:24:52 2022 +0100

    Update --extensions MO doc (#10763)

commit 78c9f5b0a2
Author: Wang, Yang <yang4.wang@intel.com>
Date:   Fri Mar 4 10:04:48 2022 +0800

    Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0. (#10505)

    * Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0.

    Signed-off-by: Wang, Yang <yang4.wang@intel.com>

    * Add common test case for config check.

    Signed-off-by: Wang, Yang <yang4.wang@intel.com>

    * Update.

    Signed-off-by: Wang, Yang <yang4.wang@intel.com>

    * Update.

    Signed-off-by: Wang, Yang <yang4.wang@intel.com>

    * Use the implemented property test case.

    Signed-off-by: Wang, Yang <yang4.wang@intel.com>

commit 1bbd92a8f8
Author: Alexander Kozlov <alexander.kozlov@intel.com>
Date:   Thu Mar 3 18:58:58 2022 +0300

    Revised Tuning For Performance and Model optimization docs (#10276)

    * Revised Tuning for performance and Model optimization docs

    * Fixed links

    * Fixed link

    * Applied comments

    * Fixed one more comment

commit 554b50eb85
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Thu Mar 3 18:01:59 2022 +0300

    Remove redundant calls from set_argument (#10701)

    * Remove redundant calls from set_argument

    * Fixed tests

commit f8ce57319b
Author: Vladimir Gavrilov <vladimir.gavrilov@intel.com>
Date:   Thu Mar 3 16:47:23 2022 +0300

    Specifications of operations RDFT and IRDFT (#10242)

    * Written the draft of the specification of the operation RFFT.

    * Started to write the specification of the operation IRFFT.

    * Small fix.

    * Renamed RFFT operation as RDFT.

    * Fix in Operations_specifications.md.

    * Written the specification of the operation IRDFT.

    * Fixes in examples.

    * Fixes in opset9.md and Operations_specifications.md.

    * Small fix.

    * Replaced opset8 by opset9 in opset9.md.

    * Deleted redundant sentences.

    * Small fix.

    * Replaced input_shape by data_shape.

    * Fixed mistypes.

    * Fixes of mistypes.

    * Fixed typo.

    * Fixed RDFT specification, in order to perform signal_size input as in TF and PyTorch.

    * Fixes in examples for RDFT.

    * Fixes in the output shape calculation of IRDFT. Now this calculation is as in TF and PyTorch.

commit f81f819ecd
Author: Maxim Gordeev <maxim.gordeev@intel.com>
Date:   Thu Mar 3 16:35:41 2022 +0300

    [IE Samples] Improved processing outputs for model with more than one output (#10737)

    * Improved processing outputs for model with more than one output

    * fixed condition

    * added checking count of output/reference files

commit 28889c4833
Author: Irina Efode <irina.efode@intel.com>
Date:   Thu Mar 3 14:10:07 2022 +0300

    [IE TESTS][CONFORMANCE] Fix Crashes in ReadIRTest::SetUp() (#10736)

    * [IE TESTS][CONFORMANCE] Fix Crashes in ReadIRTest::SetUp()

    * remove extra lines

    * Update read_ir.cpp

commit fdf12c9537
Author: Irina Efode <irina.efode@intel.com>
Date:   Thu Mar 3 14:09:55 2022 +0300

    Update main.cpp (#10740)

commit 8121de731c
Author: Steve Yoo <steve.yoo@intel.com>
Date:   Thu Mar 3 19:59:16 2022 +0900

    Add tests to OpImplCheckTest (#10413)

    * Add tests to OpImplCheckTest

    * Fix Gelu, Interpolate, LRN and related codes

commit bc9c68d431
Merge: 149954b4af 1fec99afa3
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Thu Mar 3 13:28:37 2022 +0300

    Merge remote-tracking branch 'upstream/master' into preprocessing_docs2

commit d1630c9ac1
Author: Mateusz Bencer <mateusz.bencer@intel.com>
Date:   Thu Mar 3 11:22:42 2022 +0100

    Fix problem with segfault during using extension feature via Python (#10650)

commit 75f7bced65
Author: Dmitry Pigasin <dmitry.pigasin@intel.com>
Date:   Thu Mar 3 12:12:22 2022 +0300

    Fix `-layout` option (#10648)

commit 59cfdce73b
Author: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Date:   Thu Mar 3 11:25:54 2022 +0300

    ignore doc python errors sphinx (#10756)

    * fixes

    * fixes

    * Update workbench.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

commit 1fec99afa3
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Thu Mar 3 09:50:54 2022 +0300

    Removed duplicated words (#10754)

commit 974ae136a6
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Thu Mar 3 09:36:26 2022 +0300

    Enabled old BA only under ENABLE_SAMPLES (#10746)

commit 1c5e76c4db
Author: Sergey Lyalin <sergey.lyalin@intel.com>
Date:   Thu Mar 3 09:00:28 2022 +0300

    Dynamic Shapes Documentation (#10656)

    * Added draft of Dynamic Shapes Doc

    * Better wording

    Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

    * Apply suggestions from code review

    Better wording, grammar, technical fixes. No significant content rework.

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
    Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

    * Removed indentation in dynamic shapes snippets

    * Split dynamic shapes doc to two separate files, added more examples, fixed code review comments, connected to TOC

    * Fix links

    * Added aux doc to toc to avoid crash in docs build in CI

    * Added dynamicbatching in temp section

    * Apply suggestions from code review

    * Removed old DynamicBatching document

    * Applied @myshevts changes

    * Update docs/OV_Runtime_UG/ov_without_dynamic_shapes.md

    * Update ov_dynamic_shapes.md

    * Fix links to dynamic shapes doc

    Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
    Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

commit 7ba71f9c20
Author: FanJiangIntel <fan.jiang@intel.com>
Date:   Thu Mar 3 12:39:52 2022 +0800

    Enable apivalidator check when BUILD_SHARED_LIBS=OFF (#10461)

    * enable apivalidator for static build

    * add target _ie_plugins_hpp as dependency of inference_engine_obj

commit 3318dd6c68
Author: Nico Galoppo <nico.galoppo@intel.com>
Date:   Wed Mar 2 13:36:02 2022 -0800

    Fix MacOS DYLD_LIBRARY_PATH export (#10750)

commit 4f6ca1b85f
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Wed Mar 2 21:30:44 2022 +0300

    Docs: update some rendering stuff (#10742)

    * Fixed small rendering issues

    * Updated picture

    * Give better name for stateful models

    * Removed the document

commit d670e77d97
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Wed Mar 2 20:07:52 2022 +0300

    Docs: Changed OpenVINO Runtime User Guide integration (#10187)

    * Changed C++ OpenVINO Runtime User Guide integration

    * Remove IE from C++ guide

    * Fixed comments

    * Additional fix

    * Fixed some comments

    * Some new documents

    * Fixed some comments

    * Added Python snippets

    * Added sphinx tabs

    * Removed tabs

    * Removed group-tab

    * Added additional lines

    * Fixed typo

    * Fixed comments and build

    * Try to fix complex tabs

    * Fixed some typos

    * Added python code for model representation

    * Added more python code

    * Added serialize/visualize python examples

    * Simplify integration pipeline

    * Fixed typo

    * Try to fix tabs

    * Extend CompiledModel guide

    * Resolve merge conflict

    * Added separate infer request guide

    * Fixed build

    * Added cancel infer request method

    * Update docs/snippets/ov_model_snippets.py

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

    * Fixed comments

    * Fixed typo

    * Extend visualize pass

    * Fixed comments

    * Fixed build

    * Fixed typo

    * Update docs/snippets/ov_infer_request.py

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

    * Update docs/snippets/ov_infer_request.py

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/integrate_with_your_application.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/model_representation.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update docs/OV_Runtime_UG/model_representation.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Fixed comments

    * Fixed doc

    * Fixed merge

    Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

commit 21185189d8
Author: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Date:   Wed Mar 2 19:45:42 2022 +0300

    adding 2.0 config param for auto_batch_timeout and the tests (#10719)

commit 24a5aab501
Author: Taylor Yeonbok Lee <taylor.lee@intel.com>
Date:   Thu Mar 3 01:27:32 2022 +0900

    Fixed bug: When external id of a loop is fused, the i/o map of a loop should be updated (#10726)

commit 4b55ef9911
Author: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Date:   Wed Mar 2 19:16:34 2022 +0300

    Static Shape constraints removed from Interpolate 1->4 transformation (#10732)

    * Static Shape constraints removed from Interpolate 1->4 transformation

    * Dynamic tests added

commit bea352f272
Author: Nesterov Alexander <alexander.nesterov@intel.com>
Date:   Wed Mar 2 18:00:32 2022 +0300

    Update Linux Azure CI (#10739)

commit 180f15e84c
Author: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Date:   Wed Mar 2 17:48:01 2022 +0300

    auto-batching- bare min of the info (#10190)

    * auto-batching- bare min of the info

    * renaming BATCH.MD to the automatic_batching.md, also aligned the link to the new naming convention

    * more info and brushed

    * added openvino_docs_OV_UG_Automatic_Batching to the main TOC

    * Apply suggestions from code review

    Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

    * close on the comments, added the code examples

    * Apply suggestions from code review

    Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

    * Update example

    * Update format

    * Update docs format

    * added couple of more perf considerations

    * more code examples

    * Apply suggestions from code review

    * Apply the rest from code review

    * Update header

    Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

commit 42d3893833
Author: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Date:   Wed Mar 2 17:46:49 2022 +0300

    doc fixes (#10738)

commit 7cd3c8e86e
Author: csy0225 <78470701+csy0225@users.noreply.github.com>
Date:   Wed Mar 2 21:31:37 2022 +0800

    Fix compile problem when the -Wnon-virtual-dtor compile flag is enabled (#10705)

    * Fix compile problem when the -Wnon-virtual-dtor compile flag is enabled

    * update code style

    * fix the code style

commit d3ded2fc36
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Wed Mar 2 16:01:21 2022 +0300

    Fixed declaration of 'xxx' hides global declaration (#10733)

commit 40fc5334d8
Author: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
Date:   Wed Mar 2 15:44:34 2022 +0300

    [CPU] Fixed number of streams initialization for hint = throughput (#10728)

commit cd52cc6767
Author: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Date:   Wed Mar 2 15:36:31 2022 +0300

    [Python API][Docs] Remove excess info (#10672)

    * [Python API][Docs] Remove excess info

    * autodoc: add skip methods (#68)

    * remove utils from docs

    * undo changes

    Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>

commit c54926ecb8
Author: Victor Kuznetsov <victor.kuznetsov@intel.com>
Date:   Wed Mar 2 13:03:28 2022 +0300

    Update nightly memcheck models scope (#10709)

commit 969060c8db
Author: Wilson Seok <wilson.seok@intel.com>
Date:   Wed Mar 2 01:50:31 2022 -0800

    Add op impl check tests (#10339)

    * Remove fp16 of Convert layer test from skip_tests.config.cpp as it works now

    * update repo

    * add initial op impl check tests

    * add op imple check tests

    * add op impl check tests

    * add rnn cell based ops

    * modify lstmsequence

    * update rnn cell base op test

    * add priorbox, priorboxclustered, proposal

    * add ROIAlign to ReverseSequence

    * add Roll to ScatterElementsUpdate

    * add select to swish tests

    * add tensoriterator to variadicsplit test

    * temporary block of LSTMCell v1 due to crash in mkldnn

    * use ov namespace instead of ngraph as possible

    * update indexing of vector array

    * update multiple parameter vector

    * add loop test

    * fix cpplint errors

    * fix build error

commit 86b175534a
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Wed Mar 2 12:16:58 2022 +0300

    Docs: complete migration guide (#10652)

    * Updated glossary

    * Removed references to OpenVX

    * Moved migration_ov_2_0 to OpenVINO User guide

    * Replaced IE with OV runtime

    * Complete migration guide

    * Migration 2.0

    * Self-review

    * Added property migration guide

    * Fixed table

    * Added preprocessing migration

    * Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md

    Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

    * Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md

    Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

    * Update docs/snippets/ov_preprocessing_migration.cpp

    Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

    * review fixes

    * Preprocessing intro updated

    * Updated config migration guide

    * Updates

    * Fixes

    Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

commit d1bcb6d0fc
Author: Yuan Xu <yuan1.xu@intel.com>
Date:   Wed Mar 2 16:10:58 2022 +0800

    CVS-80445 (#10723)

    * Add Overview page

    * Revert "Add Overview page"

    * fix format

    * test formatting

    * test formatting

    * update

    * test formatting

    * minor changes

commit 9cd3bff7df
Author: Pavel Zamelin <pavel.zamelin@intel.com>
Date:   Wed Mar 2 03:39:30 2022 +0300

    Fix install failures for static libs with `EXCLUDE_FROM_ALL` (#10706)

    * Remove EXCLUDE_FROM_ALL for some static targets

    * Add install check for static libs

commit e75ee60bec
Author: Vladislav Golubev <vladislav.golubev@intel.com>
Date:   Tue Mar 1 22:33:42 2022 +0300

    [CPU] Disabled sequences decomposition for dynamic case (#10710)

commit 81cd9d86d1
Author: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Date:   Tue Mar 1 22:11:37 2022 +0300

    sphinxdirective: allow commented blocks (#10720)

    * sphinxdirective: allow commented blocks

    * minor correction

commit 5e023ebdd9
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Tue Mar 1 17:32:36 2022 +0300

    Fix issue with default arguments in preprocessing python bindings (#10702)

    * Fix in Preprocessing python bindings - add correct default arguments for:
        - PreProcessSteps::convert_element_type
        - PostProcessSteps::convert_element_type
        - InputTensorInfo::set_color_format

    Otherwise, python users must always specify optional params

    E.g. instead of writing `tensor().set_color_format(ColorFormat.RGB)` python users will have to write `tensor().set_color_format(ColorFormat.RGB, [])`

    * Corrected 'help' output

    * Exposing 'openvino.runtime.Type.undefined' and use it in 'convert_element_type' documentation

commit 6b067bc0ed
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Tue Mar 1 16:56:15 2022 +0300

    Fixed install on Apple  (#8302)

    * Fixed Apple install

    * Update path to libs in setupvars.sh

    * Fix IE_CPACK_RUNTIME_PATH for Apple

    * Fix wheels packaging

    Co-authored-by: Alexey Suhov <alexey.suhov@intel.com>

commit 18035209a0
Author: David Nam <david.nam@intel.com>
Date:   Tue Mar 1 22:27:11 2022 +0900

    Add op impl check tests (#10414)

    * Add op impl check tests

    * Add op impl check tests

    * Add op impl check tests

    * Add op impl check test

    * Add op impl check tests

    * Add op impl check tests

    * Fix usage of makeConstant()

    * Fix build error in ubuntu18_i386

    * Fix error in linux-macos

    Co-authored-by: PVA-CI <pva-ci@intel.com>

commit 0f409ccea9
Author: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Date:   Tue Mar 1 16:11:57 2022 +0300

    [Python API] Fix typo in method name (#10707)

commit 3f941e3c5f
Author: Anastasia Popova <anastasia.popova@intel.com>
Date:   Tue Mar 1 16:03:09 2022 +0300

    Corrected layout parsing error message. (#10651)

    * Corrected error message.

    * Corrected message.

    * Small correction

    * Corrected error message for source and target layout.

commit 9eca8515b8
Author: Irina Efode <irina.efode@intel.com>
Date:   Tue Mar 1 16:01:30 2022 +0300

    [IE TESTS] Extend EvaluatorMaps by Greater, If, Equal (#10026)

    * [IE TESTS] Extend EvaluatesMap

    * fix code style

commit 6c6aa8fa95
Author: Sergey Shlyapnikov <sergey.shlyapnikov@intel.com>
Date:   Tue Mar 1 15:15:04 2022 +0300

    [GPU] Fix RemoteBlob lock() and unlock() behaviour in case of multiple threads (#10685)

    * [GPU] Fix RemoteBlob lock() and unlock() behaviour in case of multiple threads and add tests

commit 1d469a2b87
Author: Karol Blaszczak <karol.blaszczak@intel.com>
Date:   Tue Mar 1 13:00:38 2022 +0100

    [DOCS] hddl update (#10616)

    * [DOCS] hddl update

    include info on hddl and myriad working at the same time

    * Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

    * Update HDDL.md

    * Update MYRIAD.md

    Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
    Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

commit 8e0978818c
Author: Maxim Andronov <maxim.andronov@intel.com>
Date:   Tue Mar 1 14:31:21 2022 +0300

    [CPU] Prevent internalBlobs cleanup for dynamic deconv node (#10697)

commit 149954b4af
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Tue Mar 1 13:47:31 2022 +0300

    Enable Model Caching to 'application code' section

commit f98c728591
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Tue Mar 1 01:05:46 2022 +0300

    Docs: added preprocessing use case with saving resulting model to IR

commit 64fca57af4
Author: Nikita Semaev <nikita.semaev@intel.com>
Date:   Tue Mar 1 12:14:45 2022 +0300

    Fix NMS Conformance tests for Template plugin (#9273)

    * Added inputs argument to all compare() function overloads

    * Rewritten compare() function for NMS

    * Implemented sorting by name of expected outputs

    * Implemented sorting by name of actual outputs

    * Added accounting for simultaneous dynamism and the need to convert outputs in Template plugin

    * Added a separate case to the GetBlob function for correct dimensions

    * Rewritten Expected outputs sorting to work correctly on cpuFuncTests

    * Fixing code style problems

    * Implemented sorting by name of actual outputs for functional tests

    * Debug prints removed

    * Replacing a raw pointer with a vector

    * Fixing code style problems

    * Shifting the sorting place Expected outputs

    * Added sorting of Expected outputs in one more place

    * Quality transition to SLT2.0

    * Removing unnecessary code after SLT2.0

    * Fix soft_nms_sigma argument

    * Removing unnecessary parts after SLT2.0

    * Remove unnecessary outputs sorting

    * Removing parts from the code for debugging

    * Fix for NMS

    * Trying to make CI green

    * Checking test passage without adding convert precision

    * Checking CI

    * There is an algorithm that adds Convert only if there is f16, fp16 in inputs

    * Add Convert Op in cases where inputs are not already f32

    * Check that the CI will go away if you put everything back

    * Revert changes, validate f32 change on ci

    * Adding Convert f16-f32 only if there is a function parameter of type f16

    * The presence of f16/bf16 as a parameter type is now mandatory to add Convert

    * Added prints for params, inputs, outputs

    * Logic checking the absence of Convert

    * Cosmetic fixes

    * Setting the correct value for selected_scores_type NMS-5

    * Fix bf

    * Increased readability

    * Missing parts added

    * Removed the static for the vector

commit 5f40ba9a23
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Tue Mar 1 11:12:12 2022 +0300

    Fixed onecoreuap.toolchain.cmake (#10646)

    * Fixed onecoreuap.toolchain.cmake

    * Updated mt.runtime.win32.toolchain.cmake

commit 6c78715749
Author: Roman Kazantsev <roman.kazantsev@intel.com>
Date:   Tue Mar 1 10:57:24 2022 +0300

    [MO] Clean up Model Optimizer options, help, and documentation (#10653)

    * [MO] Clean-up MO cmd-line options

    Remove the following deprecated Model Optimizer options that have not been used for several releases: disable_fusing, disable_gfusing, generate_deprecated_IR_V7, legacy_ir_generation, keep_shape_ops, move_to_preprocess.
    Deprecate via the CLI the following options whose functionality is triggered from POT or automatically: disable_weights_compression, disable_nhwc_to_nchw, disable_resnet_optimization, finegrain_fusing.
    Correct and extend the description of each MO option printed during model conversion.

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Correct documentation about input shapes

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Perform final corrections in documentation

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Remove legacy_ir_generation overall

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Clean-up tests from deprecated options

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Recover disable_fusing option as deprecated

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Fix keys for static_shape and extensions

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Remove extension key that does not work

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Apply feedback: remove disable_gfusing, correct docs

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Recover disable_fusing option for unit-tests

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Apply feedback for documentation

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Apply feedback about parameters use_legacy_frontend and use_new_frontend

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Do minor fixes for indentation of MO logs

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Revert log.error for fallback message

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Revert disable_weights_compression parameter for tests

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

commit 9da124544a
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Tue Mar 1 09:03:59 2022 +0300

    Transformation guide (#10628)

    * Fixed some comments about transformations

    * Changed transformation guide

    * Fixed typo

    * Moved transformation doc to extensibility

    * Moved images to Extensibility_UG

    * Added separate document for each pass

    * Added see also section

    * Fixed comments

commit 4b29eed013
Author: Andrei Kochin <andrei.kochin@intel.com>
Date:   Mon Feb 28 18:55:44 2022 +0300

    Update MO requirements to allow TF1.15 if already installed (#10673)

    * Update MO requirements to allow TF1.15 if already installed

    * Removing python version check as redundant

    * Updating requirements.txt as well

commit 173f328c53
Author: Mikhail Nosov <mikhail.nosov@intel.com>
Date:   Mon Feb 28 17:04:59 2022 +0300

    Checking compatibility between 'pyopenvino' and 'libopenvino' (#10668)

    * Checking compatibility between 'pyopenvino' and 'libopenvino' on 'import phase'

    This fix is to prevent undefined behavior when the user loads OpenVINO from Python but pyopenvino loads a different version of 'libopenvino'.
    This may happen if the user has several releases installed and has played around with the PATH/PYTHONPATH environment variables.

    In such a case the behavior is undefined - the application may crash in the middle of usage or use an incorrect release.

    The fix checks build versions for pyopenvino and ov::get_openvino_version. If a mismatch occurs, an exception is thrown.

    This logic is disabled if the user has built OpenVINO locally: experienced developers probably know what they're doing, so if the version has a 'custom_' prefix, the check is skipped.

    * Removed custom logic for CI_BUILD_NUMBER, it is reused from already included version.cmake

    * Use addVersionDefines macro

commit b319acc672
Author: Maxim Andronov <maxim.andronov@intel.com>
Date:   Mon Feb 28 17:01:18 2022 +0300

    [CPU] Prohibit to load model with dynamic output shapes (#10643)

commit 4a8b142fef
Author: Mateusz Tabaka <mateusz.tabaka@intel.com>
Date:   Mon Feb 28 15:00:51 2022 +0100

    [PYTHON] fix importing lstm_sequence for opsets >= 5 (#10637)

    * [PYTHON] fix importing lstm_sequence for opsets >= 5

    * update compat opsets

commit 33ad1b96d4
Author: Nikita Malinin <nikita.malinin@intel.com>
Date:   Mon Feb 28 16:26:07 2022 +0300

    [POT] Update samples and samplers with the new DataLoader format (#10595)

    * Update samples and samplers with the new DataLoader format

    * Update with utils

    * Pylint updates

    * Update metric with the exception

    * Pylint

    * Update with the exception

    * Pylint

    * Revert index sampler changes

    * Update ImageLoader & SimplifiedEngine

    * Update with the different solution

    * Remove utils

    * Pylint

    * Remove list wrapping

    * Remove list from meta_data

commit 7d0d950b9a
Author: Maxim Vafin <maxim.vafin@intel.com>
Date:   Mon Feb 28 15:30:33 2022 +0300

    Add pytorch Resnext101 from fb into documentation (#10665)

commit f6fbef1f66
Author: Irina Efode <irina.efode@intel.com>
Date:   Mon Feb 28 15:06:03 2022 +0300

    Allow to specify conformance by shape_type (#10667)

    * Init

    * the solution

    * Remove extra

    * Update CMakeLists.txt

    * Readme

    * fix build

    * dd

commit bed0adf5ef
Author: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Date:   Mon Feb 28 15:04:03 2022 +0300

    creating remote ocl buffer/tensor per request, to avoid simultaneous locking of the same ocl buffer when auto-batching is used (#10607)

commit 1ceb9729e9
Author: Vladislav Golubev <vladislav.golubev@intel.com>
Date:   Mon Feb 28 14:06:17 2022 +0300

    [CPU] friendly name duplication fixed for the TypeRelaxed case (#10486)

commit b9ef57112e
Author: Maxim Gordeev <maxim.gordeev@intel.com>
Date:   Mon Feb 28 12:31:01 2022 +0300

    [IE Samples] Fixed memory allocation problem for speech sample (#10671)

commit d4f77f1d3e
Author: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Date:   Mon Feb 28 12:30:21 2022 +0300

    Mute 'maybe-uninitialized' error for RELWITHDEBINFO in intel_gpu (#10682)

commit f55e69d656
Author: Fedor Zharinov <fedor.zharinov@intel.com>
Date:   Mon Feb 28 12:26:41 2022 +0300

    Legacy benchmark_app is added (#10239)

    * Legacy benchmark_app is added

    * apply fix for supporting multiple -i arguments

    * new CMakeLists.txt with OpenCV auto detection

    * fixes

    * docs

    * docs2

    * Docs changes

    * docs

    * CMakeLists.txt modification

    * Update tools/legacy/benchmark_app/README.md

    Co-authored-by: ivikhrev <ivan.vikhrev@intel.com>
    Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

commit 5724c5ac44
Author: Andrey Zaytsev <andrey.zaytsev@intel.com>
Date:   Fri Feb 25 23:42:00 2022 +0300

    Image added (#10674)

commit 52b450a5fb
Author: Denis Orlov <denis.orlov@intel.com>
Date:   Fri Feb 25 18:55:15 2022 +0300

    [GNA] Update documentation (#10570)

commit 7b58f931b5
Author: Tatiana Savina <tatiana.savina@intel.com>
Date:   Fri Feb 25 18:22:13 2022 +0300

    [DOCS] Add wb images for nightly docs fix (#10663)

    * add img

    * wb img for input

    * dataset added

    * ov_fix

commit 18ff8afe63
Author: Egor Duplensky <egor.duplenskii@intel.com>
Date:   Fri Feb 25 16:11:16 2022 +0300

    [IE TESTS] Avoid extra checks for test skipping (#10609)

    Avoid double iteration over skip patterns
    Skip test after first pattern match

commit 94cbbe063b
Author: Ilya Znamenskiy <ilya.znamenskiy@intel.com>
Date:   Fri Feb 25 15:48:17 2022 +0300

    [GPU] Cum sum int32/64 support (#10629)

commit e9e59cb954
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Fri Feb 25 15:47:21 2022 +0300

    Moved ngraphConfig.cmake to root (#10618)

commit 54f39294de
Author: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Date:   Fri Feb 25 11:02:04 2022 +0100

    [PYTHON] Fix style in python doc strings (#10606)

    * Fix style in python doc strings

    * New line quotes

commit 14d11a8998
Author: Yury Gaydaychuk <yury.gaydaychuk@intel.com>
Date:   Fri Feb 25 12:57:03 2022 +0300

    [CPU] Fix of invalid read in DefConv (#10481)

commit bdee939fe0
Author: Anuj Mittal <anuj.mittal@intel.com>
Date:   Fri Feb 25 17:31:32 2022 +0800

    installing-openvino-yocto: fix documentation links (#10546)

    * installing-openvino-yocto: fix documentation links

    Point to the new Yocto docs website.

    Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

    * Update installing-openvino-yocto.md

    Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

commit 38d87dd9de
Author: Anton Pankratov <anton.pankratov@intel.com>
Date:   Fri Feb 25 11:57:23 2022 +0300

    Removed stream enum (#10645)

    * Removed stream enum

    * Fixed build

    * fixed build

    * Fixed test

commit a32ed5a07a
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Fri Feb 25 11:41:23 2022 +0300

    Fixed build for CI (#10659)

commit bacf597516
Author: Dmitry Pigasin <dmitry.pigasin@intel.com>
Date:   Fri Feb 25 11:25:35 2022 +0300

    [CPP Speech Sample] Improve `-o` and `-oname` flags (#10321)

    * Improve `-o` and `-oname` flags

    * Apply clang-format tool

    * fix saving output files

    * Apply clang-format

    * Fix error when `-oname` not specified

    * apply clang format

    * Fix error `-oname`

    * Use output name with port to find model output

    * fix comment line breaking

    * fix comparison with reference for multiple outputs

    * Fix output name printing  error

    * try to fix clang format

    * fix problem with bs > 1

    * minimal change to rerun test pipeline

    * clang format

    * Revert "Fix error `-oname`"

    This reverts commit c33d5f16e8.

commit 9e3610c028
Author: Maksim Kutakov <maksim.kutakov@intel.com>
Date:   Fri Feb 25 10:55:59 2022 +0300

    [CPU] Fix for subnormal numbers nullifying routine (#10622)

commit 6062e3d4b7
Author: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Date:   Fri Feb 25 10:34:11 2022 +0300

    DOCS: benchmarks OpenVINO vs TF (#10654)

    * benchmarks-ovino-vs-tf

    * minor fixes

commit 53d3ef8eab
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Fri Feb 25 07:02:09 2022 +0300

    Removed ngraph mentions (#10647)

commit ffd63f9758
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Fri Feb 25 00:44:48 2022 +0300

    Replaced IE with OV runtime: docs (#10642)

    * Updated glossary

    * Removed references to OpenVX

    * Moved migration_ov_2_0 to OpenVINO User guide

    * Replaced IE with OV runtime

commit 806ce96899
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Thu Feb 24 19:41:47 2022 +0300

    Remove onnx_custom_op doc (#10638)

    * Remove onnx_custom_op doc

    * Remove test

    * Fixed tests

commit f2bbd5bbb8
Author: Anastasia Kazantaeva <anastasia.kazantaeva@intel.com>
Date:   Thu Feb 24 19:13:21 2022 +0300

    Add original contribution guide to root (#10644)

commit e906b3581f
Author: Sergey Shlyapnikov <sergey.shlyapnikov@intel.com>
Date:   Thu Feb 24 16:41:43 2022 +0300

    [GPU] Replace handle_permute optimization pass with proper Reorder adding instead of Permute primitive (#10569)

commit 163a79b232
Author: Paul Youngsoo Ahn <paul.y.ahn@intel.com>
Date:   Thu Feb 24 22:07:33 2022 +0900

    [GPU] Fix activation fusing issue(#10636) (#10636)

commit 1c18733ade
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Thu Feb 24 15:50:31 2022 +0300

    Changed location of extensibility guide (#10433)

    * Changed location of extensibility guide

    * Removed hardware kernels legacy documentation

    * Changed all extension guild to new API

    * Removed Custom_Layers_Guide

    * Fixed build

    * Fixed some moments

    * Update docs/Extensibility_UG/Intro.md

    * Fixed build

    * Added more examples

    * Fixed typo

    * Fixed comments

    * Extend library topic

    * Fixed typo

commit a2f9963045
Author: Maksim Derbasov <maksim.derbasov@intel.com>
Date:   Thu Feb 24 15:33:30 2022 +0300

    Fix warnings from builders.hpp (#10568)

commit 85707198b3
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Thu Feb 24 15:22:08 2022 +0300

    Revert "Disable reshape for new API (#10064)" (#10634)

    This reverts commit 3f4e384d5d.

commit 3de428c713
Author: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Date:   Thu Feb 24 14:37:03 2022 +0300

    Auto-batch ConvertLike enabled (#10631)

commit 4c01d6c50c
Author: Alina Kladieva <alina.kladieva@intel.com>
Date:   Thu Feb 24 12:03:36 2022 +0300

    Skip canRun3SyncRequestsConsistentlyFromThreads sporadic on Myriad (#10598)

commit 506303cc79
Author: Ivan Novoselov <ivan.novoselov@intel.com>
Date:   Thu Feb 24 11:54:15 2022 +0300

    [Snippets][CPU] Fix empty shapes handling in canonicalization (#10632)

commit 23b74840c1
Author: Vladimir Dudnik <vladimir.dudnik@intel.com>
Date:   Thu Feb 24 10:49:38 2022 +0300

    renamed streams property (#10620)

commit e544f5e66f
Author: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Date:   Wed Feb 23 18:29:12 2022 +0300

    Enable einsum shape inference test (#10603)

commit 9dec8db964
Author: Anton Pankratov <anton.pankratov@intel.com>
Date:   Wed Feb 23 13:03:37 2022 +0300

    Common OV configuration tests (#10286)

    * Used new config for streams and threads

    * Fixed review comments in ba

    * format fix

    * fixed hello_query_device

    * Added STL string io

    * fixed tests

    * Fixed test

    * Fixed build

    * fixed format

    * Fixed build

    * try fix win

    * other any io specialization

    * Fixed after merge

    * renamed streams

    * build fixed

    * fixed build

    * fixed format

    * fix for old mac build

    * Fixed type of exception

    * test fix

    * Added ov configuration test

    * Added common OV properties tests

    * fix mklnn

    * fixed format

    * merge conflicts

    * Removed compile_model tests

    * removed duplicated test

commit c1919a0f1d
Author: Karol Blaszczak <karol.blaszczak@intel.com>
Date:   Wed Feb 23 10:53:37 2022 +0100

    update documents for Paddle inclusion (#10613)

    Introduce PaddlePaddle articles and include PP references in other articles

commit 7ff8ada805
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Wed Feb 23 06:29:03 2022 +0300

    Fixed API for transformations (#10584)

    * Fixed API for transformations

    * Fixed code style

    * Fixed build

    * Fixed typo

commit 75cca1e9e9
Author: Fedor Zharinov <fedor.zharinov@intel.com>
Date:   Wed Feb 23 01:30:08 2022 +0300

    [benchmark_app] error if -b is set but there's no batch info (#10592)

    * Added code showing error message if -b is provided, but got no batch info for inputs

    * stylefix / batch>1 case
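
    The batch-info check this commit describes can be sketched roughly as follows. This is a hypothetical illustration, not the actual benchmark_app code: the function name `validate_batch_flag`, its parameters, and the layout-string convention are all assumptions made for the sketch.

    ```python
    def validate_batch_flag(batch_size, input_layouts):
        """Reject an explicit -b value when no input exposes a batch dimension.

        batch_size: value of the -b flag, or None if the flag was not given.
        input_layouts: mapping of input name -> layout string, e.g. "NCHW".
        """
        if batch_size is None:
            return None  # -b not set, nothing to validate
        # Inputs whose layout contains a batch ('N') dimension.
        batched = [name for name, layout in input_layouts.items() if "N" in layout]
        if not batched:
            raise ValueError(
                "-b was set to {} but none of the inputs carry batch "
                "information (no 'N' in any layout)".format(batch_size))
        return batched
    ```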

commit 817550fa0a
Author: Vladimir Dudnik <vladimir.dudnik@intel.com>
Date:   Tue Feb 22 23:37:55 2022 +0300

    [OMZ] update OMZ submodule, docs updated (#10594)

    * update OMZ submodule, docs updated

    * rebase to master

commit 3f4e384d5d
Author: Ilya Churaev <ilya.churaev@intel.com>
Date:   Tue Feb 22 23:05:23 2022 +0300

    Disable reshape for new API (#10064)

    * Disable reshape for new API

    * Update cnn_network_ngraph_impl.cpp

    Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

commit 5b3b48aa17
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Tue Feb 22 20:11:42 2022 +0300

    samples overview & model protection: docs (#10596)

    * Renamed hetero md

    * Renamed some guides

    * Updated OpenVINO_Runtime_User_Guide.md

    * Updated plugin's page

    * More updates

    * Fixed links

    * Updated link names

    * Fixed links

    * Fixed docs build

    * Self-review

    * Fixed issues in doc snippets

    * Updated Samples_Overview.md

    * Updated model protection guide

    * Renamed ngraph_function creation samples

commit 37923a9183
Author: Liubov Talamanova <piccione-mail@yandex.ru>
Date:   Tue Feb 22 18:38:08 2022 +0300

    [POT] Remove DataFreeEngine (#10600)

commit 14d31d59af
Author: hyunback kim <hyunback.kim@intel.com>
Date:   Wed Feb 23 00:25:26 2022 +0900

    [GPU] Enable deconv with oneDNN (#10580)

    * [GPU] Enable deconv with oneDNN

    remove post-op data_type into oneDNN.

    Signed-off-by: hyunback <hyunback.kim@intel.com>

    * Update to use data_type in conv sum post-op.

    Signed-off-by: hyunback <hyunback.kim@intel.com>

commit b12c3389ee
Author: Ivan Novoselov <ivan.novoselov@intel.com>
Date:   Tue Feb 22 18:18:49 2022 +0300

    [Snippets] Add virtual destructors to Emitter and TargetMachine (#10588)

commit e2df6d149b
Author: Indira Salyahova <indira.salyahova@intel.com>
Date:   Tue Feb 22 17:46:08 2022 +0300

    [POT] Update face detection sample (#10471)

    * support cascade model for sw api

    * update mtcnnengine

    * delete empty line

commit dab1a34aa2
Author: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Date:   Tue Feb 22 17:19:23 2022 +0300

    checking the network batch-ability (internal helper func on top of bat… (#10446)

    * checking the network batchability (internal helper func on top of batch tracking) before doing hetero

    * more general logic with respect to batch-ability of the network

    * a dynamism check that I've owed from the PR-10560

    * using the DO-detached mechanism for early hetero exit, also fixed this flag in the Batching plugin (although minor, as the DO is removed by HETERO)

    * adding the dimension tracking logic depending on whether implicitly/explicitly the auto-batching is enabled

    * changed the DetectionOutput affinity markup to go over results, also accommodate Convert, so only 2 subgraphs are made by the HETERO

commit e59739ce88
Author: Nikolay Shchegolev <nikolay.shchegolev@intel.com>
Date:   Tue Feb 22 16:57:26 2022 +0300

    [CPU] RNN node enforce bf16 mode does not work. (#9859)

commit 71a0a6d261
Author: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
Date:   Tue Feb 22 16:54:56 2022 +0300

    [GNA] Klocwork fixes

commit bc0a84a1c1
Author: Roman Kazantsev <roman.kazantsev@intel.com>
Date:   Tue Feb 22 16:54:20 2022 +0300

    [MO] Print information about new API 2.0 (#10567)

    * [MO] Print information about new API 2.0

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Apply feedback

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

    * Apply feedback

    Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

commit aced89a655
Author: Indira Salyahova <indira.salyahova@intel.com>
Date:   Tue Feb 22 16:53:53 2022 +0300

    fix: don't pass parameter inplace_statistic for weights (#10593)

commit 5bb8f77c3f
Author: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Date:   Tue Feb 22 16:51:41 2022 +0300

    [Python API] Remove get/set_config methods from the PyOV (#10587)

commit 435584bb91
Author: Maxim Vafin <maxim.vafin@intel.com>
Date:   Tue Feb 22 16:46:48 2022 +0300

    Support dynamic Broadcast and new pattern for TI condition (#9735)

    * Support dynamic Broadcast and new pattern for TI condition

    * Apply review feedback

    * Fix broadcast if statement

commit 487bb67995
Author: Min, Byungil <byungil.min@intel.com>
Date:   Tue Feb 22 22:23:45 2022 +0900

    Resolve onednn fc issue to enable bert-base (#10177)

    + Enabled bert-base-ber model
    + Resolve failure of onednn fc

    Signed-off-by: Min, Byungil <byungil.min@intel.com>

commit 850f93f21b
Author: Maksim Kutakov <maksim.kutakov@intel.com>
Date:   Tue Feb 22 15:42:26 2022 +0300

    [CPU] INT8 tests for convolution sum fusing (#10359)

    * int8 tests

    * Sum second term port selection fix

    * Fix after rebase

commit 51ef938385
Author: Tingqian Li <tingqian.li@intel.com>
Date:   Tue Feb 22 20:23:20 2022 +0800

    [CPU] fix crash in resnet binary model (#9761)

commit 6dc8b8b047
Author: Tatiana Savina <tatiana.savina@intel.com>
Date:   Tue Feb 22 14:50:37 2022 +0300

    add note (#10566)

commit c80a872f73
Author: Anton Romanov <anton.romanov@intel.com>
Date:   Tue Feb 22 14:49:35 2022 +0300

    Fix Coverity in samples (#10583)

    * Fix coverity samples

    * Fixed coverity issue in speech sample

commit a3004e7d80
Author: Alexey Lebedev <alexey.lebedev@intel.com>
Date:   Tue Feb 22 14:48:55 2022 +0300

    [PYTHON API] reshape helper (#10402)

    * Add reshape helper

    * add dimension(range)

    * Add partial_shape helper

    * Fix code style

    * fix comments

    * Split reshape on several overloads

    * Fix code style

    * correct exception

    * remove range support

    * fix code style

    * Add exception

    * Dimension from str, PartialShape from str, reshape(str) support

    * Apply review comments

    * Add default init for shape

    * Add PS syntax examples

    * Remove pshape parsing from benchmark_app

    * Update src/bindings/python/src/pyopenvino/graph/model.cpp

    Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

    * Update src/bindings/python/src/pyopenvino/graph/model.cpp

    Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

    * Apply suggestions from code review

    Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

    Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
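
    The "Dimension from str, PartialShape from str" support above implies parsing a shape string such as "1,3,224..256,?" into per-dimension specs. The sketch below shows one plausible parsing scheme; `parse_partial_shape` and its `?`/`..` conventions are illustrative assumptions, not the actual pyopenvino binding code.

    ```python
    def parse_partial_shape(text):
        """Parse a shape string like "1,3,?,224..256" into per-dimension specs.

        Returns a list where each element is an int (static dim), None
        (fully dynamic dim), or a (min, max) tuple for a bounded range.
        """
        dims = []
        for token in text.split(","):
            token = token.strip()
            if token in ("?", "-1"):
                dims.append(None)                 # fully dynamic dimension
            elif ".." in token:
                lo, hi = token.split("..")
                dims.append((int(lo), int(hi)))   # bounded dimension range
            else:
                dims.append(int(token))           # static dimension
        return dims
    ```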

commit 991c9db1c1
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Tue Feb 22 14:32:57 2022 +0300

    Config api docs (#10563)

    * Renamed hetero md

    * Renamed some guides

    * Updated OpenVINO_Runtime_User_Guide.md

    * Updated plugin's page

    * More updates

    * Fixed links

    * Updated link names

    * Fixed links

    * Fixed docs build

    * Self-review

    * Fixed issues in doc snippets

commit 3f15afb926
Author: Sofya Balandina <sofya.balandina@intel.com>
Date:   Tue Feb 22 13:55:51 2022 +0300

    [IE TEST] Continue run after crash (#10037)

commit 3d223ebc2a
Author: Pavel Esir <pavel.esir@intel.com>
Date:   Tue Feb 22 13:51:10 2022 +0300

    [MO] update error message when reverse infer was not successful (#10576)

    * update error message when reverse infer was not successful

    * corrected message when there are several undefined Parameters

commit efd3c119fa
Author: Andrey Zaytsev <andrey.zaytsev@intel.com>
Date:   Tue Feb 22 13:33:44 2022 +0300

    Update Yocto documentation (#10547) (#10591)

    * installing-openvino-yocto: fix documentation links

    Point to the new Yocto docs website.

    Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

    * Update installing-openvino-yocto.md

    * installing-openvino-yocto: add step to checkout specific branch

    Request users to checkout specific branch of meta-intel where this
    version of OpenVINO is available.

    Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

    Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

    Co-authored-by: Anuj Mittal <anuj.mittal@intel.com>
    Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

commit 6500ec775d
Author: Ivan Novoselov <ivan.novoselov@intel.com>
Date:   Tue Feb 22 13:30:15 2022 +0300

    [Snippets] Check for cyclic dependencies during ternary merge. (#10374)

commit a3887f3328
Author: Alexey Varyzgin <alexey.varyzgin@intel.com>
Date:   Tue Feb 22 02:05:19 2022 -0800

    [CPU] Transpose node optimized with Reorder (#10551)

commit b7ead46943
Author: Irina Efode <irina.efode@intel.com>
Date:   Tue Feb 22 13:02:05 2022 +0300

    [IE TESTS] Functional tests Review. Part 2 (#10476)

    * [IE TESTS] Functional tests Review. Part 2

    * tmp

    * revert set_blob changes

commit d57fb75ba6
Author: Irina Efode <irina.efode@intel.com>
Date:   Tue Feb 22 12:58:07 2022 +0300

    Migration to OV2.0 (#10562)

commit 171ad9536fce215e745aa91cdcaf5f6947ba0f94…
2022-03-14 07:39:49 +03:00
Maxim Gordeev
c790aa85cb [IE Samples] Fixed rights for file with image in hello_nv12_input_classification (#10925) 2022-03-12 12:41:02 +03:00
Dawid Kożykowski
f756d55dc6 Snippets for preprocessing migration page (#10917)
* update preprocessing snippets

* add missing file
2022-03-11 21:19:16 +03:00
Przemyslaw Wysocki
81ffb7a3bc [Docs] Add Python snippets for configure devices [2022.1] (#10916)
* Add configure devices Python snippets

* Minor changes
2022-03-11 21:17:04 +03:00
Mikhail Nosov
205e6ba573 Merge 10898 (#10903) 2022-03-11 17:42:19 +03:00
Vladimir Zinoviev
b8d23e04f1 [LPT] Fix out of bounds access in reshape (#10850) 2022-03-11 15:59:11 +03:00
Anton Dudchenko
a43369c152 [VPU] Fix MyriadPlugin build with enabled options of Conditional Compilation (#10812) 2022-03-11 14:54:10 +03:00
Ilya Churaev
0b4b627e02 Try to fix visualization (#10896)
* Try to fix visualization

* New try
2022-03-11 14:26:32 +03:00
Ilya Churaev
76c82ae844 Added intro to transformation guide (#10895) 2022-03-11 13:10:15 +03:00
Nikolay Tyukaev
939c420435 benchmark legal, snippet margin bottom (#10887) 2022-03-11 11:09:54 +03:00
Sergey Lyubimtsev
7d7af2a9bf Update APT instructions according to repository configuration (#10871) 2022-03-11 10:45:10 +03:00
Ilya Lavrenov
829c8c98c5 DOCS: Removed useless 4 spaces in snippets (#10870)
* Updated snippets

* Added link to encryption
2022-03-11 08:43:18 +03:00
Alexey Lebedev
5f19d22323 [docs] python snippet for dynamic shapes release branch (#10882)
* Create snipp

* link python snipp with doc

* fix docs

* Apply suggestions from code review

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Fix cpp comments

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-03-11 08:41:55 +03:00
Andrey Zaytsev
cb635050fb Re-structure Model Optimizer User Guide and Clean-up (#10801) (#10879)
* Modified the workflow diagram

* Moved supported topology lists to separate topics

* Additional changes

* Removed Supported Topologies list and Deprecated pages

* Created the Model Conversion Tutorials section for instructions for specific models

* Topic names alignment, removed Default_Model_Optimizer_Optimizations.md

* Additional structural changes

* Fixed links

* heading fixes
2022-03-11 00:25:54 +03:00
Tatiana Savina
68863478d3 cherrypick (#10865) 2022-03-10 19:39:17 +03:00
Roman Kazantsev
8dacbf789d [MO] Remove IR frontend from available frontend list in MO (#10798) (#10807)
* [MO] Remove IR frontend from available frontend list in MO

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue - forget to pass FEM

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue for TF with new FE and default legacy

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-03-10 19:31:09 +03:00
Vladimir Dudnik
8f9c368aae update intel models, fix docs (#10847) 2022-03-10 18:32:11 +03:00
Anastasia Kuporosova
5f755d5e4a [Python API] Update doc style (#10854)
* [Python API] Update doc style

* apply comments
2022-03-10 15:05:11 +03:00
Anton Pankratov
22a8e75bb7 Added callback and wait migration guide release (#10804)
* Added async inference migration guide

* fixed doc

* fixed build

* fixed doc

* fixed doc
2022-03-10 15:03:31 +03:00
Vladimir Paramuzov
d44cad85ed [GPU] GPU plugin docs (#10845) 2022-03-10 15:01:00 +03:00
Alexander Kozlov
0047db7377 Revised Tuning For Performance and Model optimization docs (#10276) (#10784)
* Revised Tuning for performance and Model optimization docs

* Fixed links

* Fixed link

* Applied comments

* Fixed one more comment
2022-03-10 10:04:02 +00:00
Maxim Vafin
4b677dd5b3 [MO] Fix swish value infer (#10792)
* [MO] Fix swish value infer

* Add test
2022-03-10 12:31:19 +03:00
Nikita Malinin
390ca9f45f [POT] Update BC with the Parameter nodes connection 22.1 (#10852)
* Update BC with the Parameter nodes connection

* Update test_sanity with octave
2022-03-10 11:05:32 +03:00
Katarzyna Mitrus
5f4f27cd73 [DOCS] Python snippets for Hetero execution page (#10824)
* Update ov_hetero snippets

* Update hetero docs snippets with GPU profiling

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-09 18:37:34 +03:00
Tatiana Savina
617160492f [DOCS] Fix images (#10849)
* [DOCS] Fixes for nightly (#10806)

* add img

* wb img for input

* dataset added

* add img

* wb img for input

* dataset added

* ov_fix

* more imgs

* new img

* new img

* nlp

* new img

* delete img

* cherrypicks
2022-03-09 17:34:39 +03:00
Ilya Lavrenov
8308b1e122 Updated common IE pipeline infer-request section (#10844)
* Updated common IE pipeline infer-request section

* Update ov_infer_request.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-03-09 17:34:11 +03:00
Maxim Shevtsov
07322aa5aa more info after the What's new Sessions' questions (#10803)
* more info after the What's new Sessions' questions

* generalizing the optimal_batch_size vs explicit value message

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2022-03-09 12:35:03 +00:00
Liubov Talamanova
d64c5d8c7c Moved quantization templates to openvino/tools/pot (#10816) 2022-03-09 15:14:58 +03:00
Ilya Churaev
c31129c7cd Fixed duplicated words (#10835) 2022-03-09 13:13:41 +03:00
Ilya Lavrenov
db05e54483 Added migration for deployment (#10800)
* Added migration for deployment

* Addressed comments
2022-03-05 15:18:23 +03:00
Egor Duplensky
c80e70a917 [CPU] Avoid using cache for constant inplace or multi-child edges (#10795) 2022-03-05 14:37:43 +03:00
Nikita Malinin
4d6b43d76f [POT] Update IEEngine with the Dynamic model support (22.1) (#10809)
* Update IEEngine with the Dynamic models support

* Update with the batch

* Method naming fix

* Update image_loader & tests with dynamic models

* Update test_sanity.py

* Replace custom_mo_config from the model
2022-03-05 14:35:59 +03:00
Maksim Kutakov
cdd4f56ba1 [CPU] Use raw pointer to share peer data for constants (#10794) 2022-03-05 12:31:57 +03:00
yanlan song
3c75a4fd16 fix multi infer result corrupt issue (#10777)
* do not share blob

Signed-off-by: fishbell <bell.song@intel.com>

* build error

Signed-off-by: fishbell <bell.song@intel.com>

* remove comment codes

Signed-off-by: fishbell <bell.song@intel.com>
2022-03-05 13:18:11 +08:00
Dmitry Pigasin
6354ac6b5d [CPP Speech Sample] Fix result saving when batch size is not 1 (#10797)
* Fix result saving when batch size is not 1

* Remove useless if statement

* improved processing scores for model with more than one output

* added checking on count of model outputs

* improve if statements

* divide fix for model with several outputs to other PR

Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>
2022-03-04 19:10:41 +03:00
Maxim Gordeev
b51bc06077 Improved processing outputs for model with several outputs (#10780) 2022-03-04 15:49:13 +03:00
Mateusz Bencer
93320f4fd6 Update --extensions MO doc (#10782)
* update mo doc help

* Apply suggestions from code review

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update tools/mo/openvino/tools/mo/utils/cli_parser.py

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2022-03-04 15:47:54 +03:00
981 changed files with 49986 additions and 17526 deletions


@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: android_arm64
@@ -109,7 +110,6 @@ jobs:
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_OPENCV=OFF
-DENABLE_TESTS=ON
-DENABLE_SAMPLES=ON
-DENABLE_INTEL_MYRIAD=OFF


@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Lin
@@ -155,6 +157,7 @@ jobs:
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_REQUIREMENTS_INSTALL=OFF
-DENABLE_OPENCV=ON
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
@@ -252,6 +255,7 @@ jobs:
export MO_ROOT=$(INSTALL_DIR)/tools/mo
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_DIR)/tests/mo/unit_tests --junitxml=TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
condition: false
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
@@ -336,6 +340,7 @@ jobs:
- script: |
export PATH=$HOME/.local/bin:$PATH
export IE_APP_PATH=$(INSTALL_DIR)/samples_bin
export LD_LIBRARY_PATH=$IE_APP_PATH:$LD_LIBRARY_PATH
export IE_APP_PYTHON_PATH=$(INSTALL_DIR)/samples/python/
export SHARE=$(INSTALL_DIR)/tests/smoke_tests/samples_smoke_tests_data/
export WORKSPACE=$(INSTALL_DIR)


@@ -13,6 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: linux_arm64
@@ -34,13 +35,13 @@ jobs:
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_OPENCV: $(WORK_DIR)/build_opencv
BUILD_OPENVINO: $(WORK_DIR)/build
BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
BUILD_OPEN_MODEL_ZOO: $(WORK_DIR)/build_open_model_zoo
INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
INSTALL_OPEN_MODEL_ZOO: $(INSTALL_OPENVINO)/extras/open_model_zoo
WORK_DIR: $(Pipeline.Workspace)/_w
@@ -125,20 +126,19 @@ jobs:
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_OPENCV=OFF
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
-DPYTHON_MODULE_EXTENSION=".so"
-DENABLE_TESTS=ON
-DENABLE_FUNCTIONAL_TESTS=ON
-DENABLE_GAPI_TESTS=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DTHREADING=SEQ -DENABLE_LTO=ON
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
-DPYTHON_MODULE_EXTENSION=".so"
-DENABLE_TESTS=ON
-DENABLE_FUNCTIONAL_TESTS=ON
-DENABLE_GAPI_TESTS=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DTHREADING=SEQ -DENABLE_LTO=ON
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_SAMPLES=ON
-DBUILD_java_api=OFF
@@ -173,19 +173,19 @@ jobs:
cmakeArgs: >
-GNinja
-DInferenceEngineDeveloperPackage_DIR=$(BUILD_OPENVINO)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=$(INSTALL_PYTHON)/bin/python3.8
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=$(INSTALL_PYTHON)/bin/python3.8
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARIES=$(INSTALL_PYTHON)/lib
-DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.8/site-packages/numpy/core/include
-DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.8/site-packages/numpy/core/include
-DPYTHON_MODULE_EXTENSION=".so"
-DPYBIND11_FINDPYTHON=OFF
-DPYBIND11_NOPYTHON=OFF
-DPYTHONLIBS_FOUND=TRUE
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_DATA=OFF
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-rpath-link,$(INSTALL_OPENCV)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
@@ -211,15 +211,15 @@ jobs:
inputs:
cmakeArgs: >
-GNinja
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/local/bin/python3.8
-DPYTHON_INCLUDE_DIR=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_INCLUDE_DIR=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DOpenVINO_DIR=$(BUILD_OPENVINO)
-DInferenceEngine_DIR=$(BUILD_OPENVINO)
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-Dngraph_DIR=$(BUILD_OPENVINO)
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPEN_MODEL_ZOO)


@@ -4,6 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
jobs:
- job: Lin


@@ -95,7 +95,6 @@ jobs:
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_INTEL_MYRIAD_COMMON=OFF
-DENABLE_INTEL_GNA=OFF
-DENABLE_OPENCV=OFF
-DENABLE_CPPLINT=OFF
-DENABLE_TESTS=OFF
-DENABLE_INTEL_CPU=ON


@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Mac
@@ -143,7 +145,6 @@ jobs:
set -e
mkdir -p $(INSTALL_DIR)/opencv/
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
cp -R $(REPO_DIR)/temp/opencv_4.5.2_osx/opencv/* $(INSTALL_DIR)/opencv/
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'


@@ -13,11 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
jobs:
- job: Win
@@ -30,7 +32,7 @@ jobs:
maxParallel: 2
# About 150% of total time
timeoutInMinutes: 150
timeoutInMinutes: 180
pool:
name: WIN_VMSS_VENV_D8S_WU2
@@ -167,7 +169,7 @@ jobs:
workingDirectory: $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Install Samples Tests'
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'


@@ -59,7 +59,6 @@ RUN cmake .. \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DENABLE_INTEL_MYRIAD_COMMON=OFF \
-DENABLE_INTEL_GNA=OFF \
-DENABLE_OPENCV=OFF \
-DENABLE_CPPLINT=OFF \
-DENABLE_NCC_STYLE=OFF \
-DENABLE_TESTS=OFF \

3
.gitignore vendored

@@ -1,5 +1,8 @@
# build/artifact dirs
_*
[Bb]uild*/
cmake-build*
# but ensure we don't skip __init__.py and __main__.py
!__init__.py
!__main__.py


@@ -4,7 +4,7 @@
if(DEFINED BUILD_SHARED_LIBS AND NOT BUILD_SHARED_LIBS)
# 'target_link_libraries' does not work correctly when called from
# different directly where 'add_library' is called: CMake generates
# different directory where 'add_library' is called: CMake generates
# incorrect OpenVINOConfig.cmake in this case
cmake_minimum_required(VERSION 3.17)
else()
@@ -13,7 +13,9 @@ endif()
project(OpenVINO DESCRIPTION "OpenVINO toolkit")
set(IE_MAIN_SOURCE_DIR ${OpenVINO_SOURCE_DIR}/inference-engine)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
endif()
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"


@@ -47,6 +47,9 @@ Jenkinsfile @openvinotoolkit/openvino-admins
/src/inference/include/ie/cldnn/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/src/inference/include/openvino/runtime/intel_gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/src/plugins/intel_gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/snippets/gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/OV_Runtime_UG/supported_plugins/GPU.md @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
# IE VPU:
/src/plugins/intel_myriad @openvinotoolkit/openvino-ie-vpu-maintainers
@@ -63,6 +66,9 @@ Jenkinsfile @openvinotoolkit/openvino-admins
/src/plugins/intel_gna/ @openvinotoolkit/openvino-ie-gna-maintainers
/src/inference/include/ie/gna/ @openvinotoolkit/openvino-ie-gna-maintainers
# IE ARM CPU:
/docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md @openvinotoolkit/openvino_contrib-arm_plugin-maintainers
# IE Auto (MULTI) plugin:
/src/plugins/auto/ @openvinotoolkit/openvino-ie-auto-multi-maintainers
/src/inference/include/ie/multi-device/ @openvinotoolkit/openvino-ie-auto-multi-maintainers


@@ -1,5 +1,5 @@
# OpenVINO™ Toolkit
[![Stable release](https://img.shields.io/badge/version-2021.4.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.4.2)
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
@@ -42,9 +42,9 @@ Please report questions, issues and suggestions using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_Runtime_User_Guide.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_README.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino


@@ -23,9 +23,7 @@ message(STATUS "MODELS_PATH=" ${MODELS_PATH})
fetch_models_and_validation_set()
if(COMMAND get_linux_name)
get_linux_name(LINUX_OS_NAME)
endif()
get_linux_name(LINUX_OS_NAME)
if(CMAKE_CROSSCOMPILING AND CMAKE_HOST_SYSTEM_NAME MATCHES Linux AND CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(protoc_version "3.18.2")
@@ -93,7 +91,19 @@ if(THREADING STREQUAL "OMP")
endif()
## TBB package
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
unset(_ov_download_tbb_done CACHE)
#
# The function downloads prebuilt TBB package
# NOTE: the function should be used if system TBB is not found
# or ENABLE_SYSTEM_TBB is OFF
#
function(ov_download_tbb)
if(_ov_download_tbb_done OR NOT THREADING MATCHES "^(TBB|TBB_AUTO)$")
return()
endif()
set(_ov_download_tbb_done ON CACHE BOOL "Whether prebuilt TBB is already downloaded")
reset_deps_cache(TBBROOT TBB_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
@@ -109,16 +119,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f1c9b9e2861efdaa01552bd25312ccbc5feeb45551e5f91ae61e29221c5c1479")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(ANDROID) # Should be before LINUX due LINUX is detected as well
RESOLVE_DEPENDENCY(TBB
ARCHIVE_ANDROID "tbb2020_20200404_android.tgz"
@@ -131,16 +131,6 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "95b2f3b0b70c7376a0c7de351a355c2c514b42c4966e77e3e34271a599501008")
if(ENABLE_TBBBIND_2_5)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
elseif(LINUX AND AARCH64)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "keembay/tbb2020_38404_kmb_lic.tgz"
@@ -160,18 +150,71 @@ if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
update_deps_cache(TBBROOT "${TBB}" "Path to TBB root folder")
if(EXISTS "${TBBROOT}/lib/cmake/TBB/TBBConfig.cmake")
# oneTBB case
update_deps_cache(TBB_DIR "${TBB}/lib/cmake/TBB" "Path to TBB cmake folder")
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib/cmake/tbb/TBBConfig.cmake")
# oneTBB release package version less than 2021.6.0
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/tbb" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib64/cmake/TBB/TBBConfig.cmake")
# 64-bits oneTBB case
update_deps_cache(TBB_DIR "${TBBROOT}/lib64/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/cmake/TBBConfig.cmake")
# custom downloaded or user provided TBB
update_deps_cache(TBB_DIR "${TBBROOT}/cmake" "Path to TBB cmake folder")
else()
update_deps_cache(TBB_DIR "${TBB}/cmake" "Path to TBB cmake folder")
message(WARNING "Failed to find TBBConfig.cmake in ${TBBROOT} tree. Custom TBBConfig.cmake will be used")
endif()
debug_message(STATUS "tbb=" ${TBB})
debug_message(STATUS "tbb_dir=" ${TBB_DIR})
debug_message(STATUS "tbbroot=" ${TBBROOT})
set(TBB "${TBB}" PARENT_SCOPE)
endfunction()
## TBBBind_2_5 package
unset(_ov_download_tbbbind_2_5_done CACHE)
#
# The function downloads static prebuilt TBBBind_2_5 package
# NOTE: the function should be called only if we have TBB with a version less than 2021
#
function(ov_download_tbbbind_2_5)
if(_ov_download_tbbbind_2_5_done OR NOT ENABLE_TBBBIND_2_5)
return()
endif()
set(_ov_download_tbbbind_2_5_done ON CACHE BOOL "Whether prebuilt TBBBind_2_5 is already downloaded")
reset_deps_cache(TBBBIND_2_5_DIR)
if(DEFINED ENV{THIRDPARTY_SERVER_PATH})
set(IE_PATH_TO_DEPS "$ENV{THIRDPARTY_SERVER_PATH}")
elseif(DEFINED THIRDPARTY_SERVER_PATH)
set(IE_PATH_TO_DEPS "${THIRDPARTY_SERVER_PATH}")
endif()
if(WIN32 AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
elseif(ANDROID)
# don't have TBBBIND_2_5
elseif(LINUX AND X86_64)
RESOLVE_DEPENDENCY(TBBBIND_2_5
ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
TARGET_PATH "${TEMP}/tbbbind_2_5"
ENVIRONMENT "TBBBIND_2_5_ROOT"
SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
else()
message(WARNING "prebuilt TBBBIND_2_5 is not available.
Build oneTBB from sources and set TBBROOT environment var before OpenVINO cmake configure")
endif()
update_deps_cache(TBBBIND_2_5_DIR "${TBBBIND_2_5}/cmake" "Path to TBBBIND_2_5 cmake folder")
debug_message(STATUS "tbb=" ${TBB})
if(DEFINED IE_PATH_TO_DEPS)
unset(IE_PATH_TO_DEPS)
endif()
endif()
set(TBBBIND_2_5 "${TBBBIND_2_5}" PARENT_SCOPE)
endfunction()
## OpenCV
if(ENABLE_OPENCV)
@@ -265,8 +308,6 @@ else()
reset_deps_cache(OpenCV_DIR)
endif()
include(${OpenVINO_SOURCE_DIR}/src/cmake/ie_parallel.cmake)
if(ENABLE_INTEL_GNA)
reset_deps_cache(
GNA_EXT_DIR


@@ -23,13 +23,13 @@ else()
unset(IE_OWN_TBB_CONFIG)
endif()
unset(TBB_DIR)
unset(TBB_DIR CACHE)
find_package(TBB
CONFIG
PATHS ${TBBROOT}/cmake
${TBBROOT}/lib/cmake/TBB # oneTBB case
${IEDevScripts_DIR}/${IE_OWN_TBB_CONFIG}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH
)
NO_DEFAULT_PATH)
find_package_handle_standard_args(TBB CONFIG_MODE)


@@ -7,7 +7,7 @@ include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING; CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
@@ -79,6 +79,4 @@ if(ENABLE_AVX512F)
endif()
endif()
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()
set(CMAKE_VERBOSE_MAKEFILE ${VERBOSE_BUILD} CACHE BOOL "" FORCE)


@@ -31,4 +31,8 @@ if (LINUX)
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()


@@ -23,7 +23,7 @@ execute_process(
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
@@ -107,8 +107,11 @@ function(ov_ncc_naming_style)
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it, sources with the same name from different directories would map to the same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
foreach(source IN LISTS sources)
set(output_file "${ncc_style_bin_dir}/${source}.ncc_style")
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
add_custom_command(


@@ -1,5 +1,5 @@
# custom OpenVINO values
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN|OPENVINO_OP)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'


@@ -82,7 +82,7 @@ else()
set(ENABLE_TBBBIND_2_5_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ON "ENABLE_TBBBIND_2_5_DEFAULT" OFF)
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for inference engine" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
@@ -126,7 +126,7 @@ ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS
ie_dependent_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON "NOT MINGW" OFF)
ie_option (ENABLE_OPENCV "enables OpenCV" ON)
ie_option (ENABLE_OPENCV "enables OpenCV" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
@@ -136,6 +136,8 @@ ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are link
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" OFF "THREADING MATCHES TBB;LINUX" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)


@@ -12,6 +12,7 @@
# * `Runtime`: OpenVINO C++ and C Core & Inference Runtime, frontend common
# * `ONNX`: OpenVINO ONNX frontend
# * `Paddle`: OpenVINO Paddle frontend
# * `TensorFlow`: OpenVINO TensorFlow frontend
#
# If no components are specified, `Runtime` component is provided:
#
@@ -146,14 +147,34 @@ set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
if(NOT enable_system_tbb)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
if(DEFINED ENV{TBBROOT})
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
file(TO_CMAKE_PATH $ENV{TBBROOT} ENV_TBBROOT)
endif()
set(find_package_tbb_extra_args
CONFIG
PATHS
# oneTBB case exposed via export TBBROOT=<custom TBB root>
"${ENV_TBBROOT}/lib64/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/tbb"
# "$ENV{TBB_DIR}"
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
"${TBBROOT}/cmake"
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(_tbb_dir)
endif()
unset(enable_system_tbb)
_ov_find_dependency(TBB
COMPONENTS tbb tbbmalloc
CONFIG
PATHS ${TBBROOT}/cmake
${_tbb_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
${find_package_tbb_extra_args})
set(install_tbbbind "@install_tbbbind@")
if(install_tbbbind)
@@ -164,6 +185,7 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
endif()
unset(install_tbbbind)
endif()
_ov_find_dependency(Threads)
@@ -175,7 +197,7 @@ if(ENABLE_INTEL_GNA AND NOT ENABLE_INTEL_GNA_SHARED AND NOT libGNA_FOUND)
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS ${CMAKE_CURRENT_LIST_DIR}
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
endif()


@@ -46,6 +46,7 @@ endif()
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check dir.")
set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(OTE_DOCS_DIR "" CACHE PATH "Path to training_extensions documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")
@@ -159,6 +160,15 @@ function(build_docs)
--output_dir=${DOCS_BUILD_DIR}/workbench)
endif()
# ote doc files
if(EXISTS "${OTE_DOCS_DIR}")
get_filename_component(OTE_DOCS_DIR "${OTE_DOCS_DIR}" ABSOLUTE)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OTE_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/ote)
endif()
# ovms doc files
if(EXISTS "${OVMS_DOCS_DIR}")
get_filename_component(OVMS_DOCS_DIR "${OVMS_DOCS_DIR}" ABSOLUTE)


@@ -264,6 +264,10 @@ TAB_SIZE = 4
ALIASES = "ref_ie{1}=@ref InferenceEngine::\1 \"\1\""
ALIASES += sphinxdirective="\n\xmlonly<sphinxdirective>"
ALIASES += endsphinxdirective="</sphinxdirective>\endxmlonly"
ALIASES += sphinxtabset="\n\xmlonly<sphinxtabset></sphinxtabset>\endxmlonly\n"
ALIASES += endsphinxtabset="\n\xmlonly<endsphinxtabset></endsphinxtabset>\endxmlonly\n"
ALIASES += sphinxtab{1}="\n\xmlonly<sphinxtab>\1</sphinxtab>\endxmlonly\n"
ALIASES += endsphinxtab="\n\xmlonly<endsphinxtab></endsphinxtab>\endxmlonly\n"
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For


@@ -0,0 +1,239 @@
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
To enable operations not supported by OpenVINO out of the box, you may need an extension for an OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/gpu/custom_kernels_api.cpp part0
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/gpu/custom_kernels_api.py part0
@endsphinxtab
@endsphinxtabset
All OpenVINO samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU
-c <absolute_path_to_config>/custom_layer_example.xml
```
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the `CustomLayer` type for every custom operation you provide.
The definitions described in the sections below use the following notations:
Notation | Description
---|---
(0/1) | Can have zero or one instance of this node or attribute
(1) | Must have only one instance of this node or attribute
(0+) | Can have any number of instances of this node or attribute
(1+) | Can have one or more instances of this node or attribute
### CustomLayer Node and Sub-Node Structure
`CustomLayer` node contains the entire configuration for a single custom operation.
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
### Kernel Node and Sub-Node Structure
`Kernel` node contains all kernel source code configuration.
**Sub-nodes**: `Source` (1+), `Define` (0+)
### Source Node and Sub-Node Structure
`Source` node points to a single OpenCL source file.
| Attribute Name | \# |Description|
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. Note that the path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
**Sub-nodes**: None
### Define Node and Sub-Node Structure
`Define` node configures a single `#&zwj;define` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the IR. |
**Sub-nodes:** None
The resulting JIT has the following form:
`#&zwj;define [name] [type] [value/default]`.
### Buffers Node and Sub-Node Structure
`Buffers` node configures all input/output buffers for the OpenCL entry
function. No buffers node structure exists.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
### Data Node and Sub-Node Structure
`Data` node configures a single input with static data, for example,
weights or biases.
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to an operation in the IR |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |
**Sub-nodes**: None
### Tensor Node and Sub-Node Structure
`Tensor` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB`, and same values in all lowercase. Default value: `BFYX` |
### CompilerOptions Node and Sub-Node Structure
`CompilerOptions` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
|--------|-----|------|
| `options` | (1) | Options string to be passed to the OpenCL compiler |
**Sub-nodes**: None
### WorkSizes Node and Sub-Node Structure
`WorkSizes` node configures the global/local work sizes to be used when
queuing an OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global="B*F*Y*X" local=""` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. Default value: `output` |
**Sub-nodes**: None
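The work-size formulas are plain integer expressions over the `B`, `F`, `Y`, `X` dimensions. As an illustration only (not plugin code), a minimal Python sketch of how such a formula could be evaluated:

```python
# Hypothetical helper illustrating how a WorkSizes formula such as
# global="X,Y,B*F" could be evaluated over the tensor dimensions.
def eval_work_sizes(formula: str, dims: dict) -> list:
    """Evaluate comma-separated work-size expressions over B, F, Y, X."""
    sizes = []
    for expr in formula.split(","):
        # Only +, -, /, *, % over the B/F/Y/X values are expected;
        # results are truncated to integers, matching integer arithmetic.
        sizes.append(int(eval(expr, {"__builtins__": {}}, dims)))
    return sizes

dims = {"B": 1, "F": 96, "Y": 55, "X": 55}
print(eval_work_sizes("X,Y,B*F", dims))   # [55, 55, 96]
print(eval_work_sizes("B*F*Y*X", dims))   # [290400]
```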
## Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">
<Source filename="custom_layer_kernel.cl"/>
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
</Kernel>
<Buffers>
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
</Buffers>
<CompilerOptions options="-cl-mad-enable"/>
<WorkSizes global="X,Y,B*F"/>
</CustomLayer>
```
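Because the configuration is plain XML, it can be inspected with standard XML tooling. The sketch below (Python, illustrative only; the GPU plugin uses its own parser) reads the example configuration above and lists the kernel entry point and buffer bindings:

```python
import xml.etree.ElementTree as ET

# The example configuration from the section above.
config = """<CustomLayer name="ReLU" type="SimpleGPU" version="1">
  <Kernel entry="example_relu_kernel">
    <Source filename="custom_layer_kernel.cl"/>
    <Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
  </Kernel>
  <Buffers>
    <Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
    <Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
  </Buffers>
  <CompilerOptions options="-cl-mad-enable"/>
  <WorkSizes global="X,Y,B*F"/>
</CustomLayer>"""

layer = ET.fromstring(config)
print(layer.get("name"))                  # ReLU
print(layer.find("Kernel").get("entry"))  # example_relu_kernel
# Each Tensor sub-node binds an IR port to an OpenCL kernel argument index.
for tensor in layer.find("Buffers"):
    print(tensor.get("type"), "-> arg", tensor.get("arg-index"))
```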
## Built-In Definitions for Custom Layers
The following table includes definitions that are attached before
user sources.
For an example, see [Example Kernel](#example-kernel).
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX` |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`|
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor: `BFYX`, `BYXF`, `YXFB`, `FYXB`, or `ANY`. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#&zwj;ifdef/#&zwj;endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.|
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
All `<TENSOR>` values are automatically defined for every tensor
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
in the following example:
```c
#define INPUT0_DIMS_SIZE 4
#define INPUT0_DIMS (int []){ 1,96,55,55, }
```
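For a dense tensor with no padding, each pitch in `<TENSOR>_PITCHES` is simply the product of the dimensions that follow it. A short Python sketch (an illustration of the dense, unpadded BFYX case, not plugin code):

```python
def bfyx_pitches(dims):
    """Element pitches of a dense, unpadded BFYX tensor: each pitch is the
    product of all dimensions after it, so the innermost (X) pitch is 1."""
    pitches = []
    stride = 1
    for d in reversed(dims):
        pitches.append(stride)
        stride *= d
    return list(reversed(pitches))

# For INPUT0_DIMS = { 1, 96, 55, 55 } (B, F, Y, X):
print(bfyx_pitches([1, 96, 55, 55]))  # [290400, 3025, 55, 1]
```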
## Example Kernel<a name="example-kernel"></a>
```c
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
__kernel void example_relu_kernel(
const __global INPUT0_TYPE* input0,
__global OUTPUT0_TYPE* output)
{
const uint idx = get_global_id(0);
const uint idy = get_global_id(1);
const uint idbf = get_global_id(2); // batches*features, as OpenCL supports 3D nd-ranges only
const uint feature = idbf % OUTPUT0_DIMS[1];
const uint batch = idbf / OUTPUT0_DIMS[1];
//notice that pitches are in elements, not in bytes!
const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;
INPUT0_TYPE value = input0[in_id];
// neg_slope (which is non-zero for leaky ReLU) is put automatically as #define, refer to the config xml
output[out_id] = value < 0 ? value * neg_slope : value;
}
```
> **NOTE**: As described in the previous section, all items like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> OpenVINO for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
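The kernel's indexing arithmetic can also be replayed on the host to sanity-check a configuration. A minimal Python reference of the leaky-ReLU body above (illustrative only; the names mirror the JIT definitions, and the tiny tensor shape is a hypothetical example):

```python
def relu_item(input0, pitches, offset, neg_slope, b, f, y, x):
    """Reproduce the kernel's element-wise leaky ReLU for one (b, f, y, x);
    pitches are in elements, as in <TENSOR>_PITCHES."""
    idx = b * pitches[0] + f * pitches[1] + y * pitches[2] + x * pitches[3] + offset
    value = input0[idx]
    return value * neg_slope if value < 0 else value

# Tiny 1x1x2x2 (B, F, Y, X) tensor, dense layout -> pitches [4, 4, 2, 1], offset 0.
data = [-1.0, 2.0, -3.0, 4.0]
out = [relu_item(data, [4, 4, 2, 1], 0, 0.5, 0, 0, y, x)
       for y in range(2) for x in range(2)]
print(out)  # [-0.5, 2.0, -1.5, 4.0]
```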
## Debugging Tips<a name="debugging-tips"></a>
* **Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, you can use `printf` in your kernels.
However, be careful not to print excessively, as `printf` can
generate a large amount of data. The output buffer is limited in size, so
your output may be truncated to fit it. Also, because of
buffering, you only receive the whole buffer of output when
execution ends.<br>
For more information, refer to the [printf
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).


@@ -1,4 +1,4 @@
# OpenVINO Extensibility Mechanism {#openvino_docs_Extensibility_UG_Intro}
# OpenVINO Extensibility Mechanism {#openvino_docs_Extensibility_UG_Intro}
@sphinxdirective
@@ -7,36 +7,68 @@
:hidden:
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
@endsphinxdirective
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks including
TensorFlow, Caffe, MXNet, Kaldi, PaddlePaddle, and ONNX. The list of supported operations (layers) is different for
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, that is those not included in the list, are not recognized by OpenVINO™ out-of-the-box. Therefore, creating Intermediate Representation (IR) for a model using them requires additional steps. This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to plug in your own implementation for existing or completely new operations.
Custom operations, that is, those not included in the list, are not recognized by OpenVINO™ out-of-the-box. The need for a custom operation may appear in two main cases:
If your model contains operations not normally supported by OpenVINO™, the OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
1. A regular framework operation that is new or rarely used, which is why it hasn't been implemented in OpenVINO yet.
There are two steps to support inference of a model with custom operation(s):
1. Add support for a [custom operation in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) so
the Model Optimizer can generate the IR with the operation.
2. Create a custom operation in it as described in the [Custom Operation](add_openvino_ops.md).
2. A new user operation that was created for some specific model topology by a model author using framework extension capabilities.
## OpenVINO™ Extensions
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
OpenVINO™ provides extensions for:
Defining a new custom operation basically consists of two parts:
* [Custom OpenVINO™ Operation](add_openvino_ops.md):
- Enables the creation of unsupported operations
- Enables the use of `ov::Core::read_model` to read models with unsupported operations
- Provides a shape inference mechanism for custom operations
- Provides an evaluate method which allows supporting the operation on CPU or performing constant folding
1. Definition of the operation semantics in OpenVINO: the code that describes how this operation should be inferred, consuming input tensor(s) and producing output tensor(s). How to implement execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details. You can review the complete code, which is fully compilable and up-to-date, to see how it works.
2. A mapping rule that facilitates conversion of the framework operation representation to the OpenVINO-defined operation semantics.
## Load extensions to OpenVINO™ Runtime
The first part is required for inference; the second is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part; the next sections describe them in detail.
## Definition of Operation Semantics
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such a decomposition gives the desired performance, then a low-level operation implementation is not required. When deciding on the feasibility of such a decomposition, refer to the latest OpenVINO operation set. You can use any valid combination of existing operations. How to map a custom operation is described in the next section of this document.
If such a decomposition is not possible, or turns out too bulky, with many constituent operations that do not perform well, then a new class for the custom operation should be implemented as described in the [Custom Operation Guide](add_openvino_ops.md).
Prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above, and then, after verifying correctness of inference and the resulting performance, optionally invest in a bare-metal C++ implementation.
## Mapping from Framework Operation
Depending on the model format used for import, the mapping of a custom operation is implemented differently. Choose one of the following:
1. If the model is represented in the ONNX (including models exported from PyTorch to ONNX) or PaddlePaddle formats, then one of the classes from the [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the Model Optimizer `--extensions` option or when a model is imported directly into the OpenVINO runtime using the `read_model` method. A Python API is also available for runtime model importing.
2. If the model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in the Model Optimizer only.
The existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi, and MXNet). The Model Optimizer can use both kinds of frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for the ONNX or PaddlePaddle new frontends and plan to use the Model Optimizer `--extension` option for model conversion, then the extensions should be:
1. Implemented in C++ only.
2. Compiled as a separate shared library (see details on how to do that later in this guide).
You cannot write new frontend extensions using the Python API if you plan to use them with the Model Optimizer.
The remaining part of this guide uses the Frontend Extension API, which is applicable to new frontends.
## Registering Extensions
A custom operation class and a new frontend mapping extension class object should be registered to be usable in the OpenVINO runtime.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on a minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
To load the extensions to the `ov::Core` object, use the `ov::Core::add_extension` method. This method allows loading a library with extensions or loading extensions from code.
Extensions can be loaded from code with `ov::Core::add_extension` method:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_extensions.cpp add_extension
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_extensions.py add_extension
@endsphinxtab
@endsphinxtabset
`Identity` is the custom operation class defined in the [Custom Operation Guide](add_openvino_ops.md). This is enough to enable reading an IR which uses the `Identity` extension operation emitted by Model Optimizer. To be able to load the original model directly into the runtime, you also need to add a mapping extension:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/ov_extensions.cpp
      :language: cpp
      :fragment: add_frontend_extension

.. tab:: Python

   .. doxygensnippet:: docs/snippets/ov_extensions.py
      :language: python
      :fragment: add_frontend_extension

@endsphinxdirective
When the Python API is used, there is no way to implement a custom OpenVINO operation. Moreover, even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
You can still use Python for operation mapping and decomposition if only operations from the standard OpenVINO operation set are used.
### Create library with extensions
You need to create an extension library in the following cases:
- Converting a model with custom operations in Model Optimizer.
- Loading a model with custom operations in a Python application. This applies to both framework models and IR.
- Loading a model with custom operations in tools that support loading extensions from a library, for example `benchmark_app`.
If you want to create an extension library, for example in order to load the extensions into Model Optimizer, perform the following steps:
Create an entry point for the extension library. OpenVINO™ provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows defining an entry point to a library with OpenVINO™ Extensions.
After the build you can use path to your extension library to load your extensions to OpenVINO™ Runtime:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_extensions.cpp add_extension_lib
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_extensions.py add_extension_lib
@endsphinxtab
@endsphinxtabset
## See Also
* [OpenVINO Transformations](./ov_transformations.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)

# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you target. This page describes custom kernel support for one VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTES:**
> * OpenCL\* custom layer support is available in the preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, carry out the tasks described on this page:
1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
2. Write a configuration file to bind the OpenCL kernel to the topology file (`.xml`) of the model IR.
3. Pass the configuration file to the OpenVINO™ Runtime with the model IR.
## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)
> **NOTE**: The OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta* and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written assuming OpenCL version 1.2. They also support the half float extension and are optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
* `SHAVE_MA2X8XLIBS_DIR=<INSTALL_DIR>/tools/cl_compiler/lib/`
* `SHAVE_LDSCRIPT_DIR=<INSTALL_DIR>/tools/cl_compiler/ldscripts/`
* `SHAVE_MYRIAD_LD_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
* `SHAVE_MOVIASM_DIR=<INSTALL_DIR>/tools/cl_compiler/bin/`
2. Run the compilation with the command below. You should use `--strip-binary-header` to make an OpenCL runtime-agnostic binary runnable with the OpenVINO™ Runtime.
```bash
cd <INSTALL_DIR>/tools/cl_compiler/bin
./clc --strip-binary-header custom_layer.cl -o custom_layer.bin
```
## Write a Configuration File
To tie the topology IR to a layer you customize, prepare a configuration file so that the OpenVINO™ Runtime can find the parameters for your kernel and the description of the execution work grid.
For example, consider the following OpenCL kernel signature:
```cpp
__kernel void reorg_nhwc(__global const half *src, __global half *out, int w, int h, int c, int stride);
```
A configuration file for this kernel might be the following:
```xml
<CustomLayer name="ReorgYolo" type="MVCL" version="1">
<Kernel entry="reorg_nhwc">
<Source filename="reorg.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BYXF"/>
<Tensor arg-name="out" type="output" port-index="0" format="BYXF"/>
<Scalar arg-name="w" type="int" port-index="0" source="I.X" />
<Scalar arg-name="h" type="int" port-index="0" source="I.Y" />
<Scalar arg-name="c" type="int" port-index="0" source="I.F" />
<Scalar arg-name="stride" type="int" source="stride" />
</Parameters>
<WorkSizes dim="input,0" global="(Y+7)/8*8,1,1" local="8,1,1"/>
</CustomLayer>
```
Each custom layer is described with the `CustomLayer` node. It has the following nodes and attributes:
- Root node `CustomLayer` contains the following attributes:
- `name` (Required) The name of the OpenVINO™ Runtime layer to bind the kernel with.
- `type` and `version` (Required) Reserved for future use. Set them to `MVCL` and `1` respectively.
- `max-shaves` (Optional) The maximum number of SHAVE cores that should be dedicated to the layer. It is useful for debugging concurrency issues, or for saving resources when a memory-bound kernel does not scale well with the number of cores, so that more resources can be left for the rest of the topology.
- Sub-node `Kernel` must contain the following attributes:
- `entry` The name of your kernel function as you defined it in a source file. In the example above, it is `reorg_nhwc`.
- Node `Source` must contain the following attributes:
- `filename` The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relative to the dimensions of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () over `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows you to customize bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding XML.
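As an illustration of the work-size expression arithmetic, the `global` value `(Y+7)/8*8` from the example above rounds the tensor height up to the next multiple of the local size 8. The helper below is a hypothetical host-side sketch of that evaluation, not part of the runtime API:

```cpp
#include <cassert>

// Hypothetical host-side evaluation of the work-size expression "(Y+7)/8*8":
// C-style integer division truncates, so the expression rounds Y up to the
// nearest multiple of 8, matching the local size "8,1,1" in the binding above.
static int round_up(int y, int multiple) {
    return (y + multiple - 1) / multiple * multiple;
}
```

For example, a height of 26 yields a global size of 32, so the execution work grid slightly exceeds the tensor dimension and the kernel must tolerate the extra work items.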
A parameter description supports `Tensor` nodes of the tensor types `input`, `output`, `input_buffer`, `output_buffer`, or `data`, as well as `Scalar` and `Data` nodes, and has the following format:
- Each `Tensor` node of `input` or `output` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input` or `output` as specified in the IR.
- `port-index` The index of the input/output port as specified in the IR.
- `format` The channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers. `BFXY`, `BYXF`, and `ANY` formats are supported currently.
- Each `Tensor` node of `input_buffer` or `output_buffer` type must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` The amount of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants and might be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
<CustomLayer name="MVN" stage="0" type="MVCL" version="1">
<Kernel entry="reduction_mean">
<Source filename="mvn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="mean" type="output_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="variance" type="output_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
<CustomLayer name="MVN" stage="1" type="MVCL" version="1">
<Kernel entry="mvn_scale">
<Source filename="mvn_scale_changed_orded.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Tensor arg-name="mean_part" type="input_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="power_mean" type="input_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
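In the binding above, `size="Y*F*4"` reserves one 32-bit float (4 bytes) per (height, channel) pair of the tensor referenced by `dim="output,0"`. A hedged host-side sketch of that computation (the expression grammar itself is not modeled here):

```cpp
#include <cassert>

// Sketch of evaluating the buffer size expression "Y*F*4" from the MVN
// binding above: Y and F come from the tensor selected by dim="output,0",
// and 4 is sizeof(float) for the intermediate mean/variance values.
static int buffer_size_bytes(int Y, int F) {
    return Y * F * 4;
}
```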
- Each `Tensor` node that has the type `data` must contain the following attributes:
- `source` A name of the blob as it is in the IR. Typical example is `weights` for convolution.
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with the formats of neighboring layers.
```xml
<CustomLayer name="BinaryConvolution" type="MVCL" version="1">
<Kernel entry="binary_convolution">
<Source filename="binary_layers.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Data arg-name="weights_data" type="data" source="weights" format="ANY"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="X,Y,F" local="1,1,1"/>
</CustomLayer>
```
- Each `Scalar` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` `int` or `float` value. It is used for correct argument extraction from IR parameters.
- `source` Contains the name of the parameter in the IR file or input/output (`I`/`O`, `In`/`On`, where `n` is a port number)
followed by dimension `B`(batch), `Y`(height), `X`(width), or `F`(channels).
- Each `Data` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines buffer allocated in fast local on-chip memory. It is limited to 100KB for all `__local` and
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. Note that a manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` The amount of bytes needed. The current expression syntax supports only expressions over the dimensions of the selected input/output tensor or constants and may be extended in the future.
The example binding below illustrates a kernel with two local buffers passed to the kernel.
```xml
<CustomLayer name="GRN" type="MVCL" version="1">
<Kernel entry="grn_NCHW">
<Source filename="grn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Data arg-name="src" type="local_data" dim="input,0" size="X*F*2" />
<Data arg-name="dst" type="local_data" dim="input,0" size="X*F*2" />
<Scalar arg-name="C" type="int" port-index="0" source="I.F" />
<Scalar arg-name="bias" type="float" source="bias" />
</Parameters>
<WorkSizes dim="input,0" global="X,Y,1" local="X,1,1"/>
</CustomLayer>
```
## Pass Configuration File to OpenVINO™ Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it using the `ov::Core::set_property()` method with the "CONFIG_KEY" key and the configuration file name as a value:
@snippet docs/snippets/vpu/custom_op.cpp part0
## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL
programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
| Device code | Executed on SHAVE cores |
| Private memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Local memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
Note that by the OpenCL specification, the work group execution order is not specified. This means that it is your
responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime splits the
work grid evenly among the available compute resources and executes the work groups in an arbitrary order. This static scheduling approach works best if the load is spread evenly across work groups, which is a typical case for deep learning kernels. The following guidelines are recommended for work group partitioning:
1. Split work evenly across work groups.
2. Adjust the work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of the topology. It is also useful if the kernel scalability has reached its limits, which may happen while optimizing memory-bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel if it improves work group partitioning or data access patterns.
Consider not just the boost of a specific layer, but the full topology performance, because data conversion layers are automatically inserted
as appropriate.
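The even-split guideline above can be illustrated with a small host-side sketch. The exact runtime scheduling policy is not documented here; this only shows what "split evenly" means:

```cpp
#include <cassert>

// Illustrative static partitioning of `total_groups` work groups across
// `cores` SHAVE cores: each core receives either floor or ceil of the
// average, so the load imbalance is at most one work group.
static int groups_for_core(int total_groups, int cores, int core_id) {
    return total_groups / cores + (core_id < total_groups % cores ? 1 : 0);
}
```

A kernel whose work groups carry unequal amounts of work defeats such static scheduling, which is why guideline 1 asks you to split work evenly in the first place.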
The offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bais)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bais);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism
(SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code
patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help the auto-vectorizer by ensuring non-aliasing pointers for kernel parameters: put `restrict` where possible.
   - This can give a performance boost, especially for kernels with unrolling, like the `ocl_grn` example below.
   - Place `restrict` markers in kernels with manually vectorized code as well. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the best-performing one, which combines unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` in your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to
annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay
attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that
still improves performance as an optimum between the kernel size and the execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unrolling factor equal to 4 is the optimum. For Intel® Neural Compute Stick 2, unrolling is conjugated with automatic software pipelining for the load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
int x = get_global_id(0);
int W = get_global_size(0);
int y = get_global_id(1);
int H = get_global_size(1);
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);
variance = 1.f / native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, you can compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
int y = get_global_id(1);
int H = get_global_size(1);
for (int x = 0; x < W/8; x++)
{
float8 variance = (float8)(bias+1e-9f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
half8 sh = src_line[x];
variance += convert_float8(sh*sh);
}
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
__global half8* restrict dst_line = ((__global half8 * restrict)(dst_data + c*H*W + y*W));
dst_line[x] = convert_half8(convert_float8(src_line[x])*variance);
}
}
for (int x = W/8*8; x < W; x++)
{
float variance = bias+1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x]*src_data[c*H*W + y*W + x]);
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (float)src_data[c*H*W + y*W + x]*variance;
}
}
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, you can also use the `reqd_work_group_size` kernel attribute to ask the compiler
to unroll the code up to the local size of the work group. Note that if the kernel is actually executed with a
different work group configuration, the result is undefined.
4. Prefer to use `half` compute if it keeps reasonable accuracy. 16-bit float is a native type for Intel® Neural Compute Stick 2, and most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` functions for the rest of the types.
5. Prefer to use the `convert_half` function over `vstore_half` if conversion to 32-bit float is required. `convert_half` is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bais);` is eight times slower than the code with `vstore_half`.
6. Mind early exits. An early exit can be extremely costly for the current version of the `clc` compiler due to conflicts with the
auto-vectorizer. The general advice is to set the local size along the `x` dimension equal to the input and/or output width.
If it is impossible to define a work grid that exactly matches the inputs and/or outputs, so that checks such as
`if (get_global_id(0) >= width) return` cannot be eliminated, use a line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int stride)
{
int w = get_global_id(0);
int W = get_global_size(0);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This `reorg` kernel is auto-vectorizable, but the input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, and 26 is not a multiple of the vector width, which is `8` for the `half` data type. As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare the performance of the auto-vectorized and scalar versions of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about a 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width, for example 32, and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimensions, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
{
int w = get_global_id(0);
w = min(w, W-1);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (the scalar one) due to branching overhead. If you replace the min/max expression `w = min(w, W-1);` with `if (w >= W) return;`, the runtime increases up to 2x compared to the code without branching (the initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to a line-based one. See the kernel variant below:
```cpp
// Line-wise version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c = get_global_id(1);
int C = get_global_size(1);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
for (int w = 0; w < W; ++w)
{
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
}
```
This decreases the execution time by up to 40% compared to the best-performing vectorized kernel without early exits (the initial version).
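The index arithmetic shared by the `reorg` variants above can be sanity-checked on the host. The sketch below replays the same mapping and verifies that every output element reads a distinct source element, i.e. that reorg is a permutation (assuming `C` is divisible by `stride*stride`):

```cpp
#include <cassert>
#include <vector>

// Host-side replay of the reorg index arithmetic used in the kernels above,
// checking that out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2] touches
// every source element exactly once (i.e., the mapping is a permutation).
static bool reorg_is_permutation(int W, int H, int C, int stride) {
    const int W2 = W * stride, H2 = H * stride;
    const int C2 = C / (stride * stride);
    std::vector<bool> seen(W2 * H2 * C2, false);
    for (int c = 0; c < C; ++c)
        for (int h = 0; h < H; ++h)
            for (int w = 0; w < W; ++w) {
                const int offset = c / C2;
                const int c2 = c - C2 * offset;
                const int h2 = h * stride + offset / stride;
                const int w2 = w * stride + offset - stride * (offset / stride);
                const int src_idx = W2 * H2 * c2 + W2 * h2 + w2;
                if (seen[src_idx]) return false;  // two outputs read same input
                seen[src_idx] = true;
            }
    for (bool s : seen)
        if (!s) return false;                     // some input never read
    return true;
}
```

Checking the mapping like this before touching device code makes it easy to experiment with the loop restructuring shown in the optimized variants without changing the kernel's semantics.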
7. Reuse computations among work items by using line-based kernels or by sharing values through `__local` memory.
8. Improve data access locality. Most custom kernels are memory bound, while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel, unrolled by `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c2 = get_global_id(1);
int C2 = get_global_size(1);
int C = C2*stride*stride;
int H2 = H*stride;
int W2 = W*stride;
for (int stride_y = 0; stride_y < stride; stride_y++)
for (int stride_x = 0; stride_x < stride; stride_x++)
for (int w2 = 0, w = 0; w < W; w2 += stride, w++)
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops by up to 45% compared to the line-wise version.
9. Copy data from `__global` to `__local` or `__private` memory if the data is accessed more than once. Access to
`__global` memory is orders of magnitude slower than access to `__local`/`__private` memory due to the statically scheduled pipeline, which
stalls completely on a memory access without any prefetch. The same recommendation applies to scalar load/store
from/to a `__global` pointer, since work-group copying can be done in a vector fashion.
10. Use the manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting from OpenVINO™ 2020.1, VPU OpenCL features a manual-DMA kernel extension that copies the sub-tensor used by a work group into local memory and performs the computation without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size is (width of the input tensor, 1, 1), to define a work group large enough to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
{
float val = (float) src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
dst_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)]
= src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)] * hvariance;
}
}
```
This kernel can be rewritten to introduce the special data binding `__dma_preload` and `__dma_postwrite` intrinsics. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. You can define either of these functions, which are intended to copy data between `__global` and `__local` memory. The syntax requires an exact function signature match. The example below illustrates how to prepare your kernel for manual DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the required piece of the src tensor into local_src
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy back the computed piece of local_dst into dst
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
// same as the example above
}
```
The GRN kernel operates on channel-major tensors to compute the average over the full channel range and then normalizes input elements to produce the output.
As part of the manual DMA extension, a group of work group copy functions is introduced in addition to `async_work_group_copy`, which is also mapped to a DMA call.
Here is the list of supported functions:
```cpp
// 2D sub-tensor copy
event_t WorkGroupDmaCreateStrideTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreateStrideTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
// 3D sub-tensor copy
event_t WorkGroupDmaCreate3DTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreate3DTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
A modified version of the GRN kernel could look like the following:
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
src + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // src
local_src, // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_global_size(0) * sizeof(half), // src stride
get_local_size(0) * sizeof(half), // dst stride
C, // num planes
get_global_size(0) * get_global_size(1) * sizeof(half), // src plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
local_dst, // src
dst + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_local_size(0) * sizeof(half), // src stride
get_global_size(0) * sizeof(half), // dst stride
C, // num planes
get_local_size(0) * get_local_size(1) * sizeof(half), // src plane stride
get_global_size(0) * get_global_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 8
for (int c = 0; c < C; c++)
{
float val = (float) src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 8
for (int c = 0; c < C; c++)
{
dst[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)]
= src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)] * hvariance;
}
}
```
Note the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for this kernel on the enet-curbs setup, because the original version was completely memory-bound.
An alternative to using DMA is the work-item copy extension. These functions are executed inside a kernel and require work groups equal to a single work item.
Here is the list of supported work-item functions:
```cpp
item_dma_event_t WorkItemDmaCreateTransaction(
const global T *src,
private T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateTransaction(
const private T *src,
global T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.

View File

@@ -1,4 +1,4 @@
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box.
@@ -20,14 +20,10 @@ Follow the steps below to add a custom operation:
5. Override the `visit_attributes` method, which enables serialization and deserialization of operation attributes. An `AttributeVisitor` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware `on_attribute` helper. Helpers are already implemented for standard C++ types like `int64_t`, `float`, `bool`, `vector`, and for existing OpenVINO defined types.
6. Override `evaluate`, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains `evaluate` method you also need to override the `has_evaluate` method, this method allow to get information about availability of `evaluate` method for the operation.
7. Add the `OPENVINO_FRAMEWORK_MAP` macro if you want to map custom operation to framework operation with the same name. It is an optional macro which can be used for one to one mapping. In order to use this macro please include frontend specific headers:
@snippet template_extension/new/identity.hpp op:frontend_include
6. Override `evaluate`, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains `evaluate` method you also need to override the `has_evaluate` method, this method allows to get information about availability of `evaluate` method for the operation.
Based on that, declaration of an operation class can look as follows:
@snippet template_extension/new/identity.hpp op:header
### Operation Constructors
@@ -55,8 +51,9 @@ OpenVINO™ operation contains two constructors:
@snippet template_extension/new/identity.cpp op:visit_attributes
### `evaluate()` and `has_evaluate()`
### evaluate() and has_evaluate()
The `ov::Node::evaluate` method enables you to apply constant folding to an operation.
@snippet template_extension/new/identity.cpp op:evaluate

View File

@@ -0,0 +1,105 @@
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from the framework model representation to the OpenVINO representation. Refer to [Introduction to OpenVINO Extension](Intro.md) to understand the entire flow.
This API is applicable to new frontends only, which currently exist for ONNX and PaddlePaddle. If a different model format is used, follow the legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on a minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete, fully compilable code to see how it works.
## Single Operation Mapping with OpExtension
This section covers the case when a single operation in the framework representation is mapped to a single operation in the OpenVINO representation. This is called *one-to-one mapping*. The `OpExtension` class works well if all of the following conditions are satisfied:
1. The number of inputs to the operation is the same in the framework and OpenVINO representations.
2. The number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order, e.g. the input with index 0 in the framework representation maps to the input with index 0 in the OpenVINO representation, and so on.
4. The same holds for outputs.
5. Each attribute of the OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is, so the types must be compatible.
> **NOTE**: The `OpExtension` class is currently available for the ONNX frontend only. The PaddlePaddle frontend has named (not indexed) inputs and outputs for operations, so `OpExtension` mapping is not applicable in this case.
The next example maps the ONNX operation with type [“Identity”](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) to the OpenVINO template extension `Identity` class.
@snippet ov_extensions.cpp frontend_extension_Identity_header
@snippet ov_extensions.cpp frontend_extension_Identity
The mapping doesn't involve any attributes, as the Identity operation doesn't have them.
Extension objects, like the just-constructed `extension`, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
@snippet ov_extensions.cpp frontend_extension_read_model
Alternatively, extensions can be built into a separately compiled shared library, which can then be used in Model Optimizer or `benchmark_app`. Read how to build and load such a library in the chapter “Create library with extensions” in [Introduction to OpenVINO Extension](Intro.md).
If the operation has multiple inputs and/or outputs, they are mapped in order. The type of elements in input/output tensors should match the expected types in the surrounding operations. For example, if a custom operation produces the `f32` data type, then the operation that consumes this output should also support `f32`. Otherwise, model conversion fails with an error; no automatic type conversion happens.
### Converting to Standard OpenVINO Operation
The `OpExtension` class can be used when mapping to one of the operations from the standard OpenVINO operation set is all you need, and there is no class like `TemplateExtension::Identity` implemented.
Here is an example for a custom framework operation “MyRelu”. Suppose it is mathematically equivalent to the standard `Relu` from the OpenVINO operation set but, for some reason, has the type name “MyRelu”. In this case you can directly specify the “MyRelu” -> `Relu` mapping:
@snippet ov_extensions.cpp frontend_extension_MyRelu
In the resulting converted OpenVINO model, the “MyRelu” operation will be replaced by the standard `Relu` operation from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified with just a type string (“Relu”) instead of the `ov::opset8::Relu` class name as a template parameter for `OpExtension`. This method is available for operations from the standard operation set only. For a user's custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as was demonstrated with `TemplateExtension::Identity`.
### Attributes Mapping
As described above, `OpExtension` is useful when attributes can be mapped one by one or initialized by a constant. If the sets of attributes in the framework and OpenVINO representations completely match by name and type, nothing needs to be specified in the `OpExtension` constructor parameters. The attributes are discovered and mapped automatically based on the `visit_attributes` method that should be defined for any OpenVINO operation.
Imagine you have a `CustomOperation` class implementation with two attributes, named `attr1` and `attr2`:
@snippet ov_extensions.cpp frontend_extension_CustomOperation
And the original model in framework representation also has an operation named “CustomOperation” with the same `attr1` and `attr2` attributes. Then, with the following code:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_as_is
both `attr1` and `attr2` are copied from the framework representation to the OpenVINO representation automatically. If for some reason the attribute names differ but the values can still be copied “as-is”, you can pass an attribute-name mapping in the `OpExtension` constructor:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename
Where `fw_attr1` and `fw_attr2` are names for corresponding attributes in framework operation representation.
If copying an attribute is not what you need, `OpExtension` can also set an attribute to a predefined constant value. For the same `CustomOperation`, imagine you want to set `attr2` to the value 5 instead of copying it from `fw_attr2`. To achieve that, do the following:
@snippet ov_extensions.cpp frontend_extension_CustomOperation_rename_set
So the conclusion is that each attribute of the target OpenVINO operation should be initialized in one of three ways:
1. Set automatically by name matching
2. Mapped by attribute name
3. Set to a constant value
This is achieved by specifying maps as arguments for `OpExtension` constructor.
## Mapping to Multiple Operations with ConversionExtension
Previous sections covered the case when a single operation is mapped to a single operation, with optional adjustment of names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case, your framework representation and OpenVINO representation for the operation are under your control, and inputs/outputs/attributes can be aligned to make `OpExtension` usable.
If one-to-one mapping is not possible, consider *decomposition to multiple operations*. It is achieved with the more verbose and less automated `ConversionExtension` class, which enables writing arbitrary code that replaces a single framework operation with multiple connected OpenVINO operations, constructing a dependency graph of any complexity.
`ConversionExtension` maps a single operation to a function which builds a graph using OpenVINO operation classes. Follow chapter [Build a Model in OpenVINO Runtime](@ref ov_ug_build_model) to learn how to use OpenVINO operation classes to build a fragment of model for replacement.
The next example illustrates using `ConversionExtension` for conversion of “ThresholdedRelu” from ONNX according to the formula: `ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))`.
> **NOTE**: `ThresholdedRelu` is one of the standard ONNX operators which is supported by ONNX frontend natively out-of-the-box. Here we are re-implementing it to illustrate how you can add a similar support for your custom operation instead of `ThresholdedRelu`.
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU_header
@snippet ov_extensions.cpp frontend_extension_ThresholdedReLU
To access the original framework operation's attribute values and connect to its inputs, the `node` object of type `NodeContext` is used. It has two main methods:
* `NodeContext::get_input` to get input with a given index,
* `NodeContext::get_attribute` to get attribute value with a given name.
The conversion function should return a vector of node outputs that are mapped to the corresponding outputs of the original framework operation, in the same order.

View File

@@ -12,6 +12,7 @@
@endsphinxdirective
The OpenVINO transformation mechanism allows you to develop transformation passes that modify `ov::Model`. You can use this mechanism to apply additional optimizations to the original model or to transform unsupported subgraphs and operations into new operations supported by the plugin.
This guide contains all necessary information that you need to start implementing OpenVINO™ transformations.
## Working with Model

View File

@@ -37,7 +37,7 @@ The implementation `CompileNetwork` is fully device-specific.
The function accepts a const shared pointer to `ngraph::Function` object and performs the following steps:
1. Applies ngraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_IE_DG_lpt) guide.
1. Applies ngraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend specific graph representation (for example, to MKLDNN graph for Intel CPU).
3. Allocates and fills memory for graph weights, backend specific memory handles and so on.

View File

@@ -9,11 +9,12 @@
Implement Plugin Functionality <openvino_docs_ie_plugin_dg_plugin>
Implement Executable Network Functionality <openvino_docs_ie_plugin_dg_executable_network>
openvino_docs_ie_plugin_dg_quantized_networks
Implement Synchronous Inference Request <openvino_docs_ie_plugin_dg_infer_request>
Implement Asynchronous Inference Request <openvino_docs_ie_plugin_dg_async_infer_request>
openvino_docs_ie_plugin_dg_plugin_build
openvino_docs_ie_plugin_dg_plugin_testing
openvino_docs_ie_plugin_detailed_guides
openvino_docs_ie_plugin_api_references
@endsphinxdirective
@@ -55,11 +56,11 @@ Detailed guides
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake\*
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_IE_DG_lpt) guide
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
API References
-----------------------
* [Inference Engine Plugin API](groupie_dev_api.html)
* [Inference Engine Transformation API](groupie_transformation_api.html)
* [Inference Engine Plugin API](@ref ie_dev_api)
* [Inference Engine Transformation API](@ref ie_transformation_api)

View File

@@ -9,7 +9,7 @@ For more details about low-precision model representation please refer to this [
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations.
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.

View File

@@ -0,0 +1,18 @@
# Advanced Topics {#openvino_docs_ie_plugin_detailed_guides}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_ie_plugin_dg_quantized_networks
openvino_docs_OV_UG_lpt
@endsphinxdirective
The guides below provide extra information about specific OpenVINO features that you need to understand during OpenVINO plugin development:
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide

View File

@@ -0,0 +1,17 @@
# Plugin API Reference {#openvino_docs_ie_plugin_api_references}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
../groupie_dev_api
../groupie_transformation_api
@endsphinxdirective
The guides below provide extra API references needed for OpenVINO plugin development:
* [OpenVINO Plugin API](@ref ie_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)

View File

@@ -5,74 +5,74 @@
<tab type="usergroup" url="index.html" title="Developer Guide for Inference Engine Plugin Library">
<tab type="user" url="@ref plugin" visibile="yes" title="Implement Plugin Functionality"/>
<tab type="user" url="@ref executable_network" visibile="yes" title="Implement Executable Network Functionality">
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_IE_DG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_IE_DG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_IE_DG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_IE_DG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_IE_DG_lpt_QuantizationAlignment"/>
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_OV_UG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_OV_UG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_OV_UG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_OV_UG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_OV_UG_lpt_QuantizationAlignment"/>
</tab>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_IE_DG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization"/>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_OV_UG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion"/>
<tab type="user" title="PullReshapeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization"/>
<tab type="user" title="PullTransposeThroughDequantization" url="@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization"/>
</tab>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_IE_DG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_IE_DG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_IE_DG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_IE_DG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_IE_DG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved"/>
<tab type="user" title="Step 2. Markup transformations" url="@ref openvino_docs_OV_UG_lpt_step2_markup">
<tab type="user" title="AlignQuantizationIntervals" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals"/>
<tab type="user" title="AlignQuantizationParameters" url="@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters"/>
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_OV_UG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_OV_UG_lpt_MarkupPrecisions"/>
<tab type="user" title="PropagatePrecisions" url="@ref openvino_docs_OV_UG_lpt_PropagatePrecisions"/>
<tab type="user" title="PropagateThroughPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved"/>
<tab type="user" title="PropagateToInput" url="@ref openvino_docs_OV_UG_lpt_PropagateToInput"/>
<tab type="user" title="UpdateSharedPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved"/>
</tab>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_IE_DG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_IE_DG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_IE_DG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_IE_DG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_IE_DG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_IE_DG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_IE_DG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_IE_DG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_IE_DG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_IE_DG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_IE_DG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_IE_DG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_IE_DG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation"/>
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_OV_UG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_OV_UG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_OV_UG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_OV_UG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation"/>
<tab type="user" title="ConvolutionBackpropDataTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation"/>
<tab type="user" title="DepthToSpaceTransformation" url="@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation"/>
<tab type="user" title="FakeQuantizeDecompositionTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation"/>
<tab type="user" title="FakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation"/>
<tab type="user" title="InterpolateTransformation" url="@ref openvino_docs_OV_UG_lpt_InterpolateTransformation"/>
<tab type="user" title="GroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation"/>
<tab type="user" title="MatMulTransformation" url="@ref openvino_docs_OV_UG_lpt_MatMulTransformation"/>
<tab type="user" title="MaxPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation"/>
<tab type="user" title="MultiplyTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyTransformation"/>
<tab type="user" title="MVNTransformation" url="@ref openvino_docs_OV_UG_lpt_MVNTransformation"/>
<tab type="user" title="NormalizeL2Transformation" url="@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation"/>
<tab type="user" title="PadTransformation" url="@ref openvino_docs_OV_UG_lpt_PadTransformation"/>
<tab type="user" title="PReluTransformation" url="@ref openvino_docs_OV_UG_lpt_PReluTransformation"/>
<tab type="user" title="ReduceMaxTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation"/>
<tab type="user" title="ReduceMeanTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation"/>
<tab type="user" title="ReduceMinTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation"/>
<tab type="user" title="ReduceSumTransformation" url="@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation"/>
<tab type="user" title="ReluTransformation" url="@ref openvino_docs_OV_UG_lpt_ReluTransformation"/>
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_OV_UG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_OV_UG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_OV_UG_lpt_TransposeTransformation"/>
<tab type="user" title="UnsqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation"/>
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation"/>
</tab>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_IE_DG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation"/>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_OV_UG_lpt_step4_cleanup">
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation"/>
<tab type="user" title="FuseMultiplyToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation"/>
<tab type="user" title="FuseSubtractToFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation"/>
<tab type="user" title="MultiplyToGroupConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation"/>
</tab>
</tab>
</tab>


@@ -1,17 +0,0 @@
# Plugin Transformation Pipeline {#openvino_docs_IE_DG_plugin_transformation_pipeline}
@sphinxdirective
.. toctree::
:maxdepth: 1
:caption: Executable Network
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
@endsphinxdirective
Typical plugin transformation pipeline includes steps:
1. Common transformations
2. [Low precision transformations](@ref openvino_docs_IE_DG_lpt)
3. Plugin specific transformations


@@ -1,4 +1,4 @@
# AvgPoolPrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved}
# AvgPoolPrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
The `ngraph::AvgPoolPrecisionPreservedAttribute` class represents the `AvgPoolPrecisionPreserved` attribute.


@@ -1,4 +1,4 @@
# IntervalsAlignment attribute {#openvino_docs_IE_DG_lpt_IntervalsAlignment}
# IntervalsAlignment attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
The `ngraph::IntervalsAlignmentAttribute` class represents the `IntervalsAlignment` attribute.


@@ -1,4 +1,4 @@
# PerTensorQuantization attribute {#openvino_docs_IE_DG_lpt_PerTensorQuantization}
# PerTensorQuantization attribute {#openvino_docs_OV_UG_lpt_PerTensorQuantization}
The `ngraph::PerTensorQuantizationAttribute` class represents the `PerTensorQuantization` attribute.


@@ -1,4 +1,4 @@
# PrecisionPreserved attribute {#openvino_docs_IE_DG_lpt_PrecisionPreserved}
# PrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
The `ngraph::PrecisionPreservedAttribute` class represents the `PrecisionPreserved` attribute.


@@ -1,4 +1,4 @@
# Precisions attribute {#openvino_docs_IE_DG_lpt_Precisions}
# Precisions attribute {#openvino_docs_OV_UG_lpt_Precisions}
The `ngraph::PrecisionsAttribute` class represents the `Precisions` attribute.


@@ -1,4 +1,4 @@
# QuantizationAlignment attribute {#openvino_docs_IE_DG_lpt_QuantizationAlignment}
# QuantizationAlignment attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
The `ngraph::QuantizationAlignmentAttribute` class represents the `QuantizationAlignment` attribute.


@@ -1,4 +1,4 @@
# OpenVINO™ Low Precision Transformations {#openvino_docs_IE_DG_lpt}
# OpenVINO™ Low Precision Transformations {#openvino_docs_OV_UG_lpt}
@sphinxdirective
@@ -7,13 +7,13 @@
:caption: Low Precision Transformations
:hidden:
Low Precision Transformations <openvino_docs_IE_DG_lpt>
Low Precision Transformations <openvino_docs_OV_UG_lpt>
Attributes <openvino_docs_IE_DG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_IE_DG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_IE_DG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_IE_DG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_IE_DG_lpt_step4_cleanup>
Attributes <openvino_docs_OV_UG_lpt_attributes>
Step 1. Prerequisites transformations <openvino_docs_OV_UG_lpt_step1_prerequisites>
Step 2. Markup transformations <openvino_docs_OV_UG_lpt_step2_markup>
Step 3. Main transformations <openvino_docs_OV_UG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>
@endsphinxdirective
@@ -72,11 +72,7 @@ For example, if you would like to infer a model with `Convolution` operation in
> Several quantization approaches are supported for activations and for weights. All of them are described in the [Quantization approaches](#quantization-approaches) section below. The demonstrated model uses the [FakeQuantize operation quantization](#fakequantize-operation) approach.
### Low precision tools
There are two tools to quantize a model:
1. [Post-Training Optimization Toolkit](@ref pot_docs_LowPrecisionOptimizationGuide) (POT)
2. [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf) (NNCF)
Additionally, low precision transformations can handle ONNX quantized models.
For more details on how to get a quantized model, refer to [Model Optimization](@ref openvino_docs_model_optimization_guide) document.
## Quantization approaches
LPT transformations support two quantization approaches:
@@ -115,63 +111,63 @@ Inside each step LPT transformations handle input model operation by operation,
As a result, all operations are usually inferred by the plugin in low precision. If the plugin does not support inference of an operation in low precision, the corresponding LPT transformation can be disabled; the input tensor precisions for that operation are then left unchanged, and the operation is inferred in the original precision.
Low precision transformations pipeline includes four steps:
* [Step #1: Prerequisites](@ref openvino_docs_IE_DG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup)
* [Step #1: Prerequisites](@ref openvino_docs_OV_UG_lpt_step1_prerequisites)
* [Step #2: Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup)
* [Step #3: Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main)
* [Step #4: Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup)
### Step 1. Prerequisites
This step fuses and propagates some operations in the model to prepare for the next step. It is required for OpenVINO plugins. Transformations:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)
The model is changed at this step. For more details, see [Prerequisites transformations](@ref openvino_docs_IE_DG_lpt_step1_prerequisites) in the developer guide.
The model is changed at this step. For more details, see [Prerequisites transformations](@ref openvino_docs_OV_UG_lpt_step1_prerequisites) in the developer guide.
### Step 2. Markup
This step creates runtime attributes for operations. These attributes will be used in the next step. Transformations:
* [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
* [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
* [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
* [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
* [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
* [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
The model is changed at this step: only new attributes are added to some operations. For more details, see [Markup transformations](@ref openvino_docs_IE_DG_lpt_step2_markup) in the developer guide.
The model is changed at this step: only new attributes are added to some operations. For more details, see [Markup transformations](@ref openvino_docs_OV_UG_lpt_step2_markup) in the developer guide.
### Step 3. Main transformations, FakeQuantize decomposition and dequantization operations handling
This step contains the most transformations. They can be separated into two groups: decomposition transformations and dequantization operation handling. For more details, see [Main transformations](@ref openvino_docs_IE_DG_lpt_step3_main) in the developer guide. Transformations:
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
This step contains the most transformations. They can be separated into two groups: decomposition transformations and dequantization operation handling. For more details, see [Main transformations](@ref openvino_docs_OV_UG_lpt_step3_main) in the developer guide. Transformations:
* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
#### Decomposition transformations
Decomposition transformations decompose the `FakeQuantize` operation into two parts: a quantize operation (`FakeQuantize` with low precision output) and dequantization operations (the opposite of quantize, with low precision input and the original precision output). For the dequantization part, LPT uses three operations: `Convert`, `Subtract`, and `Multiply`. The element-wise operations `Subtract` and `Multiply` have constants on their second inputs. If the dequantization operations are not handled by the end of the LPT pipeline, they are fused back into the `FakeQuantize`.
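The decomposition can be illustrated numerically. Below is a minimal plain-Python sketch, not the OpenVINO API: the affine scale/zero-point formulas and the unsigned 8-bit range are standard assumptions, the output range is assumed equal to the input range, and the function name is illustrative:

```python
def decompose_fake_quantize(x, in_low, in_high, levels=256):
    """Split FakeQuantize into quantize (low precision u8 output) and
    dequantize (Convert -> Subtract -> Multiply back to the original range)."""
    scale = (in_high - in_low) / (levels - 1)
    zero_point = round(-in_low / scale)

    # Quantize: FakeQuantize with a low precision (u8) output.
    q = max(0, min(levels - 1, round(x / scale) + zero_point))

    # Dequantize: Convert (u8 -> f32), then Subtract and Multiply,
    # each with a constant on its second input.
    converted = float(q)              # Convert
    shifted = converted - zero_point  # Subtract
    dequantized = shifted * scale     # Multiply
    return q, dequantized

q, x_hat = decompose_fake_quantize(0.30, in_low=0.0, in_high=2.55)
# q is an integer in [0, 255]; x_hat approximates the original 0.30
```

The low precision value `q` is what flows into the next operation when the dequantization operations are propagated further down the graph.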
@@ -197,14 +193,14 @@ Original `Convolution` operation in FP32 with dequantization operations before:
### Step 4: Cleanup of the resulting model
LPT cleanup transformations are the final stage of the LPT pipeline. In this step, the transformations clean up the resulting model to avoid leaving dequantization operations unhandled: where possible, dequantization operations are fused into other model operations (at least the `Convert` operations are fused when a full fusion is not possible). Transformations:
* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)
For more details, see [Cleanup transformations](@ref openvino_docs_IE_DG_lpt_step4_cleanup) in the developer guide.
For more details, see [Cleanup transformations](@ref openvino_docs_OV_UG_lpt_step4_cleanup) in the developer guide.
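The idea behind a fusion such as `FuseMultiplyToFakeQuantizeTransformation` can be sketched as follows. This is a plain-Python illustration, assuming a positive per-tensor constant multiplier; the dict-based operation representation is hypothetical, not the OpenVINO API. Because `FakeQuantize` maps its input into the `[out_low, out_high]` range, a trailing `Multiply` by a constant is equivalent to scaling that output range:

```python
def fuse_multiply_to_fake_quantize(fq, mul_const):
    """Fold a trailing per-tensor Multiply into the FakeQuantize output range:
    FakeQuantize(x) * c == FakeQuantize with [out_low * c, out_high * c]."""
    fused = dict(fq)  # leave the original operation attributes untouched
    fused["out_low"] = fq["out_low"] * mul_const
    fused["out_high"] = fq["out_high"] * mul_const
    return fused

# Hypothetical FakeQuantize attributes followed by a Multiply-by-2.0:
fq = {"in_low": 0.0, "in_high": 2.55, "out_low": 0.0, "out_high": 2.55, "levels": 256}
fused = fuse_multiply_to_fake_quantize(fq, 2.0)
# fused quantizes to the same integer grid but outputs values scaled by 2.0
```

After the fusion, the standalone `Multiply` can be removed from the graph, which is exactly the kind of cleanup this step performs.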
`FakeQuantize` operation with unhandled dequantization operations:
![FakeQuantize operation with dequantization operations before LPT](quantization/img/fq.transformed.png)
@@ -236,11 +232,11 @@ This step is optional. It modifies the nGraph function to a device-specific oper
Let's explore the quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model. Use the [Model Downloader](@ref omz_tools_downloader) tool to download the `FP16-INT8` model from the [OpenVINO™ Toolkit - Open Model Zoo repository](https://github.com/openvinotoolkit/open_model_zoo):
```sh
./downloader.py --name resnet-50-tf --precisions FP16-INT8
omz_downloader --name resnet-50-tf --precisions FP16-INT8
```
After that, quantize the model with the [Model Quantizer](@ref omz_tools_downloader) tool.
```sh
./quantizer.py --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
```
### Inference
@@ -259,7 +255,7 @@ Result model depends on different factors:
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |


@@ -1,4 +1,4 @@
# Attributes {#openvino_docs_IE_DG_lpt_attributes}
# Attributes {#openvino_docs_OV_UG_lpt_attributes}
@sphinxdirective
@@ -7,12 +7,12 @@
:caption: Attributes
:hidden:
AvgPoolPrecisionPreserved <openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_IE_DG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_IE_DG_lpt_PerTensorQuantization>
PrecisionPreserved <openvino_docs_IE_DG_lpt_PrecisionPreserved>
Precisions <openvino_docs_IE_DG_lpt_Precisions>
QuantizationAlignment <openvino_docs_IE_DG_lpt_QuantizationAlignment>
AvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_OV_UG_lpt_PerTensorQuantization>
PrecisionPreserved <openvino_docs_OV_UG_lpt_PrecisionPreserved>
Precisions <openvino_docs_OV_UG_lpt_Precisions>
QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>
@endsphinxdirective
@@ -20,12 +20,12 @@
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_IE_DG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_IE_DG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_IE_DG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_IE_DG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
| [AvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_OV_UG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_OV_UG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_OV_UG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_OV_UG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
> `Target` attribute group defines attribute usage during model transformation for the best performance:
> - `Precision` - the attribute defines the most optimal output port precision.


@@ -1,6 +1,6 @@
# Step 1. Prerequisites Transformations {#openvino_docs_IE_DG_lpt_step1_prerequisites}
# Step 1. Prerequisites Transformations {#openvino_docs_OV_UG_lpt_step1_prerequisites}
Prerequisites transformations are optional. They prepare a model before the other low precision transformations run. They do not operate on dequantization operations and do not update precisions. Prerequisites transformations include:
* [PullReshapeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_IE_DG_lpt_LinOpSequenceFusion)
* [PullReshapeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization)
* [PullTransposeThroughDequantization](@ref openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization)
* [LinOpSequenceFusion](@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion)


@@ -1,14 +1,14 @@
# Step 2. Markup Transformations {#openvino_docs_IE_DG_lpt_step2_markup}
# Step 2. Markup Transformations {#openvino_docs_OV_UG_lpt_step2_markup}
This step defines the optimal `FakeQuantize` decomposition precisions for the best inference performance by marking up operations with runtime attribute instances. Attributes are created for operations and for their input and output ports. Transformations do not change the operation output port precisions. The low precision markup logic of a model is decomposed into the following common markup transformations. The order of transformations is important:
1. [MarkupCanBeQuantized](@ref openvino_docs_IE_DG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_IE_DG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_IE_DG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_IE_DG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_IE_DG_lpt_AlignQuantizationParameters)
1. [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
2. [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
3. [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
4. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
5. [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
6. [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
7. [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
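The propagation idea behind a transformation such as `PropagatePrecisions` can be sketched in plain Python. This is a conceptual illustration only, assuming a linear chain of operations; the data structures and names are hypothetical, not the OpenVINO API:

```python
def propagate_precisions(chain, precisions):
    """Walk a linear chain of ops and attach a Precisions attribute,
    passing through precision-preserved ops and stopping at the first
    op that is not precision preserved (it consumes the precision)."""
    marked = {}
    for name, precision_preserved in chain:
        marked[name] = {
            "Precisions": list(precisions),
            "PrecisionPreserved": precision_preserved,
        }
        if not precision_preserved:
            break  # ops after this one are unaffected by this propagation
    return marked

# Hypothetical chain: MaxPool preserves precision, Convolution does not.
chain = [("MaxPool", True), ("Convolution", False), ("Result", True)]
marked = propagate_precisions(chain, ["u8", "i8"])
# MaxPool and Convolution receive attributes; Result is untouched.
```

In the real pipeline the graph is not a simple chain and attributes can be shared between ports, but the pass-through-versus-stop behavior illustrated here is the core of the markup step.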
The table of transformations and used attributes:
@@ -25,11 +25,11 @@ The table of transformations and used attributes:
> **Note:** attribute instances of the same type can be created by different transformations. This approach is a result of the single-responsibility principle applied to transformations. For example, `Precision` attribute instances are created in both the `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but for different reasons.
Common markup transformations can be decomposed into simpler utility markup transformations. The order of Markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_IE_DG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_IE_DG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved)
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)
* [CreatePrecisionsDependentAttribute](@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute)
* [PropagateThroughPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved)
* [PropagateToInput](@ref openvino_docs_OV_UG_lpt_PropagateToInput)
* [UpdateSharedPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved)
Let's explore all transformations and their relations in detail, using the same model:


@@ -1,36 +1,36 @@
# Step 3. Main Transformations {#openvino_docs_IE_DG_lpt_step3_main}
# Step 3. Main Transformations {#openvino_docs_OV_UG_lpt_step3_main}
Main transformations make up the majority of low precision transformations. These transformations operate on dequantization operations. Main transformations include:
* [AddTransformation](@ref openvino_docs_IE_DG_lpt_AddTransformation)
* [AvgPoolTransformation](@ref openvino_docs_IE_DG_lpt_AvgPoolTransformation)
* [ClampTransformation](@ref openvino_docs_IE_DG_lpt_ClampTransformation)
* [ConcatTransformation](@ref openvino_docs_IE_DG_lpt_ConcatTransformation)
-* [ConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionTransformation)
-* [ConvolutionBackpropDataTransformation](@ref openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation)
-* [DepthToSpaceTransformation](@ref openvino_docs_IE_DG_lpt_DepthToSpaceTransformation)
-* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation)
-* [FakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FakeQuantizeTransformation)
-* [InterpolateTransformation](@ref openvino_docs_IE_DG_lpt_InterpolateTransformation)
-* [GroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_GroupConvolutionTransformation)
-* [MatMulTransformation](@ref openvino_docs_IE_DG_lpt_MatMulTransformation)
-* [MaxPoolTransformation](@ref openvino_docs_IE_DG_lpt_MaxPoolTransformation)
-* [MultiplyTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyTransformation)
-* [MVNTransformation](@ref openvino_docs_IE_DG_lpt_MVNTransformation)
-* [NormalizeL2Transformation](@ref openvino_docs_IE_DG_lpt_NormalizeL2Transformation)
-* [PReluTransformation](@ref openvino_docs_IE_DG_lpt_PReluTransformation)
-* [ReduceMaxTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMaxTransformation)
-* [ReduceMeanTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMeanTransformation)
-* [ReduceMinTransformation](@ref openvino_docs_IE_DG_lpt_ReduceMinTransformation)
-* [ReduceSumTransformation](@ref openvino_docs_IE_DG_lpt_ReduceSumTransformation)
-* [ReluTransformation](@ref openvino_docs_IE_DG_lpt_ReluTransformation)
-* [ReshapeTransformation](@ref openvino_docs_IE_DG_lpt_ReshapeTransformation)
-* [SqueezeTransformation](@ref openvino_docs_IE_DG_lpt_SqueezeTransformation)
-* [ShuffleChannelsTransformation](@ref openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation)
-* [SplitTransformation](@ref openvino_docs_IE_DG_lpt_SplitTransformation)
-* [StridedSliceTransformation](@ref openvino_docs_IE_DG_lpt_StridedSliceTransformation)
-* [TransposeTransformation](@ref openvino_docs_IE_DG_lpt_TransposeTransformation)
-* [UnsqueezeTransformation](@ref openvino_docs_IE_DG_lpt_UnsqueezeTransformation)
-* [VariadicSplitTransformation](@ref openvino_docs_IE_DG_lpt_VariadicSplitTransformation)
+* [AddTransformation](@ref openvino_docs_OV_UG_lpt_AddTransformation)
+* [AvgPoolTransformation](@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation)
+* [ClampTransformation](@ref openvino_docs_OV_UG_lpt_ClampTransformation)
+* [ConcatTransformation](@ref openvino_docs_OV_UG_lpt_ConcatTransformation)
+* [ConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation)
+* [ConvolutionBackpropDataTransformation](@ref openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation)
+* [DepthToSpaceTransformation](@ref openvino_docs_OV_UG_lpt_DepthToSpaceTransformation)
+* [FakeQuantizeDecompositionTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation)
+* [FakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FakeQuantizeTransformation)
+* [InterpolateTransformation](@ref openvino_docs_OV_UG_lpt_InterpolateTransformation)
+* [GroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_GroupConvolutionTransformation)
+* [MatMulTransformation](@ref openvino_docs_OV_UG_lpt_MatMulTransformation)
+* [MaxPoolTransformation](@ref openvino_docs_OV_UG_lpt_MaxPoolTransformation)
+* [MultiplyTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyTransformation)
+* [MVNTransformation](@ref openvino_docs_OV_UG_lpt_MVNTransformation)
+* [NormalizeL2Transformation](@ref openvino_docs_OV_UG_lpt_NormalizeL2Transformation)
+* [PReluTransformation](@ref openvino_docs_OV_UG_lpt_PReluTransformation)
+* [ReduceMaxTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMaxTransformation)
+* [ReduceMeanTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMeanTransformation)
+* [ReduceMinTransformation](@ref openvino_docs_OV_UG_lpt_ReduceMinTransformation)
+* [ReduceSumTransformation](@ref openvino_docs_OV_UG_lpt_ReduceSumTransformation)
+* [ReluTransformation](@ref openvino_docs_OV_UG_lpt_ReluTransformation)
+* [ReshapeTransformation](@ref openvino_docs_OV_UG_lpt_ReshapeTransformation)
+* [SqueezeTransformation](@ref openvino_docs_OV_UG_lpt_SqueezeTransformation)
+* [ShuffleChannelsTransformation](@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation)
+* [SplitTransformation](@ref openvino_docs_OV_UG_lpt_SplitTransformation)
+* [StridedSliceTransformation](@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation)
+* [TransposeTransformation](@ref openvino_docs_OV_UG_lpt_TransposeTransformation)
+* [UnsqueezeTransformation](@ref openvino_docs_OV_UG_lpt_UnsqueezeTransformation)
+* [VariadicSplitTransformation](@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation)
Let's explore the main transformations using an example model. Original model:

@@ -1,8 +1,8 @@
-# Step 4. Cleanup Transformations {#openvino_docs_IE_DG_lpt_step4_cleanup}
+# Step 4. Cleanup Transformations {#openvino_docs_OV_UG_lpt_step4_cleanup}
-* [FoldConvertTransformation](@ref openvino_docs_IE_DG_lpt_FoldConvertTransformation)
-* [FoldFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation)
-* [FuseConvertTransformation](@ref openvino_docs_IE_DG_lpt_FuseConvertTransformation)
-* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation)
-* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_IE_DG_lpt_FuseSubtractToFakeQuantizeTransformation)
-* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_IE_DG_lpt_MultiplyToGroupConvolutionTransformation)
+* [FoldConvertTransformation](@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation)
+* [FoldFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation)
+* [FuseConvertTransformation](@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation)
+* [FuseMultiplyToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation)
+* [FuseSubtractToFakeQuantizeTransformation](@ref openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation)
+* [MultiplyToGroupConvolutionTransformation](@ref openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation)

@@ -1,3 +1,3 @@
-# ConvertSubtractConstant transformation {#openvino_docs_IE_DG_lpt_ConvertSubtractConstant}
+# ConvertSubtractConstant transformation {#openvino_docs_OV_UG_lpt_ConvertSubtractConstant}
 ngraph::pass::low_precision::ConvertSubtractConstant class represents the `ConvertSubtractConstant` transformation.

@@ -1,4 +1,4 @@
-# LinOpSequenceFusion transformation {#openvino_docs_IE_DG_lpt_LinOpSequenceFusion}
+# LinOpSequenceFusion transformation {#openvino_docs_OV_UG_lpt_LinOpSequenceFusion}
 ngraph::pass::LinOpSequenceFusion class represents the `LinOpSequenceFusion` transformation.

@@ -1,3 +1,3 @@
-# PullReshapeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullReshapeThroughDequantization}
+# PullReshapeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization}
 ngraph::pass::low_precision::PullReshapeThroughDequantization class represents the `PullReshapeThroughDequantization` transformation.

@@ -1,3 +1,3 @@
-# PullTransposeThroughDequantization transformation {#openvino_docs_IE_DG_lpt_PullTransposeThroughDequantization}
+# PullTransposeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization}
 ngraph::pass::low_precision::PullTransposeThroughDequantization class represents the `PullTransposeThroughDequantization` transformation.

@@ -1,3 +1,3 @@
-# AlignQuantizationIntervals transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationIntervals}
+# AlignQuantizationIntervals transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationIntervals}
 ngraph::pass::low_precision::AlignQuantizationIntervals class represents the `AlignQuantizationIntervals` transformation.

@@ -1,3 +1,3 @@
-# AlignQuantizationParameters transformation {#openvino_docs_IE_DG_lpt_AlignQuantizationParameters}
+# AlignQuantizationParameters transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationParameters}
 ngraph::pass::low_precision::AlignQuantizationParameters class represents the `AlignQuantizationParameters` transformation.

@@ -1,3 +1,3 @@
-# CreateAttribute transformation {#openvino_docs_IE_DG_lpt_CreateAttribute}
+# CreateAttribute transformation {#openvino_docs_OV_UG_lpt_CreateAttribute}
 ngraph::pass::low_precision::CreateAttribute class represents the `CreateAttribute` transformation.

@@ -1,3 +1,3 @@
-# CreatePrecisionsDependentAttribute transformation {#openvino_docs_IE_DG_lpt_CreatePrecisionsDependentAttribute}
+# CreatePrecisionsDependentAttribute transformation {#openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute}
 ngraph::pass::low_precision::CreatePrecisionsDependentAttribute class represents the `CreatePrecisionsDependentAttribute` transformation.

@@ -1,3 +1,3 @@
-# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_MarkupAvgPoolPrecisionPreserved}
+# MarkupAvgPoolPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved}
 ngraph::pass::low_precision::MarkupAvgPoolPrecisionPreserved class represents the `MarkupAvgPoolPrecisionPreserved` transformation.

@@ -1,3 +1,3 @@
-# MarkupCanBeQuantized transformation {#openvino_docs_IE_DG_lpt_MarkupCanBeQuantized}
+# MarkupCanBeQuantized transformation {#openvino_docs_OV_UG_lpt_MarkupCanBeQuantized}
 ngraph::pass::low_precision::MarkupCanBeQuantized class represents the `MarkupCanBeQuantized` transformation.

@@ -1,3 +1,3 @@
-# MarkupPerTensorQuantization transformation {#openvino_docs_IE_DG_lpt_MarkupPerTensorQuantization}
+# MarkupPerTensorQuantization transformation {#openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization}
 ngraph::pass::low_precision::MarkupPerTensorQuantization class represents the `MarkupPerTensorQuantization` transformation.

@@ -1,3 +1,3 @@
-# MarkupPrecisions transformation {#openvino_docs_IE_DG_lpt_MarkupPrecisions}
+# MarkupPrecisions transformation {#openvino_docs_OV_UG_lpt_MarkupPrecisions}
 ngraph::pass::low_precision::MarkupPrecisions class represents the `MarkupPrecisions` transformation.

@@ -1,3 +1,3 @@
-# PropagatePrecisions transformation {#openvino_docs_IE_DG_lpt_PropagatePrecisions}
+# PropagatePrecisions transformation {#openvino_docs_OV_UG_lpt_PropagatePrecisions}
 ngraph::pass::low_precision::PropagatePrecisions class represents the `PropagatePrecisions` transformation.

@@ -1,3 +1,3 @@
-# PropagateSharedValue transformation {#openvino_docs_IE_DG_lpt_PropagateSharedValue}
+# PropagateSharedValue transformation {#openvino_docs_OV_UG_lpt_PropagateSharedValue}
 ngraph::pass::low_precision::PropagateSharedValue class represents the `PropagateSharedValue` transformation.

@@ -1,3 +1,3 @@
-# PropagateThroughPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_PropagateThroughPrecisionPreserved}
+# PropagateThroughPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved}
 ngraph::pass::low_precision::PropagateThroughPrecisionPreserved class represents the `PropagateThroughPrecisionPreserved` transformation.

@@ -1,3 +1,3 @@
-# PropagateToInput transformation {#openvino_docs_IE_DG_lpt_PropagateToInput}
+# PropagateToInput transformation {#openvino_docs_OV_UG_lpt_PropagateToInput}
 ngraph::pass::low_precision::PropagateToInput class represents the `PropagateToInput` transformation.

@@ -1,3 +1,3 @@
-# UpdateSharedPrecisionPreserved transformation {#openvino_docs_IE_DG_lpt_UpdateSharedPrecisionPreserved}
+# UpdateSharedPrecisionPreserved transformation {#openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved}
 ngraph::pass::low_precision::UpdateSharedPrecisionPreserved class represents the `UpdateSharedPrecisionPreserved` transformation.

@@ -1,3 +1,3 @@
-# ClampTransformation transformation {#openvino_docs_IE_DG_lpt_ClampTransformation}
+# ClampTransformation transformation {#openvino_docs_OV_UG_lpt_ClampTransformation}
 ngraph::pass::low_precision::ClampTransformation class represents the `Clamp` operation transformation.

@@ -1,3 +1,3 @@
-# PReluTransformation transformation {#openvino_docs_IE_DG_lpt_PReluTransformation}
+# PReluTransformation transformation {#openvino_docs_OV_UG_lpt_PReluTransformation}
 ngraph::pass::low_precision::PReluTransformation class represents the `PRelu` operation transformation.

@@ -1,3 +1,3 @@
-# ReluTransformation transformation {#openvino_docs_IE_DG_lpt_ReluTransformation}
+# ReluTransformation transformation {#openvino_docs_OV_UG_lpt_ReluTransformation}
 ngraph::pass::low_precision::ReluTransformation class represents the `Relu` operation transformation.

@@ -1,4 +1,4 @@
-# AddTransformation transformation {#openvino_docs_IE_DG_lpt_AddTransformation}
+# AddTransformation transformation {#openvino_docs_OV_UG_lpt_AddTransformation}
 ngraph::pass::low_precision::AddTransformation class represents the `Add` operation transformation.

@@ -1,3 +1,3 @@
-# MultiplyTransformation transformation {#openvino_docs_IE_DG_lpt_MultiplyTransformation}
+# MultiplyTransformation transformation {#openvino_docs_OV_UG_lpt_MultiplyTransformation}
 ngraph::pass::low_precision::MultiplyTransformation class represents the `Multiply` operation transformation.

@@ -1,3 +1,3 @@
-# SubtractTransformation transformation {#openvino_docs_IE_DG_lpt_SubtractTransformation}
+# SubtractTransformation transformation {#openvino_docs_OV_UG_lpt_SubtractTransformation}
 ngraph::pass::low_precision::SubtractTransformation class represents the `Subtract` operation transformation.

@@ -1,4 +1,4 @@
-# ConvolutionTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionTransformation}
+# ConvolutionTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionTransformation}
 ngraph::pass::low_precision::ConvolutionTransformation class represents the `Convolution` operation transformation.

@@ -1,3 +1,3 @@
-# ConvolutionBackpropDataTransformation transformation {#openvino_docs_IE_DG_lpt_ConvolutionBackpropDataTransformation}
+# ConvolutionBackpropDataTransformation transformation {#openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation}
 ngraph::pass::low_precision::ConvolutionBackpropDataTransformation class represents the `ConvolutionBackpropData` operation transformation.

@@ -1,3 +1,3 @@
-# GroupConvolutionTransformation transformation {#openvino_docs_IE_DG_lpt_GroupConvolutionTransformation}
+# GroupConvolutionTransformation transformation {#openvino_docs_OV_UG_lpt_GroupConvolutionTransformation}
 ngraph::pass::low_precision::GroupConvolutionTransformation class represents the `GroupConvolution` operation transformation.

@@ -1,3 +1,3 @@
-# InterpolateTransformation transformation {#openvino_docs_IE_DG_lpt_InterpolateTransformation}
+# InterpolateTransformation transformation {#openvino_docs_OV_UG_lpt_InterpolateTransformation}
 ngraph::pass::low_precision::InterpolateTransformation class represents the `Interpolate` operation transformation.

@@ -1,3 +1,3 @@
-# MatMulTransformation transformation {#openvino_docs_IE_DG_lpt_MatMulTransformation}
+# MatMulTransformation transformation {#openvino_docs_OV_UG_lpt_MatMulTransformation}
 ngraph::pass::low_precision::MatMulTransformation class represents the `MatMul` operation transformation.

@@ -1,3 +1,3 @@
-# ConcatTransformation transformation {#openvino_docs_IE_DG_lpt_ConcatTransformation}
+# ConcatTransformation transformation {#openvino_docs_OV_UG_lpt_ConcatTransformation}
 ngraph::pass::low_precision::ConcatTransformation class represents the `Concat` operation transformation.

@@ -1,3 +1,3 @@
-# DepthToSpaceTransformation transformation {#openvino_docs_IE_DG_lpt_DepthToSpaceTransformation}
+# DepthToSpaceTransformation transformation {#openvino_docs_OV_UG_lpt_DepthToSpaceTransformation}
 ngraph::pass::low_precision::DepthToSpaceTransformation class represents the `DepthToSpace` operation transformation.

@@ -1,3 +1,3 @@
-# PadTransformation transformation {#openvino_docs_IE_DG_lpt_PadTransformation}
+# PadTransformation transformation {#openvino_docs_OV_UG_lpt_PadTransformation}
 ngraph::pass::low_precision::PadTransformation class represents the `Pad` operation transformation.

@@ -1,3 +1,3 @@
-# ShuffleChannelsTransformation transformation {#openvino_docs_IE_DG_lpt_ShuffleChannelsTransformation}
+# ShuffleChannelsTransformation transformation {#openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation}
 ngraph::pass::low_precision::ShuffleChannelsTransformation class represents the `ShuffleChannels` operation transformation.

@@ -1,3 +1,3 @@
-# SplitTransformation transformation {#openvino_docs_IE_DG_lpt_SplitTransformation}
+# SplitTransformation transformation {#openvino_docs_OV_UG_lpt_SplitTransformation}
 ngraph::pass::low_precision::SplitTransformation class represents the `Split` operation transformation.

@@ -1,3 +1,3 @@
-# StridedSliceTransformation transformation {#openvino_docs_IE_DG_lpt_StridedSliceTransformation}
+# StridedSliceTransformation transformation {#openvino_docs_OV_UG_lpt_StridedSliceTransformation}
 ngraph::pass::low_precision::StridedSliceTransformation class represents the `StridedSlice` operation transformation.

@@ -1,3 +1,3 @@
-# TransposeTransformation transformation {#openvino_docs_IE_DG_lpt_TransposeTransformation}
+# TransposeTransformation transformation {#openvino_docs_OV_UG_lpt_TransposeTransformation}
 ngraph::pass::low_precision::TransposeTransformation class represents the `Transpose` operation transformation.

@@ -1,3 +1,3 @@
-# VariadicSplitTransformation transformation {#openvino_docs_IE_DG_lpt_VariadicSplitTransformation}
+# VariadicSplitTransformation transformation {#openvino_docs_OV_UG_lpt_VariadicSplitTransformation}
 ngraph::pass::low_precision::VariadicSplitTransformation class represents the `VariadicSplit` operation transformation.

@@ -1,3 +1,3 @@
-# MVNTransformation transformation {#openvino_docs_IE_DG_lpt_MVNTransformation}
+# MVNTransformation transformation {#openvino_docs_OV_UG_lpt_MVNTransformation}
 ngraph::pass::low_precision::MVNTransformation class represents the `MVN` operation transformation.

@@ -1,3 +1,3 @@
-# NormalizeL2Transformation transformation {#openvino_docs_IE_DG_lpt_NormalizeL2Transformation}
+# NormalizeL2Transformation transformation {#openvino_docs_OV_UG_lpt_NormalizeL2Transformation}
 ngraph::pass::low_precision::NormalizeL2Transformation class represents the `NormalizeL2` operation transformation.

@@ -1,3 +1,3 @@
-# AvgPoolTransformation transformation {#openvino_docs_IE_DG_lpt_AvgPoolTransformation}
+# AvgPoolTransformation transformation {#openvino_docs_OV_UG_lpt_AvgPoolTransformation}
 ngraph::pass::low_precision::AvgPoolTransformation class represents the `AvgPool` operation transformation.

@@ -1,3 +1,3 @@
-# MaxPoolTransformation transformation {#openvino_docs_IE_DG_lpt_MaxPoolTransformation}
+# MaxPoolTransformation transformation {#openvino_docs_OV_UG_lpt_MaxPoolTransformation}
 ngraph::pass::low_precision::MaxPoolTransformation class represents the `MaxPool` operation transformation.

@@ -1,3 +1,3 @@
-# FakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FakeQuantizeTransformation}
+# FakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FakeQuantizeTransformation}
 ngraph::pass::low_precision::FakeQuantizeTransformation class represents the `FakeQuantize` operation transformation.

@@ -1,3 +1,3 @@
-# FoldFakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FoldFakeQuantizeTransformation}
+# FoldFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation}
 ngraph::pass::low_precision::FoldFakeQuantizeTransformation class represents the `FoldFakeQuantize` operation transformation.

@@ -1,3 +1,3 @@
-# ReduceMaxTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMaxTransformation}
+# ReduceMaxTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMaxTransformation}
 ngraph::pass::low_precision::ReduceMaxTransformation class represents the `ReduceMax` operation transformation.

@@ -1,3 +1,3 @@
-# ReduceMeanTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMeanTransformation}
+# ReduceMeanTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMeanTransformation}
 ngraph::pass::low_precision::ReduceMeanTransformation class represents the `ReduceMean` operation transformation.

@@ -1,3 +1,3 @@
-# ReduceMinTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceMinTransformation}
+# ReduceMinTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceMinTransformation}
 ngraph::pass::low_precision::ReduceMinTransformation class represents the `ReduceMin` operation transformation.

@@ -1,3 +1,3 @@
-# ReduceSumTransformation transformation {#openvino_docs_IE_DG_lpt_ReduceSumTransformation}
+# ReduceSumTransformation transformation {#openvino_docs_OV_UG_lpt_ReduceSumTransformation}
 ngraph::pass::low_precision::ReduceSumTransformation class represents the `ReduceSum` operation transformation.

@@ -1,3 +1,3 @@
-# ReshapeTransformation transformation {#openvino_docs_IE_DG_lpt_ReshapeTransformation}
+# ReshapeTransformation transformation {#openvino_docs_OV_UG_lpt_ReshapeTransformation}
 ngraph::pass::low_precision::ReshapeTransformation class represents the `Reshape` operation transformation.

@@ -1,3 +1,3 @@
-# SqueezeTransformation transformation {#openvino_docs_IE_DG_lpt_SqueezeTransformation}
+# SqueezeTransformation transformation {#openvino_docs_OV_UG_lpt_SqueezeTransformation}
 ngraph::pass::low_precision::SqueezeTransformation class represents the `Squeeze` operation transformation.

@@ -1,3 +1,3 @@
-# UnsqueezeTransformation transformation {#openvino_docs_IE_DG_lpt_UnsqueezeTransformation}
+# UnsqueezeTransformation transformation {#openvino_docs_OV_UG_lpt_UnsqueezeTransformation}
 ngraph::pass::low_precision::UnsqueezeTransformation class represents the `Unsqueeze` operation transformation.

@@ -1,3 +1,3 @@
-# FakeQuantizeDecompositionTransformation transformation {#openvino_docs_IE_DG_lpt_FakeQuantizeDecompositionTransformation}
+# FakeQuantizeDecompositionTransformation transformation {#openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation}
 ngraph::pass::low_precision::FakeQuantizeDecompositionTransformation class represents the `FakeQuantizeDecompositionTransformation` transformation.

@@ -1,3 +1,3 @@
-# FoldConvertTransformation transformation {#openvino_docs_IE_DG_lpt_FoldConvertTransformation}
+# FoldConvertTransformation transformation {#openvino_docs_OV_UG_lpt_FoldConvertTransformation}
 ngraph::pass::low_precision::FoldConvertTransformation class represents the `FoldConvertTransformation` transformation.

@@ -1,3 +1,3 @@
-# FuseConvertTransformation transformation {#openvino_docs_IE_DG_lpt_FuseConvertTransformation}
+# FuseConvertTransformation transformation {#openvino_docs_OV_UG_lpt_FuseConvertTransformation}
 ngraph::pass::low_precision::FuseConvertTransformation class represents the `FuseConvertTransformation` transformation.

@@ -1,3 +1,3 @@
-# FuseMultiplyToFakeQuantizeTransformation transformation {#openvino_docs_IE_DG_lpt_FuseMultiplyToFakeQuantizeTransformation}
+# FuseMultiplyToFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation}
 ngraph::pass::low_precision::FuseMultiplyToFakeQuantizeTransformation class represents the `FuseMultiplyToFakeQuantizeTransformation` transformation.

Some files were not shown because too many files have changed in this diff.