Compare commits

...

239 Commits

Author SHA1 Message Date
Sebastian Golebiewski
ec53191909 [DOCS] Supported Layers update - for 22.2 (#15361)
* LessEqual Not Supported

* porting #13997

* porting #13995
2023-02-08 10:44:29 +01:00
Xiake Sun
1ee54505a0 [Docs] Port fix convert tf crnn model docs for release 22.2 (#15468)
* Port fix convert tf crnn model for release 22.2
2023-02-08 08:46:46 +01:00
Sebastian Golebiewski
a75a93252e [DOCS] Fixing links in 'Install Openvino on Windows from Archive' article - for 22.2 (#15358)
* fix links

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md
2023-02-06 15:25:42 +08:00
Ilya Lavrenov
3b72991477 Migrate SVG files under LFS (#15323) 2023-01-26 16:22:47 +04:00
Sebastian Golebiewski
f1479f19a9 fix indentation (#15237) 2023-01-23 09:54:32 +01:00
Maciej Smyk
7e6e08571a scheme3 (#14873)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-01-20 19:25:32 +04:00
Ilya Lavrenov
08e3ed0966 Added SVG files to lfs (#15230) 2023-01-20 16:35:02 +04:00
Maciej Smyk
645847fae1 default_quantization_flow (#14849) 2023-01-20 14:24:10 +04:00
Sebastian Golebiewski
d8a8daa1bb fix formatting (#15201)
fix formatting and links
2023-01-20 10:26:45 +01:00
Maciej Smyk
f3551dd009 DOCS: Model Caching Overview image recreation for 22.2 (#15024)
* Model Caching Overview

* graph-background-fix
2023-01-20 08:21:30 +01:00
Maciej Smyk
c078192273 DOCS: OpenVINO™ Security Add-on image recreation for 22.2 (#15082)
* Security Add-on

* Update ovsa_example.svg
2023-01-20 08:16:20 +01:00
Tatiana Savina
441496d79b update footer 2022.2 (#15155) 2023-01-16 17:31:11 +00:00
Maciej Smyk
da99f390c4 DOCS: Libraries for Local Distribution image recreation for 22.2 (#14960)
* deployment_full

* Update deployment_full.svg
2023-01-05 23:09:17 +04:00
Sebastian Golebiewski
8d3fa0e6c2 DOCS: Hiding Transition to API 2.0 banner - for 22.2 (#14953)
Using cookies to keep the banner hidden once the user has closed it.
2023-01-05 13:59:43 +01:00
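The cookie mechanism described in the commit above can be sketched roughly as follows. This is an illustrative sketch only; the cookie name (api20BannerClosed) and function names are assumptions, not taken from the actual docs source.

```javascript
// Build the cookie string that records the banner as dismissed,
// so it stays hidden on subsequent visits.
function hideBannerCookie(days = 365) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  return `api20BannerClosed=true; expires=${expires}; path=/`;
}

// Given the value of document.cookie, decide whether the user
// has already closed the banner.
function isBannerDismissed(cookieString) {
  return cookieString
    .split(";")
    .map((c) => c.trim())
    .includes("api20BannerClosed=true");
}

// In the browser, these would be wired up roughly as:
//   closeButton.addEventListener("click", () => {
//     document.cookie = hideBannerCookie();
//     banner.style.display = "none";
//   });
//   if (isBannerDismissed(document.cookie)) banner.style.display = "none";
```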
Sebastian Golebiewski
ae60d612c6 DOCS: Updating Interactive Tutorials - for 22.2 (#14948)
Porting: #14945

Adding new tutorials:
404-style-transfer-webcam
406-3D-pose-estimation-webcam
2023-01-05 13:48:51 +01:00
Maciej Smyk
bd5fc754c7 Fix inference pipeline C++ doc: refer to the correct input blob (#14733) 2023-01-05 01:20:26 +04:00
Maciej Smyk
92987ecb33 yolo_tiny_v1 (#14880) 2023-01-04 11:11:54 +01:00
Yuan Xu
b149ea0e42 add ways to find samples for PyPI installation (#14658) (#14899) 2023-01-04 10:30:17 +08:00
Maciej Smyk
0eae631776 nncf_workflow (#14866) 2023-01-03 17:15:36 +01:00
Maciej Smyk
4105180e99 deployment_simplified (#14855) 2023-01-03 15:59:45 +01:00
Maciej Smyk
97ed68051a autoplugin_accelerate (#14840) 2023-01-03 14:55:34 +01:00
Maciej Smyk
c1dfd56358 DOCS: The LowLatency Transformation images recreation for 22.2 (#14831) 2023-01-03 14:48:00 +01:00
Sebastian Golebiewski
394d3b481a format pre tags (#14914)
Porting:
https://github.com/openvinotoolkit/openvino/pull/14889

This fix addresses word wrapping in <pre> tags in the output HTML files of the documentation.
2023-01-03 13:16:08 +01:00
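A fix of this kind usually comes down to a CSS rule that lets long lines in <pre> blocks wrap instead of overflowing. The rule and helper below are an illustrative sketch, not the actual change in the docs' stylesheet.

```javascript
// CSS that makes <pre> content wrap at the container edge
// instead of producing a horizontal scrollbar.
const PRE_WRAP_CSS = "pre { white-space: pre-wrap; word-break: break-word; }";

// Inject the rule into a document (browser-side usage).
function applyPreWrap(doc) {
  const style = doc.createElement("style");
  style.textContent = PRE_WRAP_CSS;
  doc.head.appendChild(style);
  return style;
}
```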
Yuan Xu
1b90b897d1 remove a space (#14757) 2022-12-21 12:24:13 +03:00
Maciej Smyk
da2aa1aac0 DOCS: Quantization doc rewrites for 22.2 (#14372)
* Update introduction.md

* Update introduction.md

* header fix

* Update Introduction.md

* Update Introduction.md

* graph-fix

* Update Introduction.md

* Update Introduction.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-16 10:01:28 +01:00
Maciej Smyk
ce1aa513a0 DOCS: Low Precision Transformations proofreading for 22.2 (#14068)
* Attributes
* Update lpt_attributes.md
* defines fix
* Update docs/IE_PLUGIN_DG/plugin_transformation_pipeline/low_precision_transformations/lpt_attributes.md
2022-12-16 10:24:11 +03:00
Maciej Smyk
69c166bf3a Stateful models (#14662) 2022-12-15 13:54:12 +01:00
Maciej Smyk
2c48911c49 DOCS: Samples Overview proofreading for 22.2 (#14086) 2022-12-14 10:51:51 +01:00
Maciej Smyk
801deae368 DOCS: Proofreading C Samples for 22.2 (#14121) 2022-12-14 10:50:56 +01:00
Maciej Smyk
448b4bb838 DOCS: Proofreading Samples Python - 22.2 (#14168)
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: totoka-intel <107121967+totoka-intel@users.noreply.github.com>
2022-12-14 10:50:29 +01:00
Maciej Smyk
179cb63b00 DOCS: Proofreading Samples C++ for 22.2 (#14184) 2022-12-14 10:49:29 +01:00
Maciej Smyk
608d002402 DOCS: Proofreading OpenVINO Extensibility for 22.2 (#14032) 2022-12-13 11:19:14 +01:00
Sebastian Golebiewski
ec21e6906b porting #13917 (#14577)
This pull request introduces a significant rewrite of the Get Started page. The rewrite reorganizes the content to add a learning path for new users and provides more links to tutorials and features.

Details:
The same HTML and CSS code is used for the top portion of the page to create the three blue display blocks. Markdown is used to implement the rest of the page.
2022-12-13 15:39:08 +08:00
Maciej Smyk
961abdb0b7 FaceNet (#13792) 2022-12-12 13:12:09 +01:00
Maciej Smyk
bf02b11a63 lm_1b (#13785) 2022-12-12 13:10:19 +01:00
Maciej Smyk
673d61126f NCF_start images (#13788) 2022-12-12 13:07:09 +01:00
Maciej Smyk
79cf494d27 image-fix (#13861) 2022-12-12 13:04:27 +01:00
Maciej Smyk
993857a52b DOCS: Cutting Off Parts of a Model - graph fix for 22.2 (#13853)
* image fix
2022-12-12 11:21:21 +01:00
Sebastian Golebiewski
af945e4913 revert data type compression parameter (#14495) 2022-12-09 14:34:07 +08:00
Sebastian Golebiewski
850b88983f DOCS: Updating 'Create a YOCTO image' article - porting #14130 for 22.2 (#14247)
* Porting #14130

Porting
https://github.com/openvinotoolkit/openvino/pull/14130

This PR addresses the https://jira.devtools.intel.com/browse/CVS-75090 ticket in Jira.
Installation steps in the article have been updated, a troubleshooting section and additional resources have been added.

* reverting the steps

Reverting the installation steps to previous order.
Emphasizing that Step 2 is an example to create the minimal image.

* correcting numbering
2022-12-07 08:38:09 +08:00
Karol Blaszczak
332d4d3b69 Docs reword model support port 22.2 (#14440)
port from master
* Update supported_model_formats.md
2022-12-06 17:29:01 +01:00
Sebastian Golebiewski
f169440f83 DOCS: Updating Readme.md—Post merge port of #13252 for 22.2 (#13474)
* Updating Readme.md - Post merge port of #13252 for 22.2

Applying post merge changes from #13252:

https://github.com/openvinotoolkit/openvino/pull/13252

* Update README.md

* Updating links

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-06 17:17:58 +04:00
Sebastian Golebiewski
ef97282841 Fixing Python API links (#14423)
Fixing the reference to Python API.
2022-12-06 11:55:14 +01:00
Sebastian Golebiewski
49afa6bb06 DOCS: Edits to Basic OpenVINO Workflow page - porting #13807 to 22.2 (#14402)
* Update get_started_demos.md

docs: Update intro and prerequisites
docs: Update Steps 1 - 3
docs: Re-organize CPU, GPU, MYRIAD examples
docs: Change examples header
docs: revise Other Demos/Samples section
docs: Change OpenVINO Runtime install links
Update docs/get_started/get_started_demos.md
docs: edit OpenVINO Runtime section
docs: add link to build from source
docs: change Basic OpenVINO Workflow in toctree
docs: minor edit to OpenVINO Dev Tools section
docs: edit Build Samples section
docs: change Prerequisites section header levels
docs: edits to Step 1
docs: remove links to OMZ Demos build instructions
docs: fix links, remove "the"s , TMs, and *s
Apply suggestions from code review
Update get_started_demos.md

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>
Co-Authored-By: Karol Blaszczak <karol.blaszczak@intel.com>

* Update googlenet-v3_asymmetric.json

Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-06 14:47:39 +08:00
Karol Blaszczak
f303df8a63 reintroduce benchmarks for ovms (#14271)
Recreate performance_benchmarks_ovms.md and add it to TOC of performance_benchmarks.md
Graph files updated accordingly.
2022-12-02 08:36:55 +01:00
Sebastian Golebiewski
1a3a3e89ec Fixing links to API (#14253)
Addressing:
https://jira.devtools.intel.com/browse/CVS-96910

Fixing links to API
2022-11-29 12:50:45 +08:00
Karol Blaszczak
81cb88b6c5 Update OV_flow_optimization_hvr.svg (#14182) 2022-11-23 09:57:51 +01:00
Maciej Smyk
4da2c945d6 tf_openvino (#13780) 2022-11-15 08:03:09 +01:00
Yuan Xu
e57005afcb Install raspbian updates 22/2 (#13798)
* update raspbian installation

* fix formatting

* update

* update unlink command

* update the architecture
2022-11-14 16:54:14 +08:00
Yuan Xu
4dbdba1ac3 update GPU config with info about install_NEO_OCL_driver.sh (#13839)
* update

* Update configurations-for-intel-gpu.md
2022-11-14 16:49:50 +08:00
Sebastian Golebiewski
c6e7336118 DOCS: Fixing a list in Model Optimizer Extensibility for 22.2 (#13350)
A minor fix that creates a list of main transformations responsible for a layout change.
2022-11-09 17:38:52 +01:00
Sebastian Golebiewski
ab52ba5efd DOCS: Fixing formatting in Multi Device for 22.2 (#13296)
Porting:
https://github.com/openvinotoolkit/openvino/pull/13292
2022-11-09 17:02:49 +01:00
Sebastian Golebiewski
8be1ae96bc Updating links in Model Optimization Guide (#13300)
Adding a link to Model Optimizer.
2022-11-09 16:58:54 +01:00
Sebastian Golebiewski
e829bfd858 Fixing indentation in General Optimizations for 22.2 (#13302)
A minor fix that corrects indentation of snippets.
2022-11-09 16:54:17 +01:00
Maciej Smyk
c6b0b9c255 DOCS: Model Conversion Tutorials fix for 22.2 (#13423) 2022-11-09 16:19:54 +01:00
Sebastian Golebiewski
b00cbf59cb DOCS: Language-agnostic version of 'Changing Input Shapes' - for 22.2 (#13813)
Removing the 'global' tabs and preparing a language-agnostic version of the article. Replacing png image with a scalable svg file. Proofreading the article.
2022-11-09 15:29:42 +01:00
Maciej Smyk
1e2c657895 DOCS: Edits to streamline Install OpenVINO Overview Page - Port from master (#13830)
* 13156

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update installing-model-dev-tools.md

* dev-tools-13820

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-04 11:33:13 +03:00
Yuan Xu
7c78f17438 Docs: Update "Install OpenVINO Runtime on Windows from Archive File" page (#13332) (#13846)
* docs: big update to Windows archive install steps

* docs: apply correct note format

* docs: add link to archives

* docs: minor update

* docs: change archive download link to GitHub

* Update docs/install_guides/installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* docs: typo fix

* docs: minor change

* docs: remove "For Python developers" in Software tab

* docs: fix curl command

* docs: clarify that archive install is for C++ users

* docs: add link to PyPI page

* docs: Change back to numbered instructions

* Apply suggestions from code review

* Update installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Evan <evan.juras@gmail.com>
2022-11-04 16:32:27 +08:00
Maciej Smyk
737319b6a0 DOCS: update for consistent usage of OpenVINO Runtime - Port to master (#13829)
* 13154

* Update docs/install_guides/installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-macos-header.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-04 05:21:44 +03:00
Maciej Smyk
cd74d8c668 Update installing-openvino-pip.md (#13827) 2022-11-04 10:03:52 +08:00
Yuan Xu
5e3f0720cd Docs: Update "Install OpenVINO Runtime on Linux from Archive File" page (#13345) (#13824) 2022-11-03 21:28:52 +08:00
Yuan Xu
140cf689a2 Docs: Update "Install OpenVINO Runtime on macOS from Archive File" page (#13347) (#13825)
* docs: Update intro and step 1

* docs: finish updates

* docs: fix duplicate section

* docs: fix curl command

* docs: clarify archive install is for C++ users

* docs: add link to PyPI install page

* docs: minor fixes

* docs: add link to Release Notes

* docs: Change back to numbered instructions

* docs: typo fix

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Apply suggestions from code review

* Update installing-openvino-from-archive-macos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

Co-authored-by: Evan <evan.juras@gmail.com>
2022-11-03 21:28:37 +08:00
Maciej Smyk
73fe0afe3e DOCS: Rewrite "Install OpenVINO Development Tools" page - port to 22.2 (#13820)
* Update installing-model-dev-tools.md

* what's next update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-03 15:29:17 +03:00
Karol Blaszczak
b0c7a05d24 adjust footer content to meet legal requirement (#13776) 2022-11-02 11:24:47 +01:00
Sebastian Golebiewski
31a7187ccb DOCS: Fixing indentation note local distribution for 22.2 (#13338)
A minor fix that corrects the indentation of the note admonition.
2022-11-02 11:15:19 +01:00
Sebastian Golebiewski
894b501bce DOCS: Fix pot best practices for 22.2 (#13301)
A minor fix that corrects the ordered list.
2022-11-02 11:12:08 +01:00
Maciej Smyk
d78bcbe150 DOCS: Fix OpenVINO Deep Learning Workbench Overview (#13238) 2022-11-02 10:17:50 +01:00
Maciej Smyk
897dff88a7 DOCS: API 2.0 Inference Pipeline Fix for 22.2 (#13337)
* Update common_inference_pipeline.md
2022-11-02 10:16:05 +01:00
Sebastian Golebiewski
de3cdf1067 DOCS: Fixing snippet in Optimization for Throughput - for 22.2 (#13341)
A minor fix that corrects the code snippets.
2022-11-02 10:11:37 +01:00
Maciej Smyk
65ac02865f DOC: Fix for archive installation docs for 22.2 (#13297)
* Fix

* Update docs/install_guides/installing-openvino-from-archive-linux.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* fix-repository

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-11-02 17:04:45 +08:00
Sebastian Golebiewski
01822ff343 Fixing list in Deployment Manager Tool - for 22.2 (#13343)
A minor fix that corrects the numbered list of steps in Deploying Package on Target Systems
2022-11-02 10:00:36 +01:00
Maciej Smyk
59e2f86f9b Model-Formats-fix (#13352)
Standardized Additional Resources section for Supported Model Formats along with fixing some ref links in articles.
2022-11-02 09:58:35 +01:00
Maciej Smyk
d966611e28 DOCS: Model Optimizer Usage fix for 22.2 (#13453)
* ref link fix
2022-11-02 09:56:13 +01:00
Sebastian Golebiewski
d1d95ff5fc Fixing indentation in Build Plugin Using CMake (#13467)
Minor fixes that correct indentation of code snippets and note admonitions.

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
2022-11-02 09:51:17 +01:00
Sebastian Golebiewski
28ee91fa46 DOCS: Replace image in Protecting Model Guide - for 22.2 (#13629)
Changing the png image to scalable svg format.
2022-11-02 09:47:42 +01:00
Sebastian Golebiewski
d771eb44f4 DOCS: Updating the large_batch_approach.svg image (#13703)
* Updating the Large Batch Approach.svg image
2022-10-28 08:24:43 +02:00
Sebastian Golebiewski
27b76dba44 DOCS: Improving Readability of Further Low-Level Implementation Details - for 22.2 (#13520)
* Improving Readability of Further Low-Level Implementation Details

The changes include recreation of the graphics to improve the readability of the article. Minor proofreading corrections have been applied as well.
2022-10-27 15:46:11 +02:00
Maciej Smyk
7be33a6079 DOCS: Security Add-on directive fix for 22.2 (#13259)
* Update ovsa_get_started.md
2022-10-27 08:25:59 +02:00
Karol Blaszczak
d437958466 DOCS-diagram_fix for 22.2 (#13634) 2022-10-25 16:32:26 +02:00
Sebastian Golebiewski
9d6b193201 Fixing indentation in Plugin Testing - for 22.2 (#13375)
A minor fix that corrects indentation of snippets.
2022-10-25 12:47:48 +02:00
Maciej Smyk
a57e3a9697 DOCS: Fix for Runtime Inference - 22.2 (#13549)
Fixed the following issues:

Switching between C++ and Python docs for "Shape Inference",
Removed repetitions,
Quote background in bullet list at the beginning of "Multi-device execution",
Broken note directives,
Fixed video player size in "Inference with OpenVINO Runtime",
Standardized Additional Resources throughout Runtime Inference.
2022-10-25 12:42:23 +02:00
Yuan Xu
fe1954aa25 update troubleshooting parent page (#13228)
* update wording
2022-10-25 12:25:38 +02:00
Sebastian Golebiewski
bc8582469e DOCS: Update GPU_Extensibility.md - for 22.2 (#13404)
Minor fixes, including indentation of code snippet and removing unordered list in Debugging Tips.
2022-10-18 18:24:23 +02:00
Sebastian Golebiewski
4795a4ac4a DOCS: Updating link to OMZ Demos (#13256)
* DOCS: Updating link to OMZ Demos

Changing the version of the docs to which the link directs.

* Update integrate_with_your_application.md
2022-10-17 10:15:55 +02:00
Karol Blaszczak
33960aa4e8 DOCS - benchmarks table update 22.2 (#13437)
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
2022-10-13 07:18:49 +04:00
Sebastian Golebiewski
cad5b795f8 DOCS: Fixing Readme for 2022.2 (#13239)
Minor linguistic corrections and fixing links
2022-10-12 19:04:13 +02:00
Maciej Smyk
1ed2e8b156 DOCS: NNCF Fix for 22.2 (#13215) 2022-10-12 17:08:19 +02:00
Haiqi Pan
e4d599713a Correct the GPU number that triggers the CPU removal in CTPUT (#13278) 2022-10-12 15:00:38 +08:00
Sebastian Golebiewski
b5562c4ddf DOCS: Fixing version selector dropdown for 22.2 (#13241)
* DOCS: Fixing version selector dropdown for 22.2

Fixing the version selector dropdown to avoid a horizontal scrollbar and trimmed text.

Porting:
https://github.com/openvinotoolkit/openvino/pull/13187

* Adding overflow

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-09-28 16:17:43 +04:00
Sebastian Golebiewski
3c9745990c DOCS: Fixing math equation for 22.2 (#13211)
A small fix for a math equation that was not rendered properly.

Porting:
https://github.com/openvinotoolkit/openvino/pull/13210
2022-09-28 15:36:32 +04:00
Karol Blaszczak
128e950d49 Update ov_chart.png (#13192)
Update ov_chart.png
2022-09-28 08:43:05 +02:00
Karol Blaszczak
f4c8920cf3 Update performance_int8_vs_fp32.md (#13191) 2022-09-28 08:32:19 +02:00
Yuan Xu
67934ce37e Fix a link for install archive pages (#13230)
* update links

* update for test

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-09-27 21:48:53 +04:00
Karol Blaszczak
d183c1ca44 DOCS-homepage-adjustment (#13208)
* DOCS-homepage-adjustment

adjustment in 2 images

* Update docs/Documentation/deployment_guide_introduction.md
2022-09-27 14:30:38 +04:00
Yuan Xu
f43b1ef805 update linux section (#13123) 2022-09-26 13:30:10 +04:00
Wang Wangwang
dc8fcaf6e2 Docs: Update the doc on how to manually set operations affinity & fix some spelling errors (#13204)
* Docs: Fix spelling errors

* Docs: Update the doc on how to manually set operations affinity
2022-09-26 10:39:41 +04:00
Maciej Smyk
c8c5c2eb14 Update Extending_Model_Optimizer_with_Caffe_Python_Layers.md (#13142) 2022-09-23 01:43:51 +04:00
Trawinski, Dariusz
7f01b0a8eb test ipython extension (#13175) 2022-09-22 21:13:21 +04:00
Karol Blaszczak
0b5cd796d4 Docs benchmarks table update (#13174)
* update table

* remove mask rcnn resnet
2022-09-22 21:03:11 +04:00
Sebastian Golebiewski
f361cc2d6b DOCS: NNCF documentation for 22.2 (#13173)
* Updating NNCF documentation

* nncf-doc-update-ms

* Adding python files

* Changing ID of Range Supervision

* Minor fixes

Fixing formatting and renaming ID

* Proofreading

Minor corrections and removal of Neural Network Compression Framework article

Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
2022-09-22 21:01:55 +04:00
Karol Blaszczak
2e8acae6f2 Docs benchmarks page update port22.2 (#13165)
* update page and benchmark config data

benchmarks articles
update data tables
delete image

* hide / remove ovms benchmarks page

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-09-22 17:04:45 +04:00
Ilya Churaev
ea92b38c44 Test change (#13169)
* Test change

* New change

* Disabled docs for linux

* Added new file to check

* Try to fix CI

* Additional try

* Remove redundant change

* Fixed configuration

* Enabled for .ci changes

* Revert "Added new file to check"

This reverts commit da05ad4bd4.

* Revert "Test change"

This reverts commit 6f670d6112.

* Revert "New change"

This reverts commit efeccd6537.
2022-09-22 16:04:08 +04:00
Ilya Churaev
20d2477124 Update CI trigger rules (#13167)
* Update CI trigger rules

* Code test change

* Revert "Code test change"

This reverts commit 086bde7ca8.

* Test change

* Fixed CI

* Revert "Test change"

This reverts commit c72c9077cd.
2022-09-22 15:13:17 +04:00
Karol Blaszczak
356289adc1 test for build failures (#13145)
notebooks link seems to be breaking documentation builds
2022-09-21 19:20:05 +04:00
Sebastian Golebiewski
3717201e99 DOCS: New Tutorials homepage for 22.2 version (#13080)
* DOCS: New Tutorials homepage for 22.2 version

Updating tutorials homepage and including notebooks generated on 13.09.2022:

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/

* Update requirements.txt

* Update requirements.txt

* Update notebooks-installation.md

* Update tutorials.md
2022-09-21 11:31:22 +04:00
Maciej Smyk
5bf210cbea DOCS: Fix OpenVINO Extensibility for 22.2 (#13108)
* Extensibility-fix

* Extensibility-fix-2

* Update Customize_Model_Optimizer.md

* Update Customize_Model_Optimizer.md
2022-09-20 15:07:53 +04:00
Maciej Smyk
6835565610 Media-Processing (#13086) 2022-09-20 15:07:42 +04:00
Sebastian Golebiewski
2d2af81a08 DOCS: Fix in Protecting Model (#13109)
* DOCS: Fix in Protecting Model

A small fix for a broken reference link to the schematic in the "Experimental: Protecting Deep Learning Model" article

* Update README.md
2022-09-19 23:23:49 +04:00
Sebastian Golebiewski
f08632615c DOCS: Fixing formatting in Samples (#13085)
Fixing incorrectly numbered lists and indentation of code blocks.
2022-09-19 23:20:12 +04:00
Maciej Smyk
db49a6b662 Update ovsa_get_started.md (#13087) 2022-09-19 18:40:07 +04:00
Sebastian Golebiewski
b5db7ec6b1 DOCS: Fixing Model Representation for 22.2 (#13088)
* DOCS: Fixing Model Representation for 22.2

Fixing the snippets in tabs.

A follow up of:
https://github.com/openvinotoolkit/openvino/pull/12495/

* Update model_representation.md

Changing "See Also" to "Additional Resources"

* Update model_representation.md

* Update model_representation.md

* Update model_representation.md

* Update model_representation.md
2022-09-19 18:39:02 +04:00
Sebastian Golebiewski
12ca62bed5 DOCS: Fixing broken link to PaddleClas for 22.2 (#13101)
A small fix for a broken link to PaddleClas
2022-09-19 18:38:54 +04:00
Sebastian Golebiewski
465f19ae60 DOCS: Fixing missing heading in Infer Request for 22.2 (#13103)
Fixing missing "Examples of Infer Request Usages" heading.
2022-09-19 18:38:34 +04:00
Sebastian Golebiewski
4066218fa0 DOCS: Fixing note admonition in Pytorch Convert RNNT for 22.2 (#13104)
A small fix for a broken note admonition in "Converting a PyTorch RNN-T Model" article.
2022-09-19 18:38:13 +04:00
Sebastian Golebiewski
9c652198f0 DOCS: Fixing link to General Optimizations article for 22.2 (#13106)
A small fix for a broken link.
2022-09-19 18:35:35 +04:00
Sebastian Golebiewski
8856b95234 DOCS: Fixing link to MobileNetV1 FPN for 22.2 (#13107)
* DOCS: Fixing link to MobileNetV1 FPN for 22.2

A small fix for a broken link to MobileNetV1 FPN model in "Quantizing Object Detection Model with Accuracy Control" article

* Update README.md

Fixing broken code block.
2022-09-19 18:34:39 +04:00
Maciej Smyk
944c8b7fb5 Update openvino_ecosystem.md (#13098) 2022-09-19 18:33:51 +04:00
Yuan Xu
5792a4a6df Fix language switcher for 22/2 (#13076)
* port fix from master

* Revert "port fix from master"

This reverts commit 903abd946a.

* Revert "Revert "port fix from master""

This reverts commit 63e1e944a0.
2022-09-19 16:50:46 +04:00
Karol Blaszczak
8fa3b23c6d DOCS-doc_structure_step_2 - recreated (#13082)
* DOCS-doc_structure_step_2

- adjustments to the previous change based on feedback
- changes focusing on ModelOptimizer section to mitigate the removal of ONNX and PdPd articles

* remove 2 files we brought back after 22.1
2022-09-19 16:50:29 +04:00
Sebastian Golebiewski
d83741f433 Porting: Change notebooks fetching link for documentation #12750 (#13046)
* Porting: Change notebooks fetching link for documentation

Porting:

#12750

There are newly generated files (since 30.08.2022) that seem to be fine, but apparently "latest" is not built in the docs:

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/

The question remains why that is.

* Update consts.py

Updating to the most recent version from 13.09

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/
2022-09-19 16:48:29 +04:00
Karol Blaszczak
9a0a0c4e2c [DOC][CPU] Updated CPU supported i/o precisions map (#12839) (#13077)
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
2022-09-19 16:48:14 +04:00
Karol Blaszczak
37a0278204 change 2 images for asynch mode (#13071)
changing two screenshots in "general optimizations" to one comparison csv image
2022-09-19 16:47:56 +04:00
Karol Blaszczak
48ea77df85 change hello reshape ssd sample port to 22.2 (#12657) (#13068)
* change hello reshape ssd sample (#12657)

ssdlite_mobilenet_v2 changed to mobilenet-ssd, as per J. Espinoza's request, to fix 84516

* one more correction of mobilnet
2022-09-19 16:47:15 +04:00
Karol Blaszczak
4451ef7d42 Math equation in POT - port to 22.2 (#13067) 2022-09-19 16:46:49 +04:00
Sebastian Golebiewski
427900eca7 DOCS-homepage-restyling-pt1-to-22.2 (#12300)
Porting to 2022.2 branch
2022-09-19 16:46:41 +04:00
yanlan song
72c3bf222b fix coredump when quit benchmark_app (#13026)
* fix coredump when quit benchmark_app

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* add macro to handle CPU not built

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
2022-09-15 16:47:11 +08:00
Artyom Anokhov
80f1677c2c Updating archive names in qsg (#12927) 2022-09-15 10:36:37 +02:00
Roman Kazantsev
7d184040eb [Frontend, TF FE] Fix RTTI for ConversionExtension on MacOS (#13039)
* [Frontend, TF FE] Fix RTTI for ConversionExtension on MacOS

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Put only destructor into cpp

* Remove extra white-space

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-09-15 07:47:20 +03:00
Yuan Xu
caaef49639 add troubleshooting item for PRC users (#12908)
* add troubleshooting item for PRC users

* updates

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* add trusted host back

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-09-14 12:17:38 +04:00
Evgenya Stepyreva
af16ea1d79 Revert "Fix experimental detectron do ref impl (#10621)" (#12683) (#13009)
* Revert "Fix experimental detectron do ref impl (#10621)"

This reverts commit d87233863d.

* Disabled Experimental Detectron per agreement with GPU team. Ticket to fix it: 90209
2022-09-12 18:16:13 +04:00
Mateusz Tabaka
dcc8f926e1 [ONNX] Update external data location in Constant nodes (#12992)
Ticket: 91271
2022-09-09 20:22:11 +03:00
Ekaterina Aidova
c0762847a7 openvino-dev return opencv-python back (#12957) 2022-09-09 18:48:47 +03:00
Sergey Shlyapnikov
af29d221b4 [GPU] Add NV12 -> Grayscale mode support (#12988)
* [GPU] Add NV12 -> Grayscale mode support

* Fix uv plane shape
2022-09-09 19:00:37 +04:00
mei, yang
0f5a45c875 add GenerateProposals single layer test (#12967) 2022-09-08 20:38:10 +04:00
yanlan song
facf990dfd fix inconsistent tbb config due to executor used in multi (#12929)
* fix inconsistent tbb config due to executor used in multi

Signed-off-by: fishbell <bell.song@intel.com>

* refine comment

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
2022-09-08 13:34:22 +08:00
Ilya Churaev
eb24795c66 Fix tbb for macos 22.2 (#12952)
* Fixed build for TBB which uses pre-release functions

* Disable TBB only for macOS

* Changed condition
2022-09-07 19:59:01 +04:00
Yuan Xu
e21b51a53b Fix a link anchor for pypi page (#12950)
* fix the link for pip

* update the <a> tags
2022-09-07 13:42:44 +04:00
Yuan Xu
b2c00c66a7 fix the link for pip (#12946) 2022-09-07 10:17:42 +04:00
Tomasz Dołbniak
d84da15de5 Use absolute path in some cpuFuncTests (#12902)
* Use absolute path in some cpuFuncTests

* Missing include
2022-09-06 11:57:27 +03:00
Mateusz Bencer
917a465a00 added op check tests for RDFT and IRDFT (#12918) 2022-09-06 12:53:26 +04:00
Yuan Xu
320ed5b94c add new articles for using binaries (#12216)
* Add Overview page

* Revert "Add Overview page"

* init (#11985)

* [GPU] Pass convolution unit tests on DG2 (#12056)

* scale -> eltwise

* Proofreading-OV-Runtime (#11658)

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/optimization_guide/dldt_deployment_optimization_common.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/ov_dynamic_shapes.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/config_properties.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/performance_hints.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Apply suggestions from code review

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update ref links

* Update Getting_performance_numbers.md

* Update deployment_intro.md

* Update preprocessing_details.md

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update tools/pot/openvino/tools/pot/algorithms/quantization/default/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/ShapeInference.md

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Additional_Optimizations.md

Removing redundant information.

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/SaturationIssue.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/openvino/tools/pot/algorithms/quantization/accuracy_aware/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

* Update tools/pot/docs/Introduction.md

* Update tools/pot/docs/AccuracyAwareQuantizationUsage.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Removing one-liners

Removing introductory sentences from 'Supported Features' sections.

* Update docs/OV_Runtime_UG/openvino_intro.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/benchmarks/performance_benchmarks_ovms.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/DefaultQuantizationUsage.md

* Update tools/pot/docs/BestPractices.md

* Update tools/pot/docs/BestPractices.md

* Update tools/pot/docs/AccuracyAwareQuantizationUsage.md

* Update docs/optimization_guide/model_optimization_guide.md

* Update docs/optimization_guide/dldt_deployment_optimization_guide.md

* Update docs/OV_Runtime_UG/supported_plugins/config_properties.md

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

* Update docs/OV_Runtime_UG/preprocessing_usecase_save.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>

* updated to fuse activation in eltwise_vload8 (#12084)

* [GPU] Fix gather data type issue (#12085) (#12085)

* setting tput as the default performance mode only for AUTO, excluding MULTI plugin. (#12083)

Signed-off-by: ywang2 <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>

* [C API][COVERITY SCAN]Fix the TAINTED_SCALAR and DEADCODE in Coverity Scan (#12087)

* Fix the Coverity scan issues

* Fix the insecure data handling (TAINTED_SCALAR) issue found in coverity scan

* [hotfix] pytest error of act_act example (#12093)

* [hotfix] pytest error of act_act example

* remove needless import

* NonZero operation: uncomment tests since they can be passed now (#11548)

* NonZero operation: uncomment tests since they can be passed now

# Conflicts:
#	src/tests/functional/plugin/cpu/shared_tests_instances/skip_tests_config.cpp

* Unbreak tests once more by changing base class from LayerTestsCommon to SubgraphBaseTest

* Unbreak compilation / style

* Add test case for cache

Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>

* Increase zeroes count for NonZero tests

* Correct the change

* Remove my previous changes and add dynamic shapes / repeatable shapes into the correct file

Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>

* [SAMPLES] Remove unused commandline arguments for speech_sample (#11892)

* GNA SF propagation fix (#11806)

* Fix the uninitialized value issue found in Coverity Scan (#12098)

* [GPU] Assign-6 and ReadValue-6 (#11780)

* Add methods for accessing variable information in the Program class

* add ReadValue and Assign primitives

* ReadValue and Assign implementations

* Implementation of memory states allocation

* Add output existence check in primitive_inst to avoid crashes if output is set during execution

* Add memory states management functionality in network component

* Integration of memory states feature in inference request component

* Exclude constant path for read_value and assign nodes in cldnn transformations

* Improve memory states test to run on a single inference request

* unit tests for ReadValue and Assign

* single-layer test for ReadValue and Assign

* Add QueryState API implementation

* Add memory state test which covers dynamic batch case

Co-authored-by: Oleksii Khovan <okhovan@lohika.com>

* [GNA] Add automatic model splitting for compiled graphs (#12001)

* DOCS-code-reference-css-style-change (#12109)

code formatting changed from blue to black, to distinguish from links

* Virtual destructor for the base class (#12102)

* [GPU] Pass Resample unit tests on DG2 (#12052)

* fix validate_fusings_gpu error
* fix biased scale testcase

* [GPU] Pass lrn unit tests on DG2 (#11986)

* [GPU] Pass reduce unit tests on DG2 (#12086)

* scale to eltwise

* [CPU] Move cpu_dump_check into CPU plugin's tools folder (#12100)

* Move cpu_dump_check into CPU plugin's tools folder

* remove cpu from names

* Update README

* Zlib update to 1.12.2 (#12128)

* [GNA] Reduce impact of sf propagation fix (#12115)

* [GPU] Simplify namespaces in the plugin part (#12121)

* [GNA] Add support for future devices with relaxed capabilities (#12000)

* [GPU] Pass eltwise unit tests on DG2 (#12113)

* check fusion in onednn too

* [GPU] modify fusing condition for reduce (#12119)

Signed-off-by: Min, Byungil <byungil.min@intel.com>

* Enable tensor offset to GemmKernelRef for input padding support (#12133)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* [PYTHON][BENCHMARK_APP] Add BGR convert to Gray function (#12118)

* Fix the JIRA 80700 issue. Add BGR convert to Gray function

* Support NCHW and NHWC

Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* [CPU] revert pr 11990 and enable brgconv avx512 on SPR by default (#12105)

* polish onednn cc readme (#12114)

* [ONNX] Add operator com.microsoft.Fusedgemm support into frontend/onnx (#11878)

* [GPU] Implement NMS-9 operation (#11890)

* Fix GPU NonMaxSuppression implementation

* Introduce Nms9 single layer tests

* Adapt internal NMS and GPU implementation for NMS9 implementation

* Adapt CPU implementation in GPU for NMS9

* Add blocked layouts support to NMS

* Add unit tests for blocked formats for NMS

* Fix boxes groups size for the small shapes

* Use ocl implementation for blocked layout input

* Fix templates typedefs to pass win build

* Fix second output to set data in correct format

* [POT] optimizer - update usage of IndexSampler (#12146)

* Revert "[GPU] Pass activation unit tests on DG2 (#11969)" (#12167)

This reverts commit 3334e8933c.

* Fix IRDFT for case when axes are in reversed order (#12155)

* [MO] Fix output shape bug in GatherNDDecomposition (#12110)

* [GPU] Add reorder from i32 to f32 for max-pooling/conv/fc which doesn't support i32 (#12137)

* Update pypi.org pages (#12170)

* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2

* Ubuntu 22.04 support (#11472)

* Ubuntu 22.04 support

* Try to fix setuptools

* Try to fix arm

* Try to add more packages

* Test 2

* test 3

* Turn dependencies download off

* Fix

* Fix

* Fix

* Fix

* Fix

* test

* Fix

* restore everything

* Try to restore

* Restore install_openvino_dependencies.sh

* clean-up raspbian

* New if conditions

* Removed excess dependencies

* Cosmetic changes

* Removed autools

* Removed libgtk-2

* Added HDDL libs

* Test

* Removed some dependencies

* Better fixes

* Removed some dependencies

* Fixed compilation

* Removed all extra

* [GPU]  optimize permute_ref  (#12159)

* change memory access pattern of fsv layout for permute

* Fix permute_ref to process F first only when (bf...) => (b...f)

* Refactor

Co-authored-by: si-eun-kim <sieun.kim@intel.com>

* Update of naming of the last operators in the graph (#12139)

* Update opset.md with opset9 (#12169)

* [GPU] integrate persistent caching for onednn (#12094)

* integrate persistent caching for onednn
* add api to save/load binary file.

* Check memory allocation size of network graph (#11911)

+ Add exception handling for out of resource

* TI repetitive shape inference (#12178)

* Fixes for system libraries pugixml, tbb (#12206)

* Fixes for system libraries pugixml, tbb

* Added more dependencies for core

* Debian packages: base version (#11387)

* Xp/benchmark app ocl (#12112)

* Add a tip about enabling OpenCL for benchmark_app.

Signed-off-by: xipingya <xiping.yan@intel.com>

* Export doesn't work; we need to add -Dopencl_root_hints=[PATH]/OpenCL-CLHPP/include to the cmake command.

Signed-off-by: xipingya <xiping.yan@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>

* ONNX: Pass name to the InputEdge (#12177)

* [IE TESTS][CONFORMANCE] Fix OpImplCheck Precision (#12148)

* add new article for using binaries

* [PyOV][DOCS] Python API contribution and developer guide (#12145)

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* [DOC][CPU] Denormals optimization doc (#12127)

* Use system pugixml where it's possible (#12218)

* Restore FEM to be static instance (#12219)

* Restore FEM to be static instance

* Restore frontend manager in ie_read_network.cpp

* [MO] Fix TopK partial shape inference with dynamic K (#12212)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
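The TopK fix above concerns partial shape inference. A minimal sketch of the intended behavior (illustrative only, not the actual Model Optimizer code; `None` stands in for a dynamic dimension):

```python
# Sketch of TopK partial shape inference: when K is dynamic (unknown at
# conversion time), the output dimension along `axis` must stay dynamic
# rather than being computed from a bogus static value.
def topk_output_shape(data_shape, k, axis):
    out = list(data_shape)
    # dynamic K -> dynamic output dim; static K -> exactly k elements kept
    out[axis] = None if k is None else k
    return out

print(topk_output_shape([1, 8, 100], 5, 2))     # [1, 8, 5]
print(topk_output_shape([1, 8, 100], None, 2))  # [1, 8, None]
```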

* [CPU] Fixed heap sort bug regarding heapifying (#12221)

* [CPU] Explicitly enable DNNL_VERBOSE only in case of CPU_DEBUG_CAPS (#12108)

* [GNA] Fixed convolutions with shared transpose and un-fuse-able activations after Convolution filter (Renew PR11373) (#12152)

* Commits from PR11373:
Fixed handling of transpose after convolution
[GNA] Fixed calculation of dimensions for ConvolutionFilter and PWL primitives
[GNA] Fixed coverity error and failed tests

* Apply comments

* Update src/plugins/intel_gna/gna_graph_compiler.cpp

Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* Update src/plugins/intel_gna/gna_graph_compiler.cpp

Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* Rollback names

* Separate test data

* Move coverity issue to separate request

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* [GNA] Fix accuracy degradation in compact mode (#12150)

* [TF FE] Handle optional attributes for Convolutional operations (#12230)

* [TF FE] Handle optional attributes for Convolutional operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* update the information for pypi.org pages

* [GPU] ROIAlign v9 support (#11899)

* ROIAlign v9 support

* Code changes after review1

* Code changes after review2

* fix of single layer test for Windows

* Since PR #12043 we no longer need a strict include order of primitive_base.hpp and
impls/implementation_map.hpp

* Code changes after review3

* Code changes after review4

* update the verifying checksum step

* Fixed Windows backslash paths (#12250)

* update install_dir info

* Move GNU build flag to "cmake/developer_package/compile_flags/sdl.cmake" (#12143)

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* [MO] Fix Mul fusion with dynamic dimension (#12253)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* updates

* update wording for pypi.org

* Fixed newAPI for case if core was removed (#12207)

* Fixed newAPI for case if core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

* Install user provided TBB as well (#12260)

* Disable loading of v7 reader for new IR versions (#12252)

* Disable loading of v7 reader for new IR versions

* Try to fix CI

* Fixed PDPD frontend

* Fixed error message creation

* Fixes for cases when TBB_DIR env var is set (#12266)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* [GPU] Get rid of direct layout::size field usages  (#12172)

* [GPU] Get rid of direct layout::size field usages to simplify further replacement

* [GPU] Enabled -Wall and resolved compiler complaints

* Update summarize.py (#12175)

* [CPU] Add RDFT and IRDFT operators (#12099)

* [CPU] Add RDFT and IRDFT operators

Tickets: 79178 and 79192

Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>

* Remove Interpolate Transposes as it does nothing (#12205)

* [TF FE] Implement LinSpace and BatchMatMul translators (#12271)

* [TF FE] Implement LinSpace and BatchMatMul translators

It helps convert the STN model (from e2e testing) using the TensorFlow frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix BatchMatMul translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix LinSpace operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update error message on pypi.org (#12243)

* Add Overview page

* Revert "Add Overview page"

* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2

* port changes to master

* update description

* update commands and uninstallation

* Add const fold check in operators instead pass (#12189)

* Add const fold check in operators instead pass
- refactor constant fold pass to using ov instead of ngraph
- add constant_folding_is_disabled overload for raw pointer

* Remove Reshape from skip const inferences
in legacy graph transformer

* Const fold test for modified operators

* [GPU] Use int64_t type for axis in softmax (#12287)

* remove obsolete info from source files to avoid confusion

* [DOC] [CPU] Proofreading for grammatical and stylistic corrections (#12288)

* Porting to master - update -readme for CPP and Python benchmark (#12245)

Porting #11961

* Fixed build_samples.sh not to call setupvars.sh for Debian package case (#12309)

* Investigate GNA tests (#12267)

* Test commit

* Revert "Disable loading of v7 reader for new IR versions (#12252)"

This reverts commit cb6ca7bb89.

* Revert "Test commit"

This reverts commit 977b83f2ba.

* [PyOV] Test refactoring (#12248)

* [GNA] Add missing support for batch normalization with weights broadcasting. Add unit tests. (#12301)

* Xiaoxia/onetbb old version (#12303)

* support oneTBB old version

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* simple Windows installer POC (#12308)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* First version of Windows installer

* Windows NSIS installer

* [GPU] Fix get_default_params & choose_impl not to dependent on program_node  (#12239)

* Getting rid of dependency from get_default_param for typed_program_node

* Fix bug

* Enable two paths to call choose_impl / does_possible_impl_exists / does_an_impl_exists to be able to use a given layout

* Replaced impl factory API to get kernel_impl_param's pointer

* Update for recently added primitives

* Add and apply optional_layout

* fix kernel_param_impl to be handled as unique_ptr

* Applied review comments

* Fix rebase conflict

* Fix CI error

* [CC]Fix CC issue for transformation (#12292)

* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue to make it work

* Fix matcher name missing issue

* [TF FE] Fix conversion of NetVLAD model (#12328)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [MO] Fix broken port numbering for Constant operations (#12318)

* Restore inputs order in IR Reader

* Fix broken port numbering for Constant operations

Co-authored-by: Chetverikov <anton.chetverikov@intel.com>

* [GPU] Align TopK parameters with ngraph (#12278)

* [GPU] Use int64_t type for axis in CumSum (#12306)

* [GPU] Use int64_t type for axis in ScatterElementsUpdate (#12323)

* Bump OMZ submodule to fix pip-conflict issues (#12320)

* [PyOV] Enable type casters (#12204)

* add type caster for ov::Layout, enable load method to take pathlib.Path as argument

* fix typo

* fix style

* add missing blank line

* add common function to check if py::object is either Path or string

* fix style

* Update src/bindings/python/src/pyopenvino/graph/preprocess/pre_post_process.hpp

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* add tests, fix style, remove pointer argument overload

* fix style

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
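The type-caster change above lets Python APIs accept either a string or a `pathlib.Path`. A Python-side sketch of the equivalent normalization (illustrative; the real check lives in a pybind11 caster, and `normalize_path` is a hypothetical helper name):

```python
from pathlib import Path

# Sketch: accept either str or pathlib.Path at the API boundary and
# normalize to one string form before passing it on.
def normalize_path(p):
    if isinstance(p, (str, Path)):
        return str(p)
    raise TypeError(f"expected str or pathlib.Path, got {type(p).__name__}")

print(normalize_path(Path("model.xml")))  # model.xml
print(normalize_path("model.onnx"))       # model.onnx
```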

* [GNA] Replace GNA SoftSign by opset9 SoftSign (#12302)

* Replace GNA SoftSign by opset9 SoftSign

* v9 -> opset9

* [GPU] ScatterUpdate axis alignment (#12233)

* [GPU] added is_dynamic methods to program_node and primitive_inst. Minor refactoring (#12322)

* updates

* [GPU] Remove dependency to typed_program_node from calc_output_layout (#12378)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Use static pointers to frontend libraries (#12235)

* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- not use static FEM in ie network reader
- add main for gtest which can use manifest file to filter tests

* Move library pointers map to manger impl
- add to manger impl method to make frontend from loaded plugin

* Add shutdown function to ov namespace
it cleans the static resources

* Revert changes related to linking mian for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

(cherry picked from commit a8395bd207)

* Added C bindings

(cherry picked from commit d2c9ddc263)

* Move frontend lib close test to ieFunctTest
- moved to avoid introducing a new test binary and CI modifications;
  the frontend tests use a dynamically linked frontend lib which is loaded
  on test application start and masks lib close tests
- remove gtest_main_manifest as not required now
- add ov::shutdown test to expect application crash

* Fix lib_close test
- remove unneeded get_disabled_tests from utils
- revert CMake file formatting

* Fix get model path in lib close tests

* Skip frontend lib close tests if static lib build

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

* Decompose NormalizeL2 on GPU (#12361)

* [TF FE] Implement translators for TensorFlow ConvBackpropInput operations (#12356)

* [TF FE] Implement ConvBackPropInput translators

Now the translators support the dynamic input_sizes attribute and different padding modes,
including EXPLICIT mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix clang-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback and fix build issues

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback: check for input size

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix retrieving explicit_padding attribute

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code style

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add debug log showing the result transformation callback (#12365)

* [AUGRU] AUGRUCell/Sequence op specification (#12162)

* [GPU] Add exception handling for calc_output_layout (#12393)

* Add exception handling for calc_output_layout

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Apply comment to error handler

Signed-off-by: Andrew Park <andrew.park@intel.com>

* [GPU]get data type of conv weights from node.weights() when network is internal (#12232)

* get data type of convolution weights from node.weights() when network is internal

* use only instance.node.weights().get_output_layout().data_type

* fix typo

* add unit test for the case

* Update pre_replace_deconv to support output_shape for transposed conv (#12335)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Improved OpenVINO debian packages (#12385)

* [GPU] implement lru_cache(#12349) (#12349)

* Fix memory leak issue

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
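The GPU plugin change above introduces an LRU cache (in C++). The core idea can be sketched in a few lines of Python (illustrative only, assuming a simple capacity-bounded key/value cache):

```python
from collections import OrderedDict

# Sketch of an LRU cache: evict the least-recently-used entry once
# capacity is exceeded; every hit marks the entry most recently used.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key, default=None):
        if key not in self._store:
            return default
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```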

* DOCS-fix_maths_formatting (#12402)

mathematical equation formatting issue fixed in POT readme for range supervision

* [GPU] Pass concat unit tests on DG2 (#12142)

* check optimized
* skip kernel compile when optimized

* GroupedGatherElimination short circuit (#12380)

* Disable GroupedGatherElimination in case of scalar inputs containing indices

* clang format

* [MO, POT] Top up upper bounds for TensorFlow and NumPy modules in all requirement files (#12191)

* [MO] Relax MO upper-bound requirements for TensorFlow and NumPy

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Just debug numpy version

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Pin upper-bounds for NumPy and TensorFlow modules in all reqs files

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update submodule dependency for open_model_zoo

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Install numpy module first

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update NumPy version in POT setup.py

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Extend telemetry tests with a set of possible solutions for events

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update NumPy module version for layer tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [GPU] Added common impl for optionals (#12366)

* [LPT] Correct a check for whether model is quantized (#12364)

Look inside subgraph operations, such as TensorIterator, Loop, If, etc.

* Update doc for AUTO and AUTO_BATCH (#12265)

* Update doc for AUTO and AUTO_BATCH

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* fix: incorrect fq type (#12234)

Co-authored-by: Wonju Lee <wonju.lee@intel.com>

* Implement workaround to convert non-frozen models using new TensorFlow frontend (#12386)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert "Merge branch 'master' into add-install-binaries-22/2"

This reverts commit f4d6f04636, reversing
changes made to e505e739e2.

* update comments

* update comments

* Update docs/install_guides/installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* update OpenCV installation

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* update uninstall wording

* add C++ redistributable to pypi.org pages

* update pypi.org pages and opencv for macOS

* update whats next

* add a note about long paths on Windows

* fix errors

* update CMake dependency

* fix formatting

* apply the same changes from Ilya's comments

* update uninstall, remove dev from pkg names

* update C++ requirements according to Ilya's requests

Signed-off-by: Min, Byungil <byungil.min@intel.com>
Signed-off-by: Andrew Park <andrew.park@intel.com>
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
Co-authored-by: Felix Dohyun Kim <tuxedcat@gmail.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
Co-authored-by: Paul Youngsoo Ahn <paul.y.ahn@intel.com>
Co-authored-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: RICKIE777 <ruiqi.yang@intel.com>
Co-authored-by: Bonhun Koo <bonhun.koo@intel.com>
Co-authored-by: avoskoboinyk-lohika <avoskoboinyk@lohika.com>
Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>
Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>
Co-authored-by: Szymon Irzabek <szymon.jakub.irzabek@intel.com>
Co-authored-by: Yaroslav Torzuk <yaroslav.torzuk2@altran.com>
Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
Co-authored-by: Tingqian Li <tingqian.li@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>
Co-authored-by: Min, Byungil <byungil.min@intel.com>
Co-authored-by: Andrew Kwangwoong Park <andrew.park@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
Co-authored-by: Luo Cheng <cheng.luo@intel.com>
Co-authored-by: zihan wu <zihan.wu@intel.com>
Co-authored-by: sheng.gui@intel.com <guisheng315@sina.com>
Co-authored-by: Tetiana Gubanova <tgubanova@lohika.com>
Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Kelvin Choi <kelvin.choi@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
Co-authored-by: si-eun-kim <sieun.kim@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Sungeun Kim <sungeun.kim@intel.com>
Co-authored-by: Jade Cho <jade.cho@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Co-authored-by: Xiping Yan <xiping.yan@intel.com>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Irina Efode <irina.efode@intel.com>
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: River Li <river.li@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Chen Xu <chen.xu@intel.com>
Co-authored-by: Egor Duplenskii <egor.duplensky@gmail.com>
Co-authored-by: Nadezhda Ageeva <nadezhda.ageeva@intel.com>
Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Co-authored-by: Konstantin Beluchenko <kostiantyn.bieliuchenko@altran.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
Co-authored-by: Roman Lyamin <Roman.Lyamin@intel.com>
Co-authored-by: almilosz <108654258+almilosz@users.noreply.github.com>
Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Chetverikov <anton.chetverikov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Bartek Szmelczynski <bartosz.szmelczynski@intel.com>
Co-authored-by: Wilson Seok <wilson.seok@intel.com>
Co-authored-by: Inhyuk Jo <andy.inhyuk.jo@intel.com>
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-09-06 12:19:12 +04:00
Anastasia Popova
37097c71cc Fixed error in indices processing. (#12869) 2022-09-02 14:28:00 +02:00
Roman Kazantsev
9b170e63fd [TF FE] Add Transpose Sinking for Prelu operation (#12832)
* [TF FE] Add Transpose Sinking for Prelu operation

Now it covers a case with a scalar slope.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add unit-tests for Transpose sinking of Prelu

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix non-scalar slope case

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-09-01 11:22:22 +04:00
Mateusz Tabaka
41fa6f360b Explicitly link onednn with tbb for tbb version in [2018,2019.4] (#12789) (#12837)
Ticket: 89800
2022-08-31 17:14:54 +03:00
Ilya Lavrenov
1e9da3f5de Added version generation as in CI (#12521) (#12831) (#12840)
* Added version generation as in CI (#12521)

* Allow CI_BUILD_NUMBER to define only build number
2022-08-31 17:40:41 +04:00
Roman Kazantsev
6987465875 [Python API] Replace deprecated NumPy type np.bool (#12786) (#12824)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-31 15:46:45 +04:00
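The `np.bool` replacement above follows NumPy's deprecation of its builtin-type aliases (deprecated in NumPy 1.20, removed in 1.24): `np.bool` becomes the Python builtin `bool`, or `np.bool_` where a NumPy scalar type is required. A sketch of a mechanical port step (illustrative, assuming a line-by-line source rewrite):

```python
import re

# Sketch: rewrite the deprecated np.bool alias to the builtin bool.
# The \b boundary leaves np.bool_ and np.bool8 untouched.
def fix_deprecated_bool(line):
    return re.sub(r"np\.bool\b", "bool", line)

print(fix_deprecated_bool("mask = np.zeros(4, dtype=np.bool)"))
# mask = np.zeros(4, dtype=bool)
```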
Gorokhov Dmitriy
a0b661a274 [CPU] Fixed MHA accuracy for mixed precision case (#12820) 2022-08-31 10:53:38 +04:00
Alina Kladieva
a466b3fea6 Port py checks changes (#12826)
* Run py checks on changes to yaml

* Try using setup-python@v4

* Use ubuntu20.04
2022-08-30 16:24:41 +04:00
Roman Kazantsev
cb6b1fe56f [TF FE] Fix BatchToSpace translator (#12815)
According to the specification, block_shape and crops inputs must have the same type

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-30 14:24:36 +04:00
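The constraint the BatchToSpace fix enforces — one shared integer element type for block_shape and crops — can be sketched as an input-alignment step (illustrative; element types are plain strings here, and `align_int_inputs` / `fake_cast` are hypothetical names standing in for a Convert node insertion):

```python
# Sketch: before creating BatchToSpace, align the element types of
# block_shape and crops by converting crops to block_shape's type.
def align_int_inputs(block_shape, crops, cast):
    """cast(tensor, type) stands in for inserting an opset Convert node."""
    if block_shape["type"] != crops["type"]:
        crops = cast(crops, block_shape["type"])
    return block_shape, crops

def fake_cast(tensor, new_type):
    return {**tensor, "type": new_type}

bs = {"type": "i64", "data": [1, 2, 2, 1]}
cr = {"type": "i32", "data": [0, 0, 0, 0]}
bs, cr = align_int_inputs(bs, cr, fake_cast)
print(cr["type"])  # i64
```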
Evgenya Stepyreva
66d3048598 Make model reshape and track batch (#12736) (#12777)
* Make model reshape and track batch (#12736)

* CVS-89672 Make model reshape and track batch

* Minor refactoring

* Changed mechanism of constant replacement to more mature

* Update src/common/transformations/include/transformations/smart_reshape/lstm_states_broadcast.hpp

* Update src/common/transformations/src/transformations/smart_reshape/lstm_states_broadcast.cpp

* Comments resolving

* Style and getting rid of asserts

* style

* Apply suggestions from code review
2022-08-26 16:53:47 +00:00
Roman Kazantsev
d2e06d4f25 [TF FE] Port fixes for Convolutional operations, ExtractImagePatches and MatrixDiag (#12764)
* [TF FE] Implement translators for ExtractImagePatches and MatrixDiag (#12593)

It allows converting the Inpaint model and inferring it correctly

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* [TF FE] Correct Deconvolution for NCHW layout

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert Deconvolution implementation and work around -1 for SS

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fixing Conv3DBackpropInputV2 operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2022-08-26 16:38:30 +04:00
Tomasz Dołbniak
72d7b518ca cltools update to 22.08 [2022/2] (#12690)
* cltools update to 22.08

* Hash update

* Hash update

* Adjustments for the new package
2022-08-26 15:28:40 +04:00
Sergey Shlyapnikov
41a404f290 [GPU] fix Transpose issue for ConvertColor with FakeQuantize. (#12645) (#12761)
Co-authored-by: Tang Wei <wei1.tang@intel.com>
Co-authored-by: Kurt Chen <kurt.chen@intel.com>
2022-08-26 12:29:21 +04:00
Ekaterina Aidova
abaa9e6404 [OMZ] update submodule (#12681)
* [OMZ] update submodule

* move submodule
2022-08-26 10:53:38 +04:00
Sergey Shlyapnikov
429c7265df [GPU] Implement NMS-9 operation (#11890) (#12760)
* Fix GPU NonMaxSuppression implementation

* Introduce Nms9 single layer tests

* Adapt internal NMS and GPU implementation for NMS9 implementation

* Adapt CPU implementation in GPU for NMS9

* Add blocked layouts support to NMS

* Add unit tests for blocked formats for NMS

* Fix boxes groups size for the small shapes

* Use ocl implementation for blocked layout input

* Fix templates typedefs to pass win build

* Fix second output to set data in correct format

Co-authored-by: Tetiana Gubanova <tgubanova@lohika.com>
2022-08-26 00:37:20 +04:00
Trawinski, Dariusz
7123433ce3 adjustments for 2022.2 release on rh platform (#12758) 2022-08-26 00:30:34 +04:00
Nikita Malinin
319e95e419 fix: incorrect fq type (#12234) (#12757)
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
(cherry picked from commit 0592ba3e8c)

Co-authored-by: Inhyuk Jo <andy.inhyuk.jo@intel.com>
2022-08-25 16:43:39 +00:00
Maxim Vafin
bafd45502b Fix issue with Squeeze with empty squeeze_dims (#12700)
* Fix issue with Squeeze with empty squeeze_dims

* Rework solution

* Apply code style

* Improve error logging

* Improve formatting

* Add more types

* Apply review feedback

* Add file which was forgotten
2022-08-25 18:30:19 +04:00
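The Squeeze fix above hinges on TensorFlow's semantics for an empty squeeze_dims: it means "remove every dimension of size 1", not "remove nothing". A shape-level sketch (illustrative, not the converter code):

```python
# Sketch of TF Squeeze shape semantics: empty squeeze_dims removes all
# size-1 dimensions; otherwise only the listed axes (which must be 1).
def squeeze_shape(shape, squeeze_dims=()):
    if not squeeze_dims:
        return [d for d in shape if d != 1]
    axes = {a % len(shape) for a in squeeze_dims}  # support negative axes
    for a in axes:
        assert shape[a] == 1, f"cannot squeeze dim {a} of size {shape[a]}"
    return [d for i, d in enumerate(shape) if i not in axes]

print(squeeze_shape([1, 3, 1, 5]))        # [3, 5]
print(squeeze_shape([1, 3, 1, 5], [0]))   # [3, 1, 5]
print(squeeze_shape([1, 3, 1, 5], [-2]))  # [1, 3, 5]
```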
Maxim Vafin
4ea602bc7e Use new preprocessing for legacy MO (#11302) (#12653)
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2022-08-25 18:25:03 +04:00
Liubov Talamanova
891f1c49bc [POT][OV 2022.2] Fixed insert_fake_quantize() with empty hw_config (#12678) 2022-08-25 12:23:38 +00:00
Sergey Shlyapnikov
a3f8cef198 [GPU] Shared memory optimization for network::execute_impl() call (#12748) 2022-08-25 15:49:56 +04:00
Artur Kulikowski
826a54dc20 Backport of #12713 "MO uses the same version of protobuf as other packages" (#12734)
* MO uses the same version of protobuf as other packages

* Restrict Protobuf to version >=3.18.1 and lower than 4.0.0
2022-08-25 13:17:14 +02:00
Yuan Xu
99b8c80677 update with external suggestions (#12726) 2022-08-25 11:45:31 +04:00
guozhong wang
f409e95768 do not remove cpu when bind buffer (#12556)
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-08-25 09:05:42 +03:00
Roman Kazantsev
d2f7816e6f [TF FE] Port changes for TF FE from the master branch (#12691)
* [TF FE] Add Transpose Sinking for additional unary-wise Operations

It helps fix performance degradation for MobileNet models

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add LogicalNot for Transpose sinking

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Support dynamic rank support for Convolutional and Pooling operations (#12661)

* [TF FE] Add dynamic rank support for Convolutional and Pooling operations

Refactor DepthwiseConv2D, AvgPool, and FusedBatchNorm operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue with rvalue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue with climit

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Skip duplication of Parameter nodes

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert changes in StridedSlice and add check for AvgPool operation type

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert the rest of changes for StridedSlice

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix translator for AvgPool: add pad mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Introduce helper default_op_checks

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor translators for Resize operations and correct Pooling (#12721)

* [TF FE] Refactor translators for Resize operations and correct Pooling

It allows converting the magenta_arbitrary-image-stylization model

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Align TF FE translator for Resize with the legacy frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Do minor fix for MaxPool

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-24 21:10:05 +03:00
Nikita Malinin
3980672082 Disable LWT tests (#12740) 2022-08-24 17:48:27 +00:00
Evgenya Stepyreva
6a20d1408e GroupedGatherElimination short circuit (#12380) (#12733)
* Disable GroupedGatherElimination in case of scalar inputs containing indices

* clang format

Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
2022-08-24 16:04:28 +03:00
Chen Xu
1e5fec7e25 [CPU] Reduce node improve performance for nspc layout (#12671) 2022-08-24 15:39:55 +04:00
Maxim Vafin
188746224c Add ScatterUpdate value infer (#12595) (#12714)
* Add ScatterUpdate value infer

* Add additional test case to ScatterUpdate tests
2022-08-24 14:51:03 +04:00
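ScatterUpdate value inference, as added above, means computing the op's result at graph-construction time when all inputs are constant. A minimal sketch of the axis-0 case (illustrative only):

```python
# Sketch of ScatterUpdate along axis 0: copy data, then overwrite the
# rows named by `indices` with the corresponding rows of `updates`.
def scatter_update(data, indices, updates):
    out = list(data)
    for i, idx in enumerate(indices):
        out[idx] = updates[i]
    return out

print(scatter_update([10, 20, 30, 40], [1, 3], [99, 77]))  # [10, 99, 30, 77]
```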
Luwei Zhou
aa1a607328 [CPU] Fix the strided slice issue when ellipsis_mask has redundant data. (#12705) 2022-08-24 09:43:08 +04:00
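The strided-slice fix above deals with ellipsis_mask handling. The core ellipsis expansion can be sketched in NumPy-style slice terms (illustrative, not the CPU plugin's implementation):

```python
# Sketch: an Ellipsis in a slice spec expands to enough full slices to
# cover the dimensions not listed explicitly; a redundant ellipsis
# (nothing left to cover) must expand to zero slices, not corrupt data.
def expand_ellipsis(spec, rank):
    if Ellipsis not in spec:
        return list(spec)
    i = spec.index(Ellipsis)
    fill = rank - (len(spec) - 1)  # dims the ellipsis has to cover
    return list(spec[:i]) + [slice(None)] * fill + list(spec[i + 1:])

print(expand_ellipsis((0, Ellipsis, 2), 4))
# [0, slice(None, None, None), slice(None, None, None), 2]
print(expand_ellipsis((0, Ellipsis, 2), 2))  # redundant ellipsis
# [0, 2]
```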
Artur Kulikowski
6fecdbca36 Backport of #12650 "Properly reading parameters with whitespaces from IR" (#12677)
* Add overrided method to generating vector of strings

* Trim the value from the left and right

* Add test to verify that output names are correctly read from IR

* Use spaces instead of tabs

* Add C++ tests for read model contains outputs with whitespaces

* Fix test for add output

* Remove python test
2022-08-23 21:29:04 +03:00
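The IR-whitespace fix above boils down to trimming tensor names on both sides when reading them from the IR, so stray whitespace cannot produce an unfindable name. A sketch (illustrative; the comma-separated attribute format and `parse_names` helper are assumptions for the example):

```python
# Sketch: names read from an IR attribute are stripped on both sides
# before lookup, so " output1 " and "output1" refer to the same tensor.
def parse_names(raw):
    return [name.strip() for name in raw.split(",") if name.strip()]

print(parse_names(" output1 ,\toutput2 "))  # ['output1', 'output2']
```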
Andrei Kochin
f87e00398d updated to convert b_fs_yx_fsv16 to o_is_yx_isv16 (#12630) (#12675)
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2022-08-23 15:46:54 +03:00
Maxim Vafin
4cdd8119da [MO] Improve layout help (#12535) (#12590)
* [MO] Improve layout help
2022-08-23 13:10:55 +02:00
Tomasz Dołbniak
714b1de678 GridSample op check test (#12586) 2022-08-23 12:06:11 +02:00
Zhen Zhao (Fiona)
0000550371 Update to add climits for ULLONG_MAX (#11958) (#12709)
Avoid the GCC compile error: ‘ULLONG_MAX’ was not declared in this scope

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-08-23 12:52:30 +04:00
Gorokhov Dmitriy
a6bfc0cf0e [CPU] Support MHA optimization (#12643)
* [CPU] Support MHA optimization

* [CPU] Extend pattern supported by MHA node

* [CPU] MHA: fixed int8 perf issue

Co-authored-by: Gu, Jianan <jianan.gu@intel.com>
2022-08-23 12:50:02 +04:00
Ilya Lavrenov
b4d18bb406 Don't use system tbb for 2022.2 (#12702) 2022-08-23 10:34:40 +04:00
yanlan song
4d9443eb0e do not call get_profiling in threads (#12635)
* do not call get_profiling in threads

Signed-off-by: fishbell <bell.song@intel.com>

* indent

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-08-23 13:50:52 +08:00
Ilya Lavrenov
d770b535fb Don't run CPU tests if some previous steps have failed (#12701) 2022-08-23 02:41:26 +04:00
Alina Kladieva
5a0dea4a46 Cherry-pick U22 adoption in github actions (#12550) (#12697)
* Cherry-pick U22 adoption in github actions

* More fixes for shellcheck

* More fixes for shellcheck

* Update .github/workflows/py_checks.yml

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-08-23 01:50:29 +04:00
Alina Kladieva
c8d57bbc77 Cherry-pick disable CUDA plugin building Azure (#12699) 2022-08-23 01:49:49 +04:00
Artyom Anokhov
a8f2365563 coverage.cmake: Added general target for collecting coverage counters for whole project (#12655) 2022-08-22 18:20:37 +04:00
Daniil Lyakhov
8a1d34d317 [POT] Gold References Update (#12646)
* [POT] Precommit reference update (#12304)

* WIP graph tests fixing

* Fix collectiors graph tests

* remove debug code

* Fix rebase

* eps update for scales tests

* Outputs for some reference models was changed

* Sanity reference metrics update for VNNI CI hosts

* Unused hyperopt dependency which broke python3.6 support is commented

* Minor comments fixes

* [POT] Finetuned model reference update (#12610)

* Finetuned model reference update

* Comment with AVX512 reference value

* [hotfix] pytest error of act_act example (#12093)

* [hotfix] pytest error of act_act example

* remove needless import

Co-authored-by: Bonhun Koo <bonhun.koo@intel.com>
2022-08-22 12:25:42 +00:00
Mateusz Bencer
7fe32c89ae [MO] Fix SSliceComplex transformation (#12538) 2022-08-20 12:15:08 +02:00
Maxim Vafin
067c21f110 [TF FE] Refactor constant reading to not use protobuf directly (#12518) (#12651)
* Refactor constant reading

* Remove needless code

* Implement compressed value reading

* Remove needless protobuf headers

* Remove commented code

* Remove unnecessary comment

* Apply review feedback

* Fix linux build

* Fix win build

* Fix copyright
2022-08-19 20:02:11 +04:00
Roman Kazantsev
aafabb41b8 [MO, POT] Top up upper bounds for TensorFlow and NumPy modules in all requirement files (#12191) (#12628)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-19 19:33:25 +04:00
Luo Cheng
e03fbd5c15 [CPU] Default enable avx512 f32 brgconv (#12620) 2022-08-19 17:59:15 +04:00
Xiake Sun
4e02bd2771 Add missing patchelf dependency for REHL8 for openvino runtime python wheel build (#12618) (#12625) 2022-08-18 22:19:22 +04:00
Mateusz Tabaka
8ca594f49a handle tbb library path like .../tbb/lib/intel64/gcc4.8 (#12606) 2022-08-18 13:42:19 +03:00
Artur Kulikowski
2c78fdb7c7 Fix: Refreshing of places after subgraph extraction (#12497) 2022-08-18 11:30:23 +02:00
Roman Kazantsev
544b3f8191 [TF FE] Port TF FE changes from master for integration with OVTF (#12575)
* [TF FE] Handle optional attributes for Convolutional operations (#12230)

* [TF FE] Handle optional attributes for Convolutional operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement LinSpace and BatchMatMul translators (#12271)

* [TF FE] Implement LinSpace and BatchMatMul translators

It helps to convert the STN model (from e2e testing) using the TensorFlow frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix BatchMatMul translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix LinSpace operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Fix conversion of NetVLAD model (#12328)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement translators for TensorFlow ConvBackpropInput operations (#12356)

* [TF FE] Implement ConvBackPropInput translators

Now the translators support the dynamic input_sizes attribute and different padding modes,
including the EXPLICIT mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix clang-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback and fix build issues

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback: check for input size

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix retrieving explicit_padding attribute

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code style

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Fix StridedSlice translator for new_axis vector size longer input rank (#12442)

* [TF FE] Fix StridedSlice translator for new_axis vector longer input rank

Currently, the new_axis vector is cut to the input rank, which leads to the loss of new axes.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Use int64 type in mask_to_vector function

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor translators for Conv2d and Conv3d (#12444)

It allows converting the CNN-Transformer model; the padding was previously incorrect.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement conversion for Attention OCR model (#12428)

* [TF FE] Implement conversion for Attention OCR model

The following scope of work is done to make Attention OCR convertible:
1. Refactored translators for BiasAdd, Slice, and ArgMax operations. Added a translation for the StopGradient operation.
2. The previous traversal algorithm for computing the topologically sorted node list was incorrect. It is now implemented based on the topologically_sorted function from core/graph_util.hpp.
3. Unsupported data types are now preliminarily converted to the undefined type so that they can be cut off.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor MaxPool operation translator for xj_feature model (#12485)

* [TF FE] Refactor MaxPool operation translator for xj_feature model

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct MaxPoolV2 since it has three inputs

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-17 16:16:48 +04:00
guozhong wang
5bd1e64a42 remove test case LoadNetwork_SingleIECore (#12597) 2022-08-17 09:33:19 +04:00
Xuejun Zhai
66257530e3 [Coverity Scan] sample issue from CS fix (#12509)
Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
Co-authored-by: River Li <river.li@intel.com>
2022-08-16 11:09:00 +08:00
Ekaterina Aidova
cfbf5a1808 [releases/2022/2] openvino-dev uses opencv-python-headless as default (#12559) 2022-08-15 21:58:06 +03:00
Ilya Lavrenov
4f03abe2ca [SAMPLES] Fix flake issues in Python speech sample (#12514) (#12529)
* Fix flake issues

* Add whitespace

* Add whitespaces in tests asserts

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-08-15 16:58:13 +04:00
Anastasia Popova
389c970c12 Update date changed. (#12531) 2022-08-15 09:08:57 +03:00
Ilya Lavrenov
29628a89b7 Tbb port (#12541)
* Fixes for TBB 2018-2019.4

* Fixed CVS-89248
2022-08-15 06:26:47 +04:00
Andrew Kwangwoong Park
f3adf63f6b [GPU] Disable TCs for OVClassHeteroExecutableNetworkGetMetricTest (#12433) (#12472)
Signed-off-by: Andrew Park <andrew.park@intel.com>

Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-08-13 21:56:23 +09:00
Mateusz Tabaka
c0212a361a [CPU] Add RDFT and IRDFT operators (#12290)
Tickets: 79178 and 79192

Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
2022-08-12 14:10:53 +02:00
Adrian Boguszewski
d8d5dfb34a Fixed NameError: name 'ARCH' is not defined on Raspberry Pi (#12421) (#12512)
(cherry picked from commit fe4e875586)
2022-08-12 12:43:25 +04:00
Mateusz Bencer
e628fae196 [GPU] Decompose NormalizeL2 for not supported cases (#12404) 2022-08-11 11:32:03 +02:00
Min, Byungil
f0f6896fc0 [GPU] Fix network loading time related to onednn engine creation (#12492)
+ The benchmark cache_dir option took longer to load the network than the cl_cache_dir env variable.
+ For clDNN execution, benchmark cache_dir created onednn_engine whenever the ONEDNN_ENABLE config was ON.
+ Creation of onednn_engine in ocl_engine is changed to on-demand.

Signed-off-by: Min, Byungil <byungil.min@intel.com>

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-08-11 09:32:20 +04:00
Tomasz Dołbniak
9163114290 Undefined Behavior sanitizer fixes [2022/2] (#12339)
* UBSan errors fix

* Cleanup
2022-08-11 08:03:21 +04:00
River Li
53a3cb377b [C 2.0 API]revert OV C 2.0 APIs in 2022.2 release branch (#12180)
* Revert "[C API] Enable hello_nv12_input_classification samples for C APIs of OV API 2.0 (#12031)"

This reverts commit 70d967ffb6.

* Revert "Add hello_classification_ov_c test (#11933)"

This reverts commit ebeb0a3802.

* Revert "Refine ov_partial_shape for OV API 2.0 C interface (#11891)"

This reverts commit ce5b2c6a45.

* Revert "Enable unit test for OV 2.0 C API (#11828)"

This reverts commit c4fdcafa70.

* Revert "OV 2.0 C API (#11700)"

This reverts commit 8faf8f2d89.
2022-08-11 07:16:26 +04:00
Alina Kladieva
ac805c66e1 Update Azure refs for 2022/2 (#12501) 2022-08-10 18:21:11 +00:00
Evgenya Stepyreva
c9afc5a5c1 Auto Batch: if disabled during cmake (#12382) (#12479) 2022-08-10 09:51:26 +00:00
River Li
d328b00e48 [CC]Fix CC issue for transformation (#12292) (#12489)
* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue to make it work

* Fix matcher name missing issue
2022-08-10 11:36:51 +04:00
Wilson Seok
1788c86943 change to node.weights() from weights_memory(0) (#12407) 2022-08-10 16:18:58 +09:00
Ilya Churaev
32713f744d Use static pointers to frontend libraries (#12235) (#12471)
* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- not use static FEM in ie network reader
- add main for gtest which can use manifest file to filter tests

* Move library pointers map to manger impl
- add to manger impl method to make frontend from loaded plugin

* Add shutdown function to ov namespace
it cleans the static resources

* Revert changes related to linking mian for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

(cherry picked from commit a8395bd207)

* Added C bindings

(cherry picked from commit d2c9ddc263)

* Move frontend lib close test to ieFunctTest
- moved so as not to introduce a new test binary and CI modifications;
  the frontend tests use a dynamically linked frontend lib, which is loaded
  on test application start and masks the lib close tests
- remove gtest_main_manifest as not required now
- add ov::shutdown test to expect application crash

* Fix lib_close test
- remove not get_disabled_tests from utils
- revert CMake file formating

* Fix get model path in lib close tests

* Skip frontend lib close tests if static lib build

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
2022-08-10 09:05:12 +04:00
Andrew Kwangwoong Park
ea302afb47 Update pre_replace_deconv to support output_shape for transposed conv (#12418)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-08-10 10:37:51 +09:00
Tomasz Dołbniak
5871d5dc38 OpenCV build switched off by default [2022/2] (#12213) 2022-08-09 14:45:52 +02:00
Ilya Lavrenov
125adeaf29 CVS-88328: Ported fixes for TBB (#12461)
* Fixed WIndows backslash paths (#12250)

* Install user provided TBB as well (#12260)

* Fixes for cases when TBB_DIR env var is set (#12266)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* Xiaoxia/onetbb old version (#12303)

* support oneTBB old version

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* Trying to fix CVS-85530 (#12455)

Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-08-08 23:40:37 +00:00
Tomasz Dołbniak
a1bd02e633 Friendly names fix for ONNX models (#12412) 2022-08-08 21:56:03 +02:00
Trawinski, Dariusz
3068b3823c changes needed to rhel8 certification (#12242)
* changes needed to rhel8 certification

* preserve opencl drivers in version 21

* updated comment about supported RH versions
2022-08-08 11:36:11 +04:00
Chen Peter
71b97b69a8 Add notes for AUTO hints (#12076)
* Update doc for AUTO and AUTO_BATCH

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update per the comments

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Move default hint to THROUGHPUT section

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-08-04 14:26:21 +08:00
Katarzyna Mitrus
e9030cca21 Update opset.md with opset9 (#12171) 2022-07-26 14:13:58 +02:00
Sebastian Golebiewski
c591d773d4 Porting to 2022.2 - DOCS-code-reference-css-style-change (#12198)
Porting  the following PR:

https://github.com/openvinotoolkit/openvino/pull/12109/

to 2022.2
2022-07-26 14:13:13 +02:00
Sebastian Golebiewski
5f4999117d Porting to 2022.2 - update -readme for CPP and Python benchmark (#12246)
Porting #11961 to 2022.2
2022-07-26 14:12:38 +02:00
Ilya Churaev
c9f9795d29 Fixed newAPI for case if core was removed (#12208)
* Fixed newAPI for case if core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-07-23 11:53:26 +00:00
Maciej Smyk
28922e2080 Install guide update for 2022.2 (#12222)
* Toctree

* Fixing reference

* Linux

* Windows

* macOS

* Raspbian-OS

* Whats-Next-Section

* References

* HDDL-MYRIAD

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Runtime-fix

* Update docs/install_guides/installing-openvino-overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update installing-openvino-overview.md

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update configurations-for-intel-gpu.md

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

* Delete installing-openvino-images.md

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

* Update installing-model-dev-tools.md

* Update configurations-for-intel-gpu.md

* Revert "Update configurations-for-intel-gpu.md"

This reverts commit f5294de324.

* Revert "Update installing-model-dev-tools.md"

This reverts commit 9109a916d6.

* ID-fix

* Update installing-openvino-macos.md

Co-authored-by: sgolebiewski-intel <101244613+sgolebiewski-intel@users.noreply.github.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-07-21 13:55:28 +00:00
Tomasz Dołbniak
4a88aa0493 ipython removed from the dependencies of docs (#12193) 2022-07-20 16:39:43 +02:00
Kelvin Choi
3a72200f92 [GPU] Add reorder from i32 to f32 for max-pooling/conv/fc which doesn't support i32 (#12144) 2022-07-20 22:14:22 +09:00
Egor Duplenskii
fdae95a769 [CPU] Explicitly enable DNNL_VERBOSE only in case of CPU_DEBUG_CAPS (#12151)
and rely on oneDNN default behavior otherwise
2022-07-20 14:07:42 +04:00
Sebastian Golebiewski
483f38e6d8 Porting OV Runtime to 2022.2 (#12192)
Porting OV Runtime (PR #11658) to 2022.2

https://github.com/openvinotoolkit/openvino/pull/11658/
2022-07-20 11:14:45 +02:00
River Li
c144702d8b Restore static fem in 2022.2 (#12223)
* Restore FEM to be static instance

* Restore frontend manager in ie_read_network.cpp
2022-07-20 10:29:52 +04:00
Yuan Xu
79db96d61e Pypi org updates 22/2 (#12210)
* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2
2022-07-19 06:26:32 +00:00
Chenhu Wang
123f8e62bf [DOC][CPU] Denormals optimization document (#12132) 2022-07-18 16:37:44 +04:00
Taylor Yeonbok Lee
8c80f9ff58 [GPU] optimize permute_ref (#12160)
* change memory access pattern of fsv layout for permute

* Fix permute_ref to process F first only when (bf...) => (b...f)

* Refactor

Co-authored-by: si-eun-kim <sieun.kim@intel.com>
2022-07-18 18:26:00 +09:00
Eddy Kim
de5e9bb397 Revert "[GPU] Pass activation unit tests on DG2 (#11969)" (#12165)
This reverts commit 3334e8933c.
2022-07-18 18:25:45 +09:00
zihan wu
32f800c6a6 [CPU] polish onednn cc readme (#12114) (#12176) 2022-07-15 16:36:31 +00:00
Min, Byungil
b492f98d30 [GPU] modify fusing condition for reduce (#12147)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-07-15 16:07:43 +09:00
Andrew Kwangwoong Park
9c49b71c11 Enable tensor offset to GemmKernelRef for input padding support (#12140)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-07-15 16:01:35 +09:00
Tomasz Dołbniak
02330bc11c Zlib update to 1.12.2 (#12129) 2022-07-14 14:40:23 +02:00
Luo Cheng
4412e1ddfa [CPU] revert pr 11990 and enable brgconv avx512 on SPR by default (#12134) 2022-07-14 14:10:51 +04:00
Tingqian Li
b7b3f0ab4a move cpu_dump_check into CPU plugin's tools folder (#12123) 2022-07-13 13:38:17 +08:00
Paul Youngsoo Ahn
0621e8cf28 [GPU] Fix gather data type issue (#12089) (#12089) 2022-07-12 19:01:07 +09:00
Tomasz Dołbniak
9d6d84088f Virtual destructor for the base class (#12103) 2022-07-12 11:55:41 +02:00
Eddy Kim
a63dad6fdd updated to fuse activation in eltwise_vload8 (#12092) 2022-07-12 18:51:48 +09:00
Wang, Yang
bbc1c26750 setting tput as the default performance mode only for AUTO, excluding MULTI plugin. (#12090)
Signed-off-by: ywang2 <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-10 15:16:59 +08:00
985 changed files with 20014 additions and 22229 deletions


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
@@ -13,7 +26,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
jobs:
- job: android_arm64
@@ -110,11 +123,11 @@ jobs:
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_OPENCV=OFF
-DENABLE_TESTS=ON
-DENABLE_SAMPLES=ON
-DENABLE_INTEL_MYRIAD=OFF
-DBUILD_java_api=ON
-DBUILD_cuda_plugin=OFF
-DTHREADING=SEQ
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
@@ -13,13 +26,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2022/2
jobs:
- job: Lin
@@ -161,6 +174,7 @@ jobs:
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DBUILD_cuda_plugin=OFF
$(REPO_DIR)
workingDirectory: $(BUILD_DIR)
@@ -214,7 +228,6 @@ jobs:
set -e
mkdir -p $(INSTALL_DIR)/opencv/
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
cp -R $(REPO_DIR)/temp/opencv_4.5.2_ubuntu20/opencv/* $(INSTALL_DIR)/opencv/
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
@@ -332,7 +345,7 @@ jobs:
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/cpuFuncTests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
export DATA_PATH=$(MODELS_PATH)
@@ -341,13 +354,6 @@ jobs:
displayName: 'IE CAPITests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) && $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
displayName: 'OV CAPITests'
continueOnError: false
- task: CMake@1
inputs:
cmakeArgs: >


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
@@ -13,7 +26,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
jobs:
- job: linux_arm64
@@ -127,7 +140,6 @@ jobs:
-GNinja
-DVERBOSE_BUILD=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_OPENCV=OFF
-DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
-DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
-DENABLE_PYTHON=ON
@@ -143,6 +155,7 @@ jobs:
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_SAMPLES=ON
-DBUILD_java_api=OFF
-DBUILD_cuda_plugin=OFF
-DENABLE_INTEL_MYRIAD=OFF
-DTHREADING=SEQ
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: LinCC
@@ -21,7 +34,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build


@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
jobs:
- job: Lin


@@ -4,7 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
# ref: releases/2022/2
jobs:
- job: Lin_lohika


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: OpenVINO_ONNX_CI


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: onnxruntime
@@ -95,7 +108,6 @@ jobs:
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_INTEL_MYRIAD_COMMON=OFF
-DENABLE_INTEL_GNA=OFF
-DENABLE_OPENCV=OFF
-DENABLE_CPPLINT=OFF
-DENABLE_TESTS=OFF
-DENABLE_INTEL_CPU=ON


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
@@ -13,13 +26,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2022/2
jobs:
- job: Mac
@@ -101,7 +114,7 @@ jobs:
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
export CXX=g++
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache $(REPO_DIR)
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache -DBUILD_cuda_plugin=OFF $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -145,7 +158,6 @@ jobs:
set -e
mkdir -p $(INSTALL_DIR)/opencv/
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
cp -R $(REPO_DIR)/temp/opencv_4.5.2_osx/opencv/* $(INSTALL_DIR)/opencv/
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
@@ -212,14 +224,6 @@ jobs:
continueOnError: false
enabled: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) && $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
displayName: 'IE CAPITests'
continueOnError: false
enabled: false
- task: PublishTestResults@2
condition: always()
inputs:


@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
@@ -13,13 +26,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2022/2
jobs:
- job: Win
@@ -32,7 +45,7 @@ jobs:
maxParallel: 2
# About 150% of total time
timeoutInMinutes: 270 #Temporary change
timeoutInMinutes: 270 #Temporary change
pool:
name: WIN_VMSS_VENV_D8S_WU2
@@ -135,7 +148,7 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DBUILD_cuda_plugin=OFF $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -195,7 +208,7 @@ jobs:
displayName: 'Samples Smoke Tests'
continueOnError: false
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
@@ -276,7 +289,7 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\cpuFuncTests --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
set DATA_PATH=$(MODELS_PATH)
@@ -285,13 +298,6 @@ jobs:
displayName: 'IE CAPITests'
continueOnError: false
- script: |
set DATA_PATH=$(MODELS_PATH)
set MODELS_PATH=$(MODELS_PATH)
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
displayName: 'OV CAPITests'
continueOnError: false
- task: PublishTestResults@2
condition: always()
inputs:


@@ -1,11 +1,24 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: WinCC
@@ -21,7 +34,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: $(WORK_DIR)\build


@@ -59,7 +59,6 @@ RUN cmake .. \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DENABLE_INTEL_MYRIAD_COMMON=OFF \
-DENABLE_INTEL_GNA=OFF \
-DENABLE_OPENCV=OFF \
-DENABLE_CPPLINT=OFF \
-DENABLE_NCC_STYLE=OFF \
-DENABLE_TESTS=OFF \

.gitattributes

@@ -64,6 +64,7 @@
*.gif filter=lfs diff=lfs merge=lfs -text
*.vsdx filter=lfs diff=lfs merge=lfs -text
*.bmp filter=lfs diff=lfs merge=lfs -text
*.svg filter=lfs diff=lfs merge=lfs -text
#POT attributes
tools/pot/tests/data/test_cases_refs/* filter=lfs diff=lfs merge=lfs -text


@@ -4,7 +4,7 @@ on: [push, pull_request]
jobs:
Build_Doc:
if: github.repository == 'openvinotoolkit/openvino'
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v2
@@ -17,11 +17,11 @@ jobs:
set -e
# install doc dependencies
sudo apt update
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive liblua5.2-0
cd docs
python -m pip install -r requirements.txt --user
python3 -m pip install -r requirements.txt --user
cd openvino_sphinx_theme
python setup.py install --user
python3 setup.py install --user
cd ../..
# install doxyrest
wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz
@@ -43,7 +43,7 @@ jobs:
run: |
mkdir build
cd build
cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DNGRAPH_PYTHON_BUILD_ENABLE=ON -DCMAKE_BUILD_TYPE=Release ..
cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DCMAKE_BUILD_TYPE=Release ..
- name: Build doc
run: |


@@ -3,7 +3,7 @@ on: [pull_request]
jobs:
Checks:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v2


@@ -48,7 +48,7 @@ jobs:
path: build/code_style_diff.diff
ShellCheck:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -73,7 +73,7 @@ jobs:
working-directory: build
NamingConventionCheck:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -82,8 +82,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11
sudo apt --assume-yes install libclang-12-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt


@@ -3,7 +3,7 @@ on: [push, pull_request]
jobs:
Check_Files_Size:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2


@@ -9,7 +9,7 @@ on:
jobs:
Pylint-UT:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:


@@ -6,13 +6,15 @@ on:
paths:
- 'src/bindings/python/**'
- 'samples/python/**'
- '.github/workflows/py_checks.yml'
pull_request:
paths:
- 'src/bindings/python/**'
- 'samples/python/**'
- '.github/workflows/py_checks.yml'
jobs:
linters:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Code checkout
uses: actions/checkout@v2
@@ -121,4 +123,4 @@ jobs:
run: python -m bandit -r ./ -f screen
working-directory: src/bindings/python/src/compatibility/openvino

.gitignore

@@ -1,6 +1,7 @@
# build/artifact dirs
_*
[Bb]uild*/
cmake-build*
# but ensure we don't skip __init__.py and __main__.py
!__init__.py


@@ -2,7 +2,7 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Stable release](https://img.shields.io/badge/version-2022.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
@@ -34,24 +34,24 @@ OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from [Open Model Zoo], along with 100+ open
source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
### Components
* [OpenVINO™ Runtime] - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
* [core](https://github.com/openvinotoolkit/openvino/tree/master/src/core) - provides the base API for model representation and modification.
* [inference](https://github.com/openvinotoolkit/openvino/tree/master/src/inference) - provides an API to infer models on device.
* [transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/low_precision_transformations) - contains the set of transformations which are used in low precision models
* [bindings](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings) - contains all awailable OpenVINO bindings which are maintained by OpenVINO team.
* [c](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/c) - provides C API for OpenVINO™ Runtime
* [python](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](https://github.com/openvinotoolkit/openvino/tree/master/src/plugins) - contains OpenVINO plugins which are maintained in open-source by OpenVINO team. For more information please taje a look to the [list of supported devices](#supported-hardware-matrix).
* [Frontends](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends) - contains available OpenVINO frontends which allow to read model from native framework format.
* [core](./src/core) - provides the base API for model representation and modification.
* [inference](./src/inference) - provides an API to infer models on the device.
* [transformations](./src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](./src/common/low_precision_transformations) - contains the set of transformations that are used in low precision models
* [bindings](./src/bindings) - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
* [c](./src/bindings/c) - C API for OpenVINO™ Runtime
* [python](./src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](./src/plugins) - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the [list of supported devices](#supported-hardware-matrix).
* [Frontends](./src/frontends) - contains available OpenVINO frontends that allow reading models from the native framework format.
* [Model Optimizer] - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
* [Post-Training Optimization Tool] - is designed to accelerate the inference of deep learning models by applying special methods without model retraining or fine-tuning, for example, post-training 8-bit quantization.
* [Samples] - applications on C, C++ and Python languages which shows basic use cases of OpenVINO usages.
* [Samples] - applications in C, C++ and Python languages that show basic OpenVINO use cases.
## Supported Hardware matrix
@@ -69,37 +69,37 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
<tr>
<td>VPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="./src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td>Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X</td>
</tr>
</tbody>
</table>
Also OpenVINO™ Toolkit contains several plugins which should simplify to load model on several hardware devices:
OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
<table>
<thead>
<tr>
@@ -110,23 +110,23 @@ Also OpenVINO™ Toolkit contains several plugins which should simplify to load
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
</tbody>
@@ -140,11 +140,11 @@ By contributing to the project, you agree to the license and copyright terms the
### User documentation
The latest documentation for OpenVINO™ Toolkit is availabe [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all important information which could be needed if you create an application which is based on binary OpenVINO distribution or own OpenVINO version without source code modification.
The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all the important information you may need to create an application based on binary OpenVINO distribution or own OpenVINO version without source code modification.
### Developer documentation
[Developer documentation](#todo-add) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
[Developer documentation](./docs/dev/index.md) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
## Tutorials
@@ -161,15 +161,15 @@ The list of OpenVINO tutorials:
## System requirements
The full information about system requirements depends on platform and available in section `System requirement` on dedicated pages:
- [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html)
- [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows.html)
- [macOS](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_macos.html)
- [Raspbian](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_raspbian.html)
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
Please take a look to [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about OpenVINO build process.
See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.
## How to contribute
@@ -177,13 +177,13 @@ See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Get a support
Please report questions, issues and suggestions using:
Report questions, issues and suggestions, using:
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
## See also
## Additional Resources
* [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
* [OpenVINO Storage](https://storage.openvinotoolkit.org/)
@@ -194,15 +194,15 @@ Please report questions, issues and suggestions using:
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - An alternative, web-based version of OpenVINO designed to make production of pretrained deep learning models significantly easier.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [DL Workbench](https://docs.openvino.ai/2022.2/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
---
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2022.2/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2022.2/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples


@@ -84,6 +84,11 @@ ie_coverage_extract(INPUT "openvino" OUTPUT "core"
ie_coverage_genhtml(INFO_FILE "core"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
ie_coverage_extract(INPUT "openvino" OUTPUT "openvino_all"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/*" "${OV_COVERAGE_BASE_DIRECTORY}/docs/template_plugin/*")
ie_coverage_genhtml(INFO_FILE "openvino_all"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
if(ENABLE_OV_ONNX_FRONTEND)
ie_coverage_extract(INPUT "openvino" OUTPUT "onnx"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/frontends/onnx/*"


@@ -151,6 +151,9 @@ function(ov_download_tbb)
if(EXISTS "${TBBROOT}/lib/cmake/TBB/TBBConfig.cmake")
# oneTBB case
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/TBB" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib/cmake/tbb/TBBConfig.cmake")
# oneTBB release package version less than 2021.6.0
update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/tbb" "Path to TBB cmake folder")
elseif(EXISTS "${TBBROOT}/lib64/cmake/TBB/TBBConfig.cmake")
# 64-bits oneTBB case
update_deps_cache(TBB_DIR "${TBBROOT}/lib64/cmake/TBB" "Path to TBB cmake folder")
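The hunk above inserts a fallback for oneTBB release packages older than 2021.6.0, which ship `TBBConfig.cmake` under a lowercase `lib/cmake/tbb` folder. A rough Python sketch of the same first-match probe (the probing helper and the throwaway directory are illustrative, not part of the build system):

```python
from pathlib import Path
import tempfile

# Candidate TBBConfig.cmake locations, in the order the CMake hunk checks them.
CANDIDATE_LAYOUTS = [
    "lib/cmake/TBB",    # oneTBB case
    "lib/cmake/tbb",    # oneTBB release package version less than 2021.6.0
    "lib64/cmake/TBB",  # 64-bit oneTBB case
]

def find_tbb_config(tbbroot: Path):
    """Return the first layout directory containing TBBConfig.cmake, or None."""
    for layout in CANDIDATE_LAYOUTS:
        candidate = tbbroot / layout
        if (candidate / "TBBConfig.cmake").exists():
            return candidate
    return None

# Demonstrate with a throwaway directory mimicking an old oneTBB package.
with tempfile.TemporaryDirectory() as root:
    cfg_dir = Path(root) / "lib" / "cmake" / "tbb"
    cfg_dir.mkdir(parents=True)
    (cfg_dir / "TBBConfig.cmake").touch()
    found = find_tbb_config(Path(root))
```

The ordering matters: a modern `lib/cmake/TBB` layout wins over the legacy lowercase one when both exist.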


@@ -28,7 +28,6 @@ if(ENABLE_CLANG_FORMAT AND NOT TARGET clang_format_check_all)
add_custom_target(clang_format_fix_all)
set_target_properties(clang_format_check_all clang_format_fix_all
PROPERTIES FOLDER clang_format)
set(CLANG_FORMAT_ALL_OUTPUT_FILES "" CACHE INTERNAL "All clang-format output files")
endif()
function(add_clang_format_target TARGET_NAME)
@@ -88,14 +87,10 @@ function(add_clang_format_target TARGET_NAME)
"[clang-format] ${source_file}"
VERBATIM)
list(APPEND all_input_sources "${source_file}")
list(APPEND all_output_files "${output_file}")
endforeach()
set(CLANG_FORMAT_ALL_OUTPUT_FILES
${CLANG_FORMAT_ALL_OUTPUT_FILES} ${all_output_files}
CACHE INTERNAL
"All clang-format output files")
add_custom_target(${TARGET_NAME}
DEPENDS ${all_output_files}
COMMENT "[clang-format] ${TARGET_NAME}")
@@ -104,11 +99,11 @@ function(add_clang_format_target TARGET_NAME)
COMMAND
"${CMAKE_COMMAND}"
-D "CLANG_FORMAT=${CLANG_FORMAT}"
-D "INPUT_FILES=${CLANG_FORMAT_FOR_SOURCES}"
-D "INPUT_FILES=${all_input_sources}"
-D "EXCLUDE_PATTERNS=${CLANG_FORMAT_EXCLUDE_PATTERNS}"
-P "${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
DEPENDS
"${CLANG_FORMAT_FOR_SOURCES}"
"${all_input_sources}"
"${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
COMMENT
"[clang-format] ${TARGET_NAME}_fix"


@@ -9,26 +9,46 @@ endif()
set(ncc_style_dir "${IEDevScripts_DIR}/ncc_naming_style")
set(ncc_style_bin_dir "${CMAKE_CURRENT_BINARY_DIR}/ncc_naming_style")
# try to find_package(Clang QUIET)
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
# installed, then find_package fails with errors even in QUIET mode
configure_file("${ncc_style_dir}/try_find_clang.cmake"
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
execute_process(
COMMAND
"${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
-B "${ncc_style_bin_dir}/build"
RESULT_VARIABLE clang_find_result
OUTPUT_VARIABLE output_var
ERROR_VARIABLE error_var)
# find python3
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
find_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
if(PYTHON_VERSION_MINOR EQUAL 6)
set(clang_version 10)
elseif(PYTHON_VERSION_MINOR EQUAL 8)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
endif()
if(ENABLE_NCC_STYLE)
# try to find_package(Clang QUIET)
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
# installed, then find_package fails with errors even in QUIET mode
configure_file("${ncc_style_dir}/try_find_clang.cmake"
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
execute_process(
COMMAND "${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
-B "${ncc_style_bin_dir}/build"
RESULT_VARIABLE clang_find_result
OUTPUT_VARIABLE output_var
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install `apt-get install clang-${clang_version} libclang-${clang_version}-dev` package (required for ncc naming style check)")
message(TRACE "find_package(Clang) output: ${output_var}")
message(TRACE "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
endif()
endif()
# Since we were able to find_package(Clang) in a separate process
# let's try to find in current process
if(ENABLE_NCC_STYLE)
@@ -37,19 +57,11 @@ if(ENABLE_NCC_STYLE)
get_target_property(libclang_location libclang LOCATION)
message(STATUS "Found libclang: ${libclang_location}")
else()
message(WARNING "libclang is not found (required for ncc naming style check)")
message(WARNING "libclang-${clang_version} is not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
endif()
# find python3
find_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
# check python requirements_dev.txt
set(ncc_script_py "${ncc_style_dir}/ncc/ncc.py")


@@ -1,2 +1,5 @@
clang==11.0
clang==10.0.1; python_version == '3.6'
clang==12.0.1; python_version == '3.8'
clang==12.0.1; python_version == '3.9'
clang==14.0; python_version == '3.10'
pyyaml
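The CMake hunk above picks a libclang major version from the Python minor version, and the requirements file pins the matching `clang` wheel via environment markers. A minimal sketch of that mapping (the helper function is hypothetical; version pairs are copied from the hunks):

```python
# Python minor version -> clang major version, as in the ncc_naming_style hunk.
CLANG_FOR_PYTHON_MINOR = {6: 10, 8: 12, 9: 12, 10: 14}

def clang_version_for(python_minor: int):
    """Return the clang major version for a Python 3.x minor, or None if unmapped."""
    return CLANG_FOR_PYTHON_MINOR.get(python_minor)
```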


@@ -6,6 +6,17 @@ include(CMakeParseArguments)
find_host_program(shellcheck_PROGRAM NAMES shellcheck DOC "Path to shellcheck tool")
if(shellcheck_PROGRAM)
execute_process(COMMAND "${shellcheck_PROGRAM}" --version
RESULT_VARIABLE shellcheck_EXIT_CODE
OUTPUT_VARIABLE shellcheck_VERSION_STRING)
if(shellcheck_EXIT_CODE EQUAL 0)
if(shellcheck_VERSION_STRING MATCHES "version: ([0-9]+)\.([0-9]+).([0-9]+)")
set(shellcheck_VERSION "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}" CACHE STRING "shellcheck version")
endif()
endif()
endif()
function(ie_shellcheck_process)
if(NOT shellcheck_PROGRAM)
message(WARNING "shellcheck tool is not found")
@@ -33,7 +44,7 @@ function(ie_shellcheck_process)
set(output_file "${output_file}.txt")
get_filename_component(script_name "${script}" NAME)
add_custom_command(OUTPUT ${output_file}
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D IE_SHELLCHECK_PROGRAM=${shellcheck_PROGRAM}
-D IE_SHELL_SCRIPT=${script}
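The hunk above extracts the shellcheck version from `shellcheck --version` output with a CMake regex (note the CMake pattern leaves its second dot unescaped; this sketch escapes both). A rough Python equivalent, with a made-up sample of the tool's output:

```python
import re

# Mirrors the CMake pattern: version: ([0-9]+)\.([0-9]+).([0-9]+)
VERSION_RE = re.compile(r"version: (\d+)\.(\d+)\.(\d+)")

def parse_shellcheck_version(output: str):
    """Extract 'major.minor.patch' from `shellcheck --version` output, or None."""
    m = VERSION_RE.search(output)
    return ".".join(m.groups()) if m else None

sample = "ShellCheck - shell script analysis tool\nversion: 0.8.0\n"
print(parse_shellcheck_version(sample))  # -> 0.8.0
```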


@@ -19,7 +19,7 @@ function (commitHash VAR)
message(FATAL_ERROR "repo_root is not defined")
endif()
execute_process(
COMMAND git rev-parse HEAD
COMMAND git rev-parse --short=11 HEAD
WORKING_DIRECTORY ${repo_root}
OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
@@ -28,13 +28,19 @@ endfunction()
macro(ov_parse_ci_build_number)
set(OpenVINO_VERSION_BUILD 000)
set(IE_VERSION_BUILD ${OpenVINO_VERSION_BUILD})
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*")
set(OpenVINO_VERSION_MAJOR ${CMAKE_MATCH_1})
set(OpenVINO_VERSION_MINOR ${CMAKE_MATCH_2})
set(OpenVINO_VERSION_PATCH ${CMAKE_MATCH_3})
set(OpenVINO_VERSION_BUILD ${CMAKE_MATCH_4})
set(the_whole_version_is_defined_by_ci ON)
elseif(CI_BUILD_NUMBER MATCHES "^[0-9]+$")
set(OpenVINO_VERSION_BUILD ${CI_BUILD_NUMBER})
# only build number is defined by CI
set(the_whole_version_is_defined_by_ci OFF)
elseif(CI_BUILD_NUMBER)
message(FATAL_ERROR "Failed to parse CI_BUILD_NUMBER which is ${CI_BUILD_NUMBER}")
endif()
if(NOT DEFINED repo_root)
@@ -95,21 +101,33 @@ macro(ov_parse_ci_build_number)
set(OpenVINO_VERSION "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${OpenVINO_VERSION} (Build ${OpenVINO_VERSION_BUILD})")
if(NOT the_whole_version_is_defined_by_ci)
# create CI_BUILD_NUMBER
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
if(NOT GIT_BRANCH STREQUAL "master")
set(GIT_BRANCH_POSTFIX "-${GIT_BRANCH}")
endif()
set(CI_BUILD_NUMBER "${OpenVINO_VERSION}-${OpenVINO_VERSION_BUILD}-${GIT_COMMIT_HASH}${GIT_BRANCH_POSTFIX}")
unset(GIT_BRANCH_POSTFIX)
unset(GIT_BRANCH)
unset(GIT_COMMIT_HASH)
else()
unset(the_whole_version_is_defined_by_ci)
endif()
endmacro()
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
else()
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
set(custom_build "custom_${GIT_BRANCH}_${GIT_COMMIT_HASH}")
set(CI_BUILD_NUMBER "${custom_build}")
endif()
# provides OpenVINO version
# 1. If CI_BUILD_NUMBER is defined, parses this information
# 2. Otherwise, parses openvino/core/version.hpp
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
endif()
ov_parse_ci_build_number()
macro (addVersionDefines FILE)
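The `ov_parse_ci_build_number` hunk above distinguishes a full CI build number (`major.minor.patch-build-suffix`) from a bare build number. A Python sketch of that branching (the function name and sample strings are illustrative, not from the repository):

```python
import re

# Mirrors the CMake regex "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*"
CI_BUILD_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)-(\d+)-.*")

def parse_ci_build_number(ci_build_number: str):
    """Return (major, minor, patch, build); unknown fields are None."""
    m = CI_BUILD_RE.match(ci_build_number)
    if m:
        # the whole version is defined by CI
        return tuple(int(g) for g in m.groups())
    if ci_build_number.isdigit():
        # only the build number is defined by CI
        return (None, None, None, int(ci_build_number))
    return None  # the CMake code raises FATAL_ERROR for any other non-empty value

print(parse_ci_build_number("2022.2.0-7713-custom"))  # -> (2022, 2, 0, 7713)
```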


@@ -126,7 +126,7 @@ ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS
ie_dependent_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON "NOT MINGW" OFF)
ie_option (ENABLE_OPENCV "enables OpenCV" ON)
ie_option (ENABLE_OPENCV "enables OpenCV" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
@@ -136,16 +136,7 @@ ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are link
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
get_linux_name(LINUX_OS_NAME)
if(LINUX_OS_NAME MATCHES "^Ubuntu [0-9]+\.[0-9]+$" AND NOT DEFINED ENV{TBBROOT})
# Debian packages are enabled on Ubuntu systems
# so, system TBB can be tried for usage
set(ENABLE_SYSTEM_TBB_DEFAULT ON)
else()
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB;LINUX" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" OFF "THREADING MATCHES TBB;LINUX" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)


@@ -150,13 +150,23 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
if(NOT enable_system_tbb)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
if(DEFINED ENV{TBBROOT})
file(TO_CMAKE_PATH $ENV{TBBROOT} ENV_TBBROOT)
endif()
if(DEFINED ENV{TBB_DIR})
file(TO_CMAKE_PATH $ENV{TBB_DIR} ENV_TBB_DIR)
endif()
set(find_package_tbb_extra_args
CONFIG
PATHS
# oneTBB case exposed via export TBBROOT=<custom TBB root>
"$ENV{TBBROOT}/lib64/cmake/TBB"
"$ENV{TBBROOT}/lib/cmake/TBB"
# "$ENV{TBB_DIR}"
"${ENV_TBBROOT}/lib64/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/tbb"
"${ENV_TBB_DIR}"
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
"${TBBROOT}/cmake"
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO


@@ -2,7 +2,10 @@
Once you have a model that meets both OpenVINO™ and your requirements, you can choose among several ways of deploying it with your application:
* [Run inference and develop your app with OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model online with the OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model with OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
* [Deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
> **NOTE**: Note that [running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper Inference configuration.


@@ -13,99 +13,3 @@
@endsphinxdirective
Deep Learning Workbench (DL Workbench) is an official OpenVINO™ graphical interface designed to make the production of pretrained deep learning Computer Vision and Natural Language Processing models significantly easier.
Minimize the inference-to-deployment workflow timing for neural models right in your browser: import a model, analyze its performance and accuracy, visualize the outputs, optimize and make the final model deployment-ready in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, providing the opportunity to learn about various toolkit components.
![](../img/openvino_dl_wb.png)
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
:type: ref
:text: Run DL Workbench in Intel® DevCloud
:classes: btn-primary btn-block
@endsphinxdirective
DL Workbench enables you to get a detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
DL Workbench also provides the [JupyterLab environment](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Jupyter_Notebooks.html#doxid-workbench-docs-workbench-d-g-jupyter-notebooks) that helps you quick start with OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO workflow created for your model and learn about different toolkit components.
## Video
@sphinxdirective
.. list-table::
* - .. raw:: html
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
src="https://www.youtube.com/embed/on8xSSTKCt8">
</iframe>
* - **DL Workbench Introduction**. Duration: 1:31
@endsphinxdirective
## User Goals
DL Workbench helps achieve your goals depending on the stage of your deep learning journey.
If you are a beginner in the deep learning field, the DL Workbench provides you with
learning opportunities:
* Learn what neural networks are, how they work, and how to examine their architectures.
* Learn the basics of neural network analysis and optimization before production.
* Get familiar with the OpenVINO™ ecosystem and its main components without installing it on your system.
If you have enough experience with neural networks, DL Workbench provides you with a
convenient web interface to optimize your model and prepare it for production:
* Measure and interpret model performance.
* Tune the model for enhanced performance.
* Analyze the quality of your model and visualize output.
## General Workflow
The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:
![](../img/openvino_dl_wb_diagram_overview.svg)
Get a quick overview of the workflow in the DL Workbench User Interface:
![](../img/openvino_dl_wb_workflow.gif)
## OpenVINO™ Toolkit Components
The intuitive web-based interface of the DL Workbench enables you to easily use various
OpenVINO™ toolkit components:
| Component | Description |
|------------------|------------------|
| [Open Model Zoo](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html) | Get access to the collection of high-quality pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [Intel-trained](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html) models trained to resolve a variety of different tasks. |
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | Optimize and transform models trained in supported frameworks to the IR format. <br>Supported frameworks include TensorFlow\*, Caffe\*, Kaldi\*, MXNet\*, and the ONNX\* format. |
| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html) | Estimate deep learning model inference performance on supported devices. |
| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) | Evaluate the accuracy of a model by collecting one or several metric values. |
| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html) | Optimize pretrained models by lowering the precision of a model from floating-point precision (FP32 or FP16) to integer precision (INT8), without the need to retrain or fine-tune the model. |
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
   :type: ref
   :text: Run DL Workbench in Intel® DevCloud
   :classes: btn-outline-primary
@endsphinxdirective
## Contact Us
* [DL Workbench GitHub Repository](https://github.com/openvinotoolkit/workbench)
* [DL Workbench on Intel Community Forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit)
* [DL Workbench Gitter Chat](https://gitter.im/dl-workbench/general?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&content=body)

# Inference Modes {#openvino_docs_Runtime_Inference_Modes_Overview}
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_UG_supported_plugins_AUTO
   openvino_docs_OV_UG_Running_on_multiple_devices
   openvino_docs_OV_UG_Hetero_execution
   openvino_docs_OV_UG_Automatic_Batching
@endsphinxdirective
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
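The automated modes are requested through composed device-name strings rather than separate APIs. As a rough illustration, here is how such strings are typically put together; the helper functions are our own, not part of OpenVINO, and the resulting string is what gets passed as the device name when compiling a model:

```python
def multi(*devices: str) -> str:
    """Compose a MULTI device string, e.g. "MULTI:CPU,GPU"."""
    return "MULTI:" + ",".join(devices)

def hetero(*devices: str) -> str:
    """Compose a HETERO device string; device order sets the fallback priority."""
    return "HETERO:" + ",".join(devices)

# With the OpenVINO Runtime API, such a string would be used as the device
# argument, e.g. core.compile_model(model, multi("CPU", "GPU")).
print(multi("CPU", "GPU"))    # MULTI:CPU,GPU
print(hetero("GPU", "CPU"))   # HETERO:GPU,CPU
print("AUTO")                 # AUTO needs no device list, so it is used as-is
```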

Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows you to convert them to its own format, OpenVINO IR, providing a tool dedicated to this task.
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [altering input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md) and [cutting off training parts](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
Fully converting a model is considered the default choice, as it enables the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
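As noted above, a converted OpenVINO IR model is stored as a pair of files: an `.xml` topology file and a `.bin` weights file sharing a base name. A small sketch of that convention (the helper and the model name are illustrative, not part of any OpenVINO API):

```python
from pathlib import Path

def ir_paths(model_dir: str, model_name: str) -> tuple:
    """Return the (.xml, .bin) file pair that makes up an OpenVINO IR model."""
    base = Path(model_dir) / model_name
    return base.with_suffix(".xml"), base.with_suffix(".bin")

xml_path, bin_path = ir_paths("models", "resnet50")
print(xml_path.name, bin_path.name)  # resnet50.xml resnet50.bin
```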

A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
More resources:
* [Documentation](@ref tmo_introduction)
* [GitHub](https://github.com/openvinotoolkit/nncf)
* [PyPI](https://pypi.org/project/nncf/)
More resources:
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
* [GitHub](https://github.com/openvinotoolkit/security_addon)
### OpenVINO™ integration with TensorFlow (OVTF)
### DL Streamer
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
### Computer Vision Annotation Tool (CVAT)
An online, interactive video and image annotation tool for computer vision purposes.
More resources:
* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
* [web application](https://cvat.org/)
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
* [GitHub](https://github.com/openvinotoolkit/cvat)

# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
To enable operations not supported by OpenVINO out of the box, you may need an extension for the OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the type `CustomLayer` for every custom operation you provide.
The definitions described in the sections below use the following notations:
### CustomLayer Node and Sub-Node Structure
The `CustomLayer` node contains the entire configuration for a single custom operation.
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the OpenVINO IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
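Before the individual sub-nodes are described in detail, the following sketch shows how they nest inside a `CustomLayer` entry. This is an illustration only, using attributes documented in this article; the operation name, kernel file name, and parameter names are made up:

```xml
<CustomLayer name="MyOp" type="SimpleGPU" version="1">
    <Kernel>
        <Source filename="my_op_kernel.cl"/>
        <Define name="SCALE" param="scale" type="float" default="1.0"/>
    </Kernel>
    <Buffers>
        <Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
        <Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
    </Buffers>
    <WorkSizes global="B*F*Y*X"/>
</CustomLayer>
```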
### Kernel Node and Sub-Node Structure
The `Kernel` node contains all kernel source code configuration.
**Sub-nodes**: `Source` (1+), `Define` (0+)
### Source Node and Sub-Node Structure
The `Source` node points to a single OpenCL source file.
| Attribute Name | \# |Description|
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
**Sub-nodes**: None
### Define Node and Sub-Node Structure
The `Define` node configures a single `#&zwj;define` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR. |
**Sub-nodes:** None
### Buffers Node and Sub-Node Structure
The `Buffers` node configures all input/output buffers for the OpenCL entry
function. The `Buffers` node itself has no attributes.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
### Data Node and Sub-Node Structure
The `Data` node configures a single input with static data, for example,
weights or biases.
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to an operation in the OpenVINO IR. |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
**Sub-nodes**: None
### Tensor Node and Sub-Node Structure
The `Tensor` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the OpenVINO IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in lowercase). Default value: `BFYX`. |
### CompilerOptions Node and Sub-Node Structure
The `CompilerOptions` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
### WorkSizes Node and Sub-Node Structure
The `WorkSizes` node configures the global/local work sizes to be used when
queuing an OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global=”B*F*Y*X” local=””` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is the index of an input tensor, starting with 0. Default value: `output`. |
**Sub-nodes**: None
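The work-size formulas above are plain expressions over the B, F, Y, X dimension sizes, evaluated in integer arithmetic. To make the semantics concrete, here is a small evaluator of our own (not an OpenVINO API; the GPU plugin performs this evaluation internally):

```python
import re

def eval_worksize(formula: str, dims: dict) -> int:
    """Evaluate a WorkSizes formula such as "B*F*Y*X" over the B, F, Y, X
    dimension sizes, using the integer operators +, -, /, *, % from the table."""
    # Allow only dimension names, digits, the documented operators, and parens.
    if not re.fullmatch(r"[BFYX0-9+\-*/% ()]*", formula):
        raise ValueError(f"unsupported token in formula: {formula!r}")
    expr = formula.replace("/", "//")  # all operators use integer arithmetic
    return eval(expr, {"__builtins__": {}}, dict(dims))

dims = {"B": 1, "F": 3, "Y": 224, "X": 224}
print(eval_worksize("B*F*Y*X", dims))  # 150528 -- the default global work size
print(eval_worksize("Y*X", dims))      # 50176
```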
## Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see the
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
```

For an example, see [Example Kernel](#example-kernel).
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel. |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel. |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array. |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel. |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array. |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX`. |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`. |
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor: BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#&zwj;ifdef/#&zwj;endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array. |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array. |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX. |
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array. |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
All `<TENSOR>` values are automatically defined for every tensor
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown in the example kernel.
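To make the naming scheme concrete, here is a sketch of how a few of the per-tensor defines from the table could be enumerated for one binding. The dictionary-based helper is our own illustration of the pattern; the real definitions are generated by the GPU plugin at JIT-compilation time:

```python
def tensor_jit_defines(name: str, dims, dtype: str = "float") -> dict:
    """Enumerate a subset of the <TENSOR>_* JIT defines described in the table
    for one tensor binding (e.g. name="INPUT0"), with dims ordered as BFYX."""
    b, f, y, x = dims
    return {
        f"{name}_DIMS": [b, f, y, x],   # tensor dimension sizes, BFYX order
        f"{name}_DIMS_SIZE": 4,         # size of the <TENSOR>_DIMS array
        f"{name}_TYPE": dtype,          # float, half, or char
    }

defines = tensor_jit_defines("INPUT0", (1, 3, 224, 224))
print(defines["INPUT0_DIMS"])   # [1, 3, 224, 224]
print(defines["INPUT0_TYPE"])   # float
```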
> **NOTE**: As described in the previous section, all items such as the
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> OpenVINO for efficiency reasons. See the [Debugging
> Tips](#debugging-tips) below for information on debugging the results.
## Debugging Tips<a name="debugging-tips"></a>
**Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, use `printf` in your kernels.
However, be careful not to output excessively, as that could generate too much data. The `printf` output is buffered with a limited size, so your output can be truncated to fit the buffer. Also, because of buffering, you actually get the entire buffer of output only when the execution ends.<br>
For more information, refer to the [printf Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).

The list of supported operations differs for frameworks such as TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for a custom operation may appear in two main cases:
1. A new or rarely used regular framework operation is not supported in OpenVINO yet.
2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations. This allows plugging in your own implementation for them. The OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for both Model Optimizer and OpenVINO Runtime.
Defining a new custom operation basically consists of two parts:
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
2. A mapping rule that facilitates the conversion of the framework operation representation to OpenVINO-defined operation semantics.
The first part is required for inference. The second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part. The following sections will describe them in detail.
## Definition of Operation Semantics
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such decomposition gives the desired performance, then a low-level operation implementation is not required. When deciding on the feasibility of such decomposition, refer to the latest OpenVINO operation set. You can use any valid combination of existing operations. How to map a custom operation is described in the next section of this document.
If such decomposition is not possible, or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the [Custom Operation Guide](add_openvino_ops.md).
You might prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above. Then, after verifying the correctness of inference and the resulting performance, you may move on to an optional bare-metal C++ implementation.
## Mapping from Framework Operation
Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following:
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the `read_model` method. Python API is also available for runtime model import.
2. If a model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
The existence of the two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both types of frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for new ONNX or PaddlePaddle frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:
1. Implemented in C++ only.
2. Compiled as a separate shared library (see details on how to do this further in this guide).
Model Optimizer does not support new frontend extensions written in Python API.
The remaining part of this guide describes the application of the Frontend Extension API for new frontends.
## Registering Extensions
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
> **NOTE**: This documentation is derived from the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates the details of extension development. It is based on a minimalistic `Identity` operation that serves as a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
Use the `ov::Core::add_extension` method to load the extensions into the `ov::Core` object. This method allows loading either a library with extensions or extensions defined in code.
### Load Extensions to Core
Extensions can be loaded from code with the `ov::Core::add_extension` method:
@sphinxtabset
@endsphinxtabset
`Identity` is a custom operation class defined in the [Custom Operation Guide](add_openvino_ops.md). This is sufficient to enable reading OpenVINO IR which uses the `Identity` extension operation emitted by Model Optimizer. In order to load the original model directly into the runtime, add a mapping extension as well:
@sphinxdirective
@endsphinxdirective
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime by a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
You still can use Python for operation mapping and decomposition in case if operations from the standard OpenVINO operation set is used only.
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
### Create library with extensions
### Create a Library with Extensions
You need to create extension library in the following cases:
- Convert model with custom operations in Model Optimizer
- Load model with custom operations in Python application. It is applicable for both framework model and IR.
- Loading models with custom operations in tools that support loading extensions from a library, for example `benchmark_app`.
An extension library should be created in the following cases:
If you want to create an extension library, for example in order to load these extensions to the Model Optimizer, you need to do next steps:
Create an entry point for extension library. OpenVINO™ provides an `OPENVINO_CREATE_EXTENSIONS()` macro, which allows to define an entry point to a library with OpenVINO™ Extensions.
This macro should have a vector of all OpenVINO™ Extensions as an argument.
- Conversion of a model with custom operations in Model Optimizer.
- Loading a model with custom operations in a Python application. This applies to both framework model and OpenVINO IR.
- Loading models with custom operations in tools that support loading extensions from a library, for example the `benchmark_app`.
Based on that, the declaration of an extension class can look as follows:
To create an extension library, for example, to load the extensions into Model Optimizer, perform the following:
1. Create an entry point for the extension library. OpenVINO provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows you to define an entry point to a library with OpenVINO Extensions.
This macro should have a vector of all OpenVINO Extensions as an argument.
Based on that, the declaration of an extension class might look like the following:
@snippet template_extension/new/ov_extension.cpp ov_extension:entry_point
To configure the build of your extension library, use the following CMake script:
2. Configure the build of your extension library, using the following CMake script:
@snippet template_extension/new/CMakeLists.txt cmake:extension
This CMake script finds the OpenVINO using the `find_package` CMake command.
This CMake script finds OpenVINO using the `find_package` CMake command.
To build the extension library, run the commands below:
3. Build the extension library by running the commands below:
```sh
$ cd docs/template_extension/new
@@ -145,7 +146,7 @@ $ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
$ cmake --build .
```
After the build you can use path to your extension library to load your extensions to OpenVINO Runtime:
4. After the build, you may use the path to your extension library to load your extensions to OpenVINO Runtime:
@sphinxtabset
@@ -168,4 +169,3 @@ After the build you can use path to your extension library to load your extensio
* [OpenVINO Transformations](./ov_transformations.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)

View File

@@ -2,9 +2,10 @@
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you target. This page describes custom kernel support for one VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTES:**
> * OpenCL\* custom layer support is available in the preview mode.
> **NOTE:**
> * OpenCL custom layer support is available in the preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, carry out the tasks described on this page:
1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
@@ -13,9 +14,9 @@ To customize your topology with an OpenCL layer, carry out the tasks described o
## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)
> **NOTE**: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta* and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta and is distributed under a license agreement between Intel® and Codeplay Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only. Start with compiling OpenCL C code, using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written assuming OpenCL version 1.2. It also supports half float extension and is optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
@@ -63,7 +64,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- Node `Source` must contain the following attributes:
- `filename` The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relative to the dimension of the input tensor that comes through port 0 in the OpenVINO IR. Work group configurations, namely `global` and `local`, support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows to customize bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding xml.
Parameter description supports `Tensor` of one of tensor types such as `input`, `output`, `input_buffer`, `output_buffer` or `data`, `Scalar`, or `Data` nodes and has the following format:
@@ -77,7 +78,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. Current expression syntax supports only expression over dimensions of over selected input/output tensor or constants and might be expended in the future.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and might be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
@@ -107,7 +108,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the type `data` must contain the following attributes:
- Each `Tensor` node that has the `data` type must contain the following attributes:
- `source` A name of the blob as it is in the IR. Typical example is `weights` for convolution.
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not matched.
```xml
@@ -133,7 +134,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- Each `Data` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines buffer allocated in fast local on-chip memory. It is limited to 100KB for all `__local` and
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. Note that a manual-DMA extension requires double buffering.
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. A manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and may be extended in the future.
@@ -158,14 +159,13 @@ Each custom layer is described with the `CustomLayer` node. It has the following
## Pass Configuration File to OpenVINO™ Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it using the ov::Core::set_property() method with the "CONFIG_KEY" key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
Before loading the network that features the custom layers, provide a separate configuration file and load it using the `ov::Core::set_property()` method. Use the "CONFIG_KEY" key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@snippet docs/snippets/vpu/custom_op.cpp part0
## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL
programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
@@ -175,41 +175,33 @@ programming model and OpenCL kernel language is assumed and not a subject of thi
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
Note that by the OpenCL specification, the work group execution order is not specified. This means that it is your
responsibility to ensure that race conditions among work groups are not introduced. Custom layer runtime spits evenly
work grid among available compute resources and executes them in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for Deep Learning kernels. The following guidelines are recommended to use for work group partitioning:
The work group execution order is not defined in the OpenCL specification. This means it is your responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime distributes the work grid evenly among the available compute resources and executes the work groups in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for Deep Learning kernels. The following guidelines are recommended for work group partitioning:
1. Split work evenly across work groups.
1. Distribute work evenly across work groups.
2. Adjust work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of topology. It is also useful if the kernel scalability reached its limits, which may happen while optimizing memory bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel if it improves work group partitioning or data access patterns.
Consider not just specific layer boost, but full topology performance because data conversion layers would be automatically inserted
as appropriate.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel to see if it improves work group partitioning or data access patterns.
Consider not just the specific layer boost, but also the full topology performance, because data conversion layers will be automatically inserted as appropriate.
The offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage, if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bais)
float scale, float bias)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bais);
outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism
(SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code
patterns. WGV works if and only if vector types are not used in the code.
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism (SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting `restrict` where possible.
- This can give a performance boost, especially for kernels with unrolling, like `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized codes. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the most optimal one, which combines unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` to your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to
annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better in scheduling 3-stage loops (load-compute-store), while the fist loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay
attention to unrolling such cases first. Unrolling factor is loop-dependent. Choose the smallest number that
still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so unrolling factor equal to 4 is an optimum. For Intel® Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
1. Help auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting the `restrict` markers where possible.
- This can give a performance boost, especially for kernels with unrolling, like the `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized codes. In the `ocl_grn` kernel below, the unrolled version without the `restrict` is up to 20% slower than the most optimal one, which combines both unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` to your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better in scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unrolling factor of 4 is the optimum. For Intel Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
@@ -227,7 +219,7 @@ __kernel void ocl_grn(__global const half* restrict src_data, __global half* res
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, you can compare performance of the kernel above with the kernel below, which is manually vectorized over width:
To check the efficiency of WGV, compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
@@ -267,19 +259,14 @@ __kernel void ocl_grn_line(__global const half* restrict src_data, __global hal
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, you can also use the `reqd_work_group_size` kernel attribute to ask the compiler
to unroll the code up to the local size of the work group. Note that if the kernel is actually executed with the
different work group configuration, the result is undefined.
3. If it is easy to predict the work group size, use the `reqd_work_group_size` kernel attribute to ask the compiler to unroll the code up to the local size of the work group. If the kernel is actually executed with the different work group configuration, the result is undefined.
4. Prefer to use the `half` compute if it keeps reasonable accuracy. 16-bit float is a native type for Intel® Neural Compute Stick 2, most of the functions `half_*` are mapped to a single hardware instruction.
4. Prefer to use the `half` compute if it keeps reasonable accuracy. A 16-bit float is a native type for Intel Neural Compute Stick 2, and most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` function for the rest of types.
5. Prefer to use the `convert_half` function over `vstore_half` if conversion to 32-bit float is required. `convert_half` is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bais);` is eight times slower than the code with `vstore_half`.
5. Prefer to use the `convert_half` function over the `vstore_half` if conversion to 32-bit float is required. The `convert_half` function is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the `outImage[idx] = convert_half(inImage[idx]*scale+bias);` code is eight times slower than the code with `vstore_half`.
6. Mind early exits. Early exit can be extremely costly for the current version of the `clc` compiler due to conflicts with the
auto-vectorizer. The generic advice would be to setup local size by `x` dimension equal to inputs or/and outputs width.
If it is impossible to define the work grid that exactly matches inputs or/and outputs to eliminate checks, for example,
`if (get_global_id(0) >= width) return`, use line-wise kernel variant with manual vectorization.
6. Be aware of early exits, as they can be extremely costly for the current version of the `clc` compiler due to conflicts with the auto-vectorizer. It is recommended to set up the local size by the `x` dimension equal to the input and/or output width. If it is impossible to define a work grid that exactly matches the inputs and/or outputs to eliminate checks, for example, `if (get_global_id(0) >= width) return`, use a line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
@@ -302,8 +289,8 @@ The kernel example below demonstrates the impact of early exits on kernel perfor
}
```
This `reorg` kernel is auto-vectorizable, but an input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (`8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare performance of auto-vectorized and scalar version of the kernel, change the input size to`NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size multiple of vector, for example, 32, and adjust global sizes accordingly. As a result, the execution work grid exceeds actual input dimension, so out-of-bound checks should be inserted. See the updated kernel version below:
To compare performance of auto-vectorized and scalar version of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about 30% uplift.
Since the auto-vectorized version is faster, it is recommended to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width, for example, `32`, and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimension, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
@@ -324,7 +311,7 @@ Since the auto-vectorized version is faster, it makes sense to enable it for the
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (scalar) due to branching overhead. If you replace min/max expression `w = min(w, W-1);` with `if (w >= W) return;`, runtime increases up to 2x against to code without branching (initial version).<br>
This code performs the same as the initial kernel above (scalar) due to branching overhead. If the `w = min(w, W-1);` min/max expression is replaced with `if (w >= W) return;`, runtime increases up to 2x compared to the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
@@ -347,8 +334,8 @@ __kernel void reorg(const __global half* restrict src, __global half* restrict o
}
```
This decreases the execution time by up to 40% compared to the best performing vectorized kernel without early exits (initial version).
7. Reuse computations among work items by using line-based kernels or sharing values though `__local` memory.
8. Improve data access locality. Most of custom kernels are memory bound while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by `stride`:
7. Reuse computations among work items by using line-based kernels or sharing values through the `__local` memory.
8. Improve data access locality. Most of custom kernels are memory bound while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by the `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
@@ -366,14 +353,11 @@ This decreases the execution time up to 40% against the best performing vectoriz
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
`scr` data in this case loaded only once. As the result, the cycle count drops up to 45% against the line-wise version.
The `src` data in this case is loaded only once. As a result, the cycle count drops by up to 45% compared to the line-wise version.
9. Copy data from `__dlobal` to `__local` or `__private` memory if the data is accessed more than once. Access to
`__dlobal` memory is orders of magnitude slower than access to `__local`/`__private` due to statically scheduled pipeline, which
stalls completely on memory access without any prefetch. The same recommendation is applicable for scalar load/store
from/to a `__blobal` pointer since work-group copying could be done in a vector fashion.
9. Copy data from the `__global` to the `__local` or `__private` memory if the data is accessed more than once. Access to the `__global` memory is orders of magnitude slower than access to the `__local`/`__private` memory due to the statically scheduled pipeline, which stalls completely on memory access without any prefetch. The same recommendation is applicable for scalar load/store from/to a `__global` pointer, since work-group copying could be done in a vector fashion.
10. Use a manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting from OpenVINO 2020.1, VPU OpenCL features manual-DMA kernel extension to copy sub-tensor used by work group into local memory and performing compute without DDR evolved. Here is the simple GRN kernel implementation that runs over DDR. Local size is in the form (width of the input tensor, 1, 1) to define a large enough work group to get code automatically vectorized and unrolled, while global size is (width of the input tensor, height of the input tensor, 1):
10. Use a manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Since the OpenVINO 2020.1 release, VPU OpenCL features a manual-DMA kernel extension to copy a sub-tensor used by a work group into local memory and perform the compute without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size is in the form (width of the input tensor, 1, 1) to define a large enough work group to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
@@ -398,7 +382,7 @@ from/to a `__blobal` pointer since work-group copying could be done in a vector
}
```
This kernel can be rewritten to introduce special data binding `__dma_preload` and `__dma_postwrite intrinsics`. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after a corresponding work group. You can define one of those functions that are intended to be used to copy data from-to `__global` and `__local` memory. The syntactics requires exact functional signature match. The example below illustrates how to prepare your kernel for manual-DMA.
This kernel can be rewritten to introduce the `__dma_preload` and `__dma_postwrite` intrinsics as special data bindings. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. The `__dma_preload_kernelName` kernel for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while the `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. Either of those functions may be defined to copy data between `__global` and `__local` memory. The syntax requires an exact function signature match. The example below illustrates how to prepare your kernel for manual-DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
@@ -557,9 +541,9 @@ __kernel void grn_NCHW(
}
```
Note the `get_local_size` and `get_local_id` usage inside the kernel. 21x speedup is expected for a kernel on enet-curbs setup because it was completely limited by memory usage.
> **NOTE**: Mind the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for the kernel on the enet-curbs setup since it is completely limited by memory usage.
An alternative method to using DMA is to use work item copy extension. Those functions are executed inside a kernel and requires work groups equal to single work item.
An alternative to using DMA is the work item copy extension. Those functions are executed inside a kernel and require work groups equal to a single work item.
Here is the list of supported work item functions:
```cpp

View File

@@ -70,7 +70,7 @@ To eliminate operation, OpenVINO™ has special method that considers all limita
The `ov::replace_output_update_name()` method, in the case of a successful replacement, automatically preserves the friendly name and runtime info.
## Transformations types <a name="transformations_types"></a>
## Transformations types <a name="transformations-types"></a>
OpenVINO™ Runtime has three main transformation types:
@@ -91,7 +91,7 @@ Transformation library has two internal macros to support conditional compilatio
When developing a transformation, you need to follow these transformation rules:
###1. Friendly Names
### 1. Friendly Names
Each `ov::Node` has a unique name and a friendly name. In transformations, we care only about the friendly name because it represents the name from the model.
To avoid losing the friendly name when replacing a node with another node or subgraph, set the original friendly name on the last node in the replacing subgraph. See the example below.
@@ -100,7 +100,7 @@ To avoid losing friendly name when replacing node with other node or subgraph, s
In more advanced cases, when the replaced operation has several outputs and we add additional consumers to its outputs, we decide how to set the friendly name by arrangement.
###2. Runtime Info
### 2. Runtime Info
Runtime info is a map `std::map<std::string, ov::Any>` located inside `ov::Node` class. It represents additional attributes in `ov::Node`.
These attributes can be set by users or by plugins, and when executing a transformation that changes `ov::Model`, we need to preserve these attributes, as they will not be automatically propagated.
@@ -111,9 +111,9 @@ Currently, there is no mechanism that automatically detects transformation types
When transformation has multiple fusions or decompositions, `ov::copy_runtime_info` must be called multiple times for each case.
**Note**: copy_runtime_info removes rt_info from destination nodes. If you want to keep it, you need to specify them in source nodes like this: copy_runtime_info({a, b, c}, {a, b})
> **NOTE**: `copy_runtime_info` removes `rt_info` from destination nodes. If you want to keep it, you need to specify them in source nodes like this: `copy_runtime_info({a, b, c}, {a, b})`
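The behavior in the note above can be modeled with a toy version of `rt_info`. This is a sketch of the described semantics, not the real `ov::copy_runtime_info` implementation (which merges typed `ov::Any` attributes):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Stand-in for the rt_info map (the real one is std::map<std::string, ov::Any>).
using RtInfo = std::map<std::string, std::string>;

struct Node {
    RtInfo rt_info;
};

// Sketch of copy_runtime_info semantics: attributes gathered from all source
// nodes replace whatever the destination nodes held before. To keep an
// attribute on a destination node, list that node among the sources as well,
// as in copy_runtime_info({a, b, c}, {a, b}).
void copy_runtime_info(const std::vector<Node*>& from, const std::vector<Node*>& to) {
    RtInfo merged;
    for (const Node* n : from)
        merged.insert(n->rt_info.begin(), n->rt_info.end());
    for (Node* n : to)
        n->rt_info = merged;  // destination's previous rt_info is dropped
}
```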
###3. Constant Folding
### 3. Constant Folding
If your transformation inserts constant sub-graphs that need to be folded, do not forget to use `ov::pass::ConstantFolding()` after your transformation or call constant folding directly for operation.
The example below shows how constant subgraph can be constructed.
@@ -140,8 +140,8 @@ In transformation development process:
## Using pass manager <a name="using_pass_manager"></a>
`ov::pass::Manager` is a container class that can store the list of transformations and execute them. The main idea of this class is to have a high-level representation for a grouped list of transformations.
It can register and apply any [transformation pass](#transformations_types) on model.
In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how_to_debug_transformations) section).
It can register and apply any [transformation pass](#transformations-types) on model.
In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how-to-debug-transformations) section).
The example below shows basic usage of `ov::pass::Manager`
@@ -151,7 +151,7 @@ Another example shows how multiple matcher passes can be united into single Grap
@snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager2
## How to debug transformations <a name="how_to_debug_transformations"></a>
## How to debug transformations <a name="how-to-debug-transformations"></a>
If you are using `ngraph::pass::Manager` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:
@@ -160,7 +160,7 @@ OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformati
OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.
```
> **Note**: Make sure that you have dot installed on your machine; otherwise, it will silently save only dot file without svg file.
> **NOTE**: Make sure that you have dot installed on your machine; otherwise, it will silently save only dot file without svg file.
## See Also
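The pass-manager idea described above (register transformations, then run them in order over a model) can be illustrated without the OpenVINO headers. A minimal stand-in, not the real `ov::pass::Manager` API:

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy stand-in for ov::Model: just a list of node names.
struct Model {
    std::vector<std::string> nodes;
};

// A pass is any callable that mutates the model and reports whether it changed it.
using Pass = std::function<bool(Model&)>;

// Minimal pass manager: stores passes and runs them in registration order,
// mirroring how a grouped list of transformations is applied to a model.
class PassManager {
public:
    void register_pass(Pass p) { passes_.push_back(std::move(p)); }

    bool run_passes(Model& m) {
        bool changed = false;
        for (auto& p : passes_)
            changed |= p(m);
        return changed;
    }

private:
    std::vector<Pass> passes_;
};
```

The real manager additionally validates the model between passes and provides the profiling and visualization hooks controlled by the environment variables listed above.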

View File

@@ -1,4 +1,4 @@
# Build Plugin Using CMake* {#openvino_docs_ie_plugin_dg_plugin_build}
# Build Plugin Using CMake {#openvino_docs_ie_plugin_dg_plugin_build}
Inference Engine build infrastructure provides the Inference Engine Developer Package for plugin development.
@@ -57,7 +57,6 @@ A common plugin consists of the following components:
To build a plugin and its tests, run the following CMake scripts:
- Root `CMakeLists.txt`, which finds the Inference Engine Developer Package using the `find_package` CMake command and adds the `src` and `tests` subdirectories with plugin sources and their tests respectively:
```cmake
cmake_minimum_required(VERSION 3.13)
@@ -82,21 +81,15 @@ if(ENABLE_TESTS)
endif()
endif()
```
> **NOTE**: The default values of the `ENABLE_TESTS`, `ENABLE_FUNCTIONAL_TESTS` options are shared via the Inference Engine Developer Package and they are the same as for the main DLDT build tree. You can override them during plugin build using the command below:
```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DInferenceEngineDeveloperPackage_DIR=../dldt-release-build ../template-plugin
```
> **NOTE**: The default values of the `ENABLE_TESTS`, `ENABLE_FUNCTIONAL_TESTS` options are shared via the Inference Engine Developer Package and they are the same as for the main DLDT build tree. You can override them during plugin build using the command below:
```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DInferenceEngineDeveloperPackage_DIR=../dldt-release-build ../template-plugin
```
- `src/CMakeLists.txt` to build a plugin shared library from sources:
@snippet template_plugin/src/CMakeLists.txt cmake:plugin
> **NOTE**: `IE::inference_engine` target is imported from the Inference Engine Developer Package.
> **NOTE**: `IE::inference_engine` target is imported from the Inference Engine Developer Package.
- `tests/functional/CMakeLists.txt` to build a set of functional plugin tests:
@snippet template_plugin/tests/functional/CMakeLists.txt cmake:functional_tests
> **NOTE**: The `IE::funcSharedTests` static library with common functional Inference Engine Plugin tests is imported via the Inference Engine Developer Package.
> **NOTE**: The `IE::funcSharedTests` static library with common functional Inference Engine Plugin tests is imported via the Inference Engine Developer Package.

View File

@@ -95,6 +95,6 @@ Returns a current value for a configuration key with the name `name`. The method
@snippet src/template_executable_network.cpp executable_network:get_config
This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](../_inference_engine_tools_compile_tool_README.html)).
This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](@ref openvino_inference_engine_tools_compile_tool_README)).
The next step in plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class.

View File

@@ -47,13 +47,13 @@ Inference Engine plugin dynamic library consists of several main components:
on several task executors based on a device-specific pipeline structure.
> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin
development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
at `<dldt source dir>/docs/template_plugin`.
> development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
> at `<dldt source dir>/docs/template_plugin`.
Detailed guides
-----------------------
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake\*
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide

View File

@@ -81,7 +81,7 @@ The function accepts a const shared pointer to `ov::Model` object and performs t
1. Deep copies a const object to a local object, which can later be modified.
2. Applies common and plugin-specific transformations on a copied graph to make the graph more friendly to hardware operations. For details how to write custom plugin-specific transformation, please, refer to [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide. See detailed topics about network representation:
* [Intermediate Representation and Operation Sets](../_docs_MO_DG_IR_and_opsets.html)
* [Intermediate Representation and Operation Sets](@ref openvino_docs_MO_DG_IR_and_opsets)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks).
@snippet template_plugin/src/template_plugin.cpp plugin:transform_network

View File

@@ -14,15 +14,12 @@ Engine concepts: plugin creation, multiple executable networks support, multiple
2. **Single layer tests** (`single_layer_tests` sub-folder). This group of tests checks that a particular single layer can be inferred on a device. An example of test instantiation based on the test definition from the `IE::funcSharedTests` library:
- From the declaration of the convolution test class, we can see that it is a parameterized GoogleTest-based class with the `convLayerTestParamsSet` tuple of parameters:
@snippet single_layer/convolution.hpp test_convolution:definition
- Based on that, define a set of parameters for `Template` plugin functional test instantiation:
@snippet single_layer_tests/convolution.cpp test_convolution:declare_parameters
- Instantiate the test itself using standard GoogleTest macro `INSTANTIATE_TEST_SUITE_P`:
@snippet single_layer_tests/convolution.cpp test_convolution:instantiate
3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to test small patterns or combinations of layers. For example, when a particular topology, such as TensorFlow `ResNet-50`, is being enabled in a plugin, there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from `ResNet-50` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.

View File

@@ -32,7 +32,7 @@ Thus we can define:
- **Scale** as `(output_high - output_low) / (levels-1)`
- **Zero-point** as `-output_low / (output_high - output_low) * (levels-1)`
**Note**: During the quantization process the values `input_low`, `input_high`, `output_low`, `output_high` are selected so that to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
> **NOTE**: During the quantization process the values `input_low`, `input_high`, `output_low`, `output_high` are selected so that to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
## Quantization specifics and restrictions
In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
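The scale and zero-point definitions above are easy to check numerically. A small self-contained sketch (the range values used in the checks are illustrative, not taken from a real model):

```cpp
#include <cassert>
#include <cmath>

// Quantization parameters derived from a FakeQuantize output range, following
//   Scale      = (output_high - output_low) / (levels - 1)
//   Zero-point = -output_low / (output_high - output_low) * (levels - 1)
struct QuantParams {
    double scale;
    double zero_point;
};

QuantParams make_quant_params(double output_low, double output_high, int levels) {
    QuantParams p;
    p.scale = (output_high - output_low) / (levels - 1);
    p.zero_point = -output_low / (output_high - output_low) * (levels - 1);
    return p;
}
```

With `output_low = 0`, `output_high = 2.55`, and `levels = 256`, the scale is 0.01 and the zero-point is exactly 0, so a floating-point zero maps to an integer value, as the note above requires.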

View File

@@ -1,4 +1,4 @@
# AvgPoolPrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
# AvgPoolPrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
ngraph::AvgPoolPrecisionPreservedAttribute class represents the `AvgPoolPrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# IntervalsAlignment attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
# IntervalsAlignment Attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
ngraph::IntervalsAlignmentAttribute class represents the `IntervalsAlignment` attribute.

View File

@@ -1,4 +1,4 @@
# PrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
# PrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
ngraph::PrecisionPreservedAttribute class represents the `PrecisionPreserved` attribute.

View File

@@ -1,4 +1,4 @@
# Precisions attribute {#openvino_docs_OV_UG_lpt_Precisions}
# Precisions Attribute {#openvino_docs_OV_UG_lpt_Precisions}
ngraph::PrecisionsAttribute class represents the `Precisions` attribute.

View File

@@ -1,4 +1,4 @@
# QuantizationAlignment attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
# QuantizationAlignment Attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
ngraph::QuantizationAlignmentAttribute class represents the `QuantizationAlignment` attribute.

View File

@@ -1,4 +1,4 @@
# QuantizationGranularity attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
# QuantizationGranularity Attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
ngraph::QuantizationAttribute class represents the `QuantizationGranularity` attribute.

[5 SVG image diffs suppressed: 22-38 KiB files replaced by 130 B Git LFS pointer files.]
View File

@@ -54,4 +54,4 @@ Attributes usage by transformations:
| IntervalsAlignment | AlignQuantizationIntervals | FakeQuantizeDecompositionTransformation |
| QuantizationAlignment | AlignQuantizationParameters | FakeQuantizeDecompositionTransformation |
> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
> **NOTE**: the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.

[10 SVG image diffs suppressed: 58-95 KiB files replaced by 130 B Git LFS pointer files.]
View File

@@ -22,7 +22,7 @@ The table of transformations and used attributes:
| AlignQuantizationIntervals | IntervalsAlignment | PrecisionPreserved |
| AlignQuantizationParameters | QuantizationAlignment | PrecisionPreserved, PerTensorQuantization |
> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different
> **NOTE**: the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different
Common markup transformations can be decomposed into simpler utility markup transformations. The order of Markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)

View File

@@ -46,4 +46,4 @@ Changes in the example model after main transformation:
- dequantization operations.
* Dequantization operations were moved via precision preserved (`concat1` and `concat2`) and quantized (`convolution2`) operations.
> **Note:** the left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1`output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.
> **NOTE**: the left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1`output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.

[7 SVG image diffs suppressed: 15-54 KiB files replaced by 130 B Git LFS pointer files.]
View File

@@ -1,4 +1,4 @@
# Converting Models with Model Optimizer {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
# Model Optimizer Usage {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -8,19 +8,12 @@
:maxdepth: 1
:hidden:
openvino_docs_model_inputs_outputs
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
@endsphinxdirective
@@ -41,7 +34,7 @@ where IR is a pair of files describing the model:
* <code>.bin</code> - Contains the weights and biases binary data.
The generated IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
The OpenVINO IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
that applies post-training quantization methods.
> **TIP**: You can also work with Model Optimizer in OpenVINO™ [Deep Learning Workbench (DL Workbench)](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html), which is a web-based tool with GUI for optimizing, fine-tuning, analyzing, visualizing, and comparing performance of deep learning models.

[SVG image diff suppressed: 38 KiB file replaced by a 130 B Git LFS pointer file.]
View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11579795c778b28d57cbf080dedc10149500d78cc8b16a74fe2b113c76a94f6b
size 26152

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2720b6d3b5e680978a91379c8c37366285299aab31aa139ad9abea8334aae34
size 57687

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a570510808fb2997ee0d51af6f92c5a4a8f8a59dbd275000489f856e89124d5
size 120211

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c0389fe34562993b1285f1994dbc878e9547a841c903bf204074ed2219b6bc7
size 323210

[SVG image diff suppressed: 46 KiB file replaced by a 130 B Git LFS pointer file.]
View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:344b2fcb9b7a180a8d8047e65b4aad3ca2651cfc7d5e1e408710a5a3730fed09
size 20851

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1cc5ead5513c641763b994bea5a08ccaa4a694b3f5239ddd2fe58424b90e5289
size 33741

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78a73487434f4178f111595eb34b344b35af14bd4ccb03e6a5b00509f86e19c5
size 5348

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14dd247a2b498dfa570e643656e6fd5ba9f7eb6e6fd14f4ada0dda2d4426c943
size 7832

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:939e1aa0d2ba28dab1c930c6271a9f4063fd9f8c539d4713c0bd0f87c34f66c3
size 15020

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e42abc494dce9f04edb6424ff6828b074879869c68d1fbe08f3980b657fecdf8
size 30634

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9859464a5c3ec91e4d6316109f523f48ad8972d2213a6797330e665d45b35c54
size 44117

docs/MO_DG/img/lm_1b.svg Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:062fa64afa0cc43c4a2c2c0442e499b6176c837857222af30bad2fa7c9515420
size 95508

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3812efef32bd7f1bf40b130d5d522bc3df6aebd406bd1186699d214bca856722
size 43721

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc41098cd8ca3c72f930beab155c981cc6e4e898729bd76438650ba31ebe351a
size 142111

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e232c47e8500f42bd0e1f2b93f94f58e2d59caee149c687be3cdc3e8a5be59a
size 18417

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a67a86a656c81bc69e024c4911c535cf0937496bdbe69f31b7fee20ee14e474
size 173854

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f13f2a0424aa53a52d32ad692f574d331bf31c1f1a9e09499df9729912b45f4
size 351773

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2adeca1e3512b9fe7b088a5412ce21592977a1f352a013735537ec92e895dc94
size 15653

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85172ae61de4f592245d0a89605d66ea0b425696868636f9e40276a097a2ba81
size 498110

View File

@@ -2,11 +2,11 @@
Input data for inference can be different from the training dataset and requires additional preprocessing before inference.
To accelerate the whole pipeline including preprocessing and inference, Model Optimizer provides special parameters such as `--mean_values`,
`--scale_values`, `--reverse_input_channels`, and `--layout`. Based on these parameters, Model Optimizer generates IR with additionally
`--scale_values`, `--reverse_input_channels`, and `--layout`. Based on these parameters, Model Optimizer generates OpenVINO IR with additionally
inserted sub-graphs to perform the defined preprocessing. This preprocessing block can perform mean-scale normalization of input data,
reverting data along channel dimension, and changing the data layout.
See the following sections for details on the parameters, or the [Overview of Preprocessing API](../../OV_Runtime_UG/preprocessing_overview.md) for the same functionality in OpenVINO Runtime.
## Specifying Layout
@@ -58,10 +58,12 @@ for example, `[0, 1]` or `[-1, 1]`. Sometimes, the mean values (mean images) are
There are two cases of how the input data preprocessing is implemented.
* The input preprocessing operations are a part of a model.
In this case, the application does not perform a separate preprocessing step: everything is embedded into the model itself. Model Optimizer will generate the IR with required preprocessing operations, and no `mean` and `scale` parameters are required.
In this case, the application does not perform a separate preprocessing step: everything is embedded into the model itself. Model Optimizer will generate the OpenVINO IR format with required preprocessing operations, and no `mean` and `scale` parameters are required.
* The input preprocessing operations are not a part of a model and the preprocessing is performed within the application which feeds the model with input data.
In this case, information about mean/scale values should be provided to the Model Optimizer to embed it to the generated IR.
In this case, information about mean/scale values should be provided to Model Optimizer to embed it to the generated OpenVINO IR format.
Model Optimizer provides command-line parameters to specify the values: `--mean_values`, `--scale_values`, `--scale`.
Using these parameters, Model Optimizer embeds the corresponding preprocessing block for mean-value normalization of the input data
and optimizes this block so that the preprocessing takes negligible time for inference.
@@ -75,7 +77,8 @@ mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
## Reversing Input Channels <a name="when_to_reverse_input_channels"></a>
Sometimes, input images for your application can be of the RGB (or BGR) format and the model is trained on images of the BGR (or RGB) format,
which is in the opposite order of color channels. In this case, it is important to preprocess the input images by reverting the color channels before inference.
To embed this preprocessing step into IR, Model Optimizer provides the `--reverse_input_channels` command-line parameter to shuffle the color channels.
To embed this preprocessing step into OpenVINO IR, Model Optimizer provides the `--reverse_input_channels` command-line parameter to shuffle the color channels.
The `--reverse_input_channels` parameter can be used to preprocess the model input in the following cases:
* Only one dimension in the input shape has a size equal to 3.
@@ -84,7 +87,7 @@ The `--reverse_input_channels` parameter can be used to preprocess the model inp
Using the `--reverse_input_channels` parameter, Model Optimizer embeds the corresponding preprocessing block for reverting
the input data along channel dimension and optimizes this block so that the preprocessing takes only negligible time for inference.
For example, the following command launches the Model Optimizer for the TensorFlow AlexNet model and embeds the `reverse_input_channel` preprocessing block into IR:
For example, the following command launches Model Optimizer for the TensorFlow AlexNet model and embeds the `reverse_input_channel` preprocessing block into OpenVINO IR:
```sh
mo --input_model alexnet.pb --reverse_input_channels
```

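Arithmetically, the preprocessing blocks that `--mean_values`, `--scale_values`, and `--reverse_input_channels` embed are equivalent to the following sketch (illustrative code operating on a CHW tensor, not Model Optimizer internals):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Per-channel mean/scale normalization: out = (in - mean[c]) / scale[c],
// the arithmetic that --mean_values / --scale_values bake into the model.
void normalize(std::vector<std::vector<float>>& chw,
               const std::vector<float>& mean,
               const std::vector<float>& scale) {
    for (std::size_t c = 0; c < chw.size(); ++c)
        for (float& v : chw[c])
            v = (v - mean[c]) / scale[c];
}

// Channel reversal (RGB <-> BGR), the effect of --reverse_input_channels.
void reverse_channels(std::vector<std::vector<float>>& chw) {
    std::reverse(chw.begin(), chw.end());
}
```

The mean values `[123, 117, 104]` from the `unet.pdmodel` example above would be applied per channel in exactly this way, with the block folded into the model so it adds negligible inference time.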
View File

@@ -9,7 +9,7 @@ When evaluating the performance of a model with OpenVINO Runtime, it is required
- Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.
> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common).
> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common.md).
## Tip 2: Try to Get Credible Data

Some files were not shown because too many files have changed in this diff.