Compare commits

...

129 Commits

Author SHA1 Message Date
Sebastian Golebiewski
ec53191909 [DOCS] Supported Layers update - for 22.2 (#15361)
* LessEqual Not Supported

* porting #13997

* porting #13995
2023-02-08 10:44:29 +01:00
Xiake Sun
1ee54505a0 [Docs] Port fix convert tf crnn model docs for release 22.2 (#15468)
* Port fix convert tf crnn model for release 22.2
2023-02-08 08:46:46 +01:00
Sebastian Golebiewski
a75a93252e [DOCS] Fixing links in 'Install Openvino on Windows from Archive' article - for 22.2 (#15358)
* fix links

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md
2023-02-06 15:25:42 +08:00
Ilya Lavrenov
3b72991477 Migrate SVG files under LFS (#15323) 2023-01-26 16:22:47 +04:00
Sebastian Golebiewski
f1479f19a9 fix indentation (#15237) 2023-01-23 09:54:32 +01:00
Maciej Smyk
7e6e08571a scheme3 (#14873)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-01-20 19:25:32 +04:00
Ilya Lavrenov
08e3ed0966 Added SVG files to lfs (#15230) 2023-01-20 16:35:02 +04:00
Maciej Smyk
645847fae1 default_quantization_flow (#14849) 2023-01-20 14:24:10 +04:00
Sebastian Golebiewski
d8a8daa1bb fix formatting (#15201)
fix formatting and links
2023-01-20 10:26:45 +01:00
Maciej Smyk
f3551dd009 DOCS: Model Caching Overview image recreation for 22.2 (#15024)
* Model Caching Overview

* graph-background-fix
2023-01-20 08:21:30 +01:00
Maciej Smyk
c078192273 DOCS: OpenVINO™ Security Add-on image recreation for 22.2 (#15082)
* Security Add-on

* Update ovsa_example.svg
2023-01-20 08:16:20 +01:00
Tatiana Savina
441496d79b update footer 2022.2 (#15155) 2023-01-16 17:31:11 +00:00
Maciej Smyk
da99f390c4 DOCS: Libraries for Local Distribution image recreation for 22.2 (#14960)
* deployment_full

* Update deployment_full.svg
2023-01-05 23:09:17 +04:00
Sebastian Golebiewski
8d3fa0e6c2 DOCS: Hiding Transition to API 2.0 banner - for 22.2 (#14953)
Using cookies to keep the banner hidden once the user has closed it.
2023-01-05 13:59:43 +01:00
Sebastian Golebiewski
ae60d612c6 DOCS: Updating Interactive Tutorials - for 22.2 (#14948)
Porting: #14945

Adding new tutorials:
404-style-transfer-webcam
406-3D-pose-estimation-webcam
2023-01-05 13:48:51 +01:00
Maciej Smyk
bd5fc754c7 Fix inference pipeline C++ doc: refer to the correct input blob (#14733) 2023-01-05 01:20:26 +04:00
Maciej Smyk
92987ecb33 yolo_tiny_v1 (#14880) 2023-01-04 11:11:54 +01:00
Yuan Xu
b149ea0e42 add ways to find samples for PyPI installation (#14658) (#14899) 2023-01-04 10:30:17 +08:00
Maciej Smyk
0eae631776 nncf_workflow (#14866) 2023-01-03 17:15:36 +01:00
Maciej Smyk
4105180e99 deployment_simplified (#14855) 2023-01-03 15:59:45 +01:00
Maciej Smyk
97ed68051a autoplugin_accelerate (#14840) 2023-01-03 14:55:34 +01:00
Maciej Smyk
c1dfd56358 DOCS: The LowLatency Transformation images recreation for 22.2 (#14831) 2023-01-03 14:48:00 +01:00
Sebastian Golebiewski
394d3b481a format pre tags (#14914)
Porting:
https://github.com/openvinotoolkit/openvino/pull/14889

This fix addresses word wrapping in <pre> tags in the HTML output files of the documentation.
2023-01-03 13:16:08 +01:00
Yuan Xu
1b90b897d1 remove a space (#14757) 2022-12-21 12:24:13 +03:00
Maciej Smyk
da2aa1aac0 DOCS: Quantization doc rewrites for 22.2 (#14372)
* Update introduction.md

* Update introduction.md

* header fix

* Update Introduction.md

* Update Introduction.md

* graph-fix

* Update Introduction.md

* Update Introduction.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-16 10:01:28 +01:00
Maciej Smyk
ce1aa513a0 DOCS: Low Precision Transformations proofreading for 22.2 (#14068)
* Attributes
* Update lpt_attributes.md
* defines fix
* Update docs/IE_PLUGIN_DG/plugin_transformation_pipeline/low_precision_transformations/lpt_attributes.md
2022-12-16 10:24:11 +03:00
Maciej Smyk
69c166bf3a Stateful models (#14662) 2022-12-15 13:54:12 +01:00
Maciej Smyk
2c48911c49 DOCS: Samples Overview proofreading for 22.2 (#14086) 2022-12-14 10:51:51 +01:00
Maciej Smyk
801deae368 DOCS: Proofreading C Samples for 22.2 (#14121) 2022-12-14 10:50:56 +01:00
Maciej Smyk
448b4bb838 DOCS: Proofreading Samples Python - 22.2 (#14168)
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: totoka-intel <107121967+totoka-intel@users.noreply.github.com>
2022-12-14 10:50:29 +01:00
Maciej Smyk
179cb63b00 DOCS: Proofreading Samples C++ for 22.2 (#14184) 2022-12-14 10:49:29 +01:00
Maciej Smyk
608d002402 DOCS: Proofreading OpenVINO Extensibility for 22.2 (#14032) 2022-12-13 11:19:14 +01:00
Sebastian Golebiewski
ec21e6906b porting #13917 (#14577)
This pull request introduces a significant rewrite of the Get Started page. The rewrites re-organize the content to add a learning path for new users and provide more links to tutorials and features.

Details:
The same HTML and CSS code is used for the top portion of the page to create the three blue display blocks. Markdown is used to implement the rest of the page.
2022-12-13 15:39:08 +08:00
Maciej Smyk
961abdb0b7 FaceNet (#13792) 2022-12-12 13:12:09 +01:00
Maciej Smyk
bf02b11a63 lm_1b (#13785) 2022-12-12 13:10:19 +01:00
Maciej Smyk
673d61126f NCF_start images (#13788) 2022-12-12 13:07:09 +01:00
Maciej Smyk
79cf494d27 image-fix (#13861) 2022-12-12 13:04:27 +01:00
Maciej Smyk
993857a52b DOCS: Cutting Off Parts of a Model - graph fix for 22.2 (#13853)
* image fix
2022-12-12 11:21:21 +01:00
Sebastian Golebiewski
af945e4913 revert data type compression parameter (#14495) 2022-12-09 14:34:07 +08:00
Sebastian Golebiewski
850b88983f DOCS: Updating 'Create a YOCTO image' article - porting #14130 for 22.2 (#14247)
* Porting #14130

Porting
https://github.com/openvinotoolkit/openvino/pull/14130

This PR addresses the https://jira.devtools.intel.com/browse/CVS-75090 ticket in Jira.
Installation steps in the article have been updated, a troubleshooting section and additional resources have been added.

* reverting the steps

Reverting the installation steps to previous order.
Emphasizing that Step 2 is an example to create the minimal image.

* correcting numbering
2022-12-07 08:38:09 +08:00
Karol Blaszczak
332d4d3b69 Docs reword model support port 22.2 (#14440)
port from master
* Update supported_model_formats.md
2022-12-06 17:29:01 +01:00
Sebastian Golebiewski
f169440f83 DOCS: Updating Readme.md—Post merge port of #13252 for 22.2 (#13474)
* Updating Readme.md - Post merge port of #13252 for 22.2

Applying post merge changes from #13252:

https://github.com/openvinotoolkit/openvino/pull/13252

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

* Updating links

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-06 17:17:58 +04:00
Sebastian Golebiewski
ef97282841 Fixing Python API links (#14423)
Fixing the reference to Python API.
2022-12-06 11:55:14 +01:00
Sebastian Golebiewski
49afa6bb06 DOCS: Edits to Basic OpenVINO Workflow page - porting #13807 to 22.2 (#14402)
* Update get_started_demos.md

docs: Update intro and prerequisites
docs: Update Steps 1 - 3
docs: Re-organize CPU, GPU, MYRIAD examples
docs: Change examples header
docs: revise Other Demos/Samples section
docs: Change OpenVINO Runtime install links
docs: Update intro and prerequisites
docs: Update Steps 1 - 3
docs: Re-organize CPU, GPU, MYRIAD examples
docs: Change examples header
docs: revise Other Demos/Samples section
docs: Change OpenVINO Runtime install links
docs: Update intro and prerequisites
docs: Update Steps 1 - 3
docs: Re-organize CPU, GPU, MYRIAD examples
docs: Change examples header
docs: revise Other Demos/Samples section
docs: Change OpenVINO Runtime install links
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
docs: edit OpenVINO Runtime section
docs: add link to build from source
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
Update docs/get_started/get_started_demos.md
docs: change Basic OpenVINO Workflow in toctree
docs: minor edit to OpenVINO Dev Tools section
docs: edit Build Samples section
docs: change Prerequisites section header levels
docs: edits to Step 1
docs: remove links to OMZ Demos build instructions
docs: fix links, remove "the"s , TMs, and *s
Apply suggestions from code review
Update get_started_demos.md
Update get_started_demos.md
Update get_started_demos.md

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>
Co-Authored-By: Karol Blaszczak <karol.blaszczak@intel.com>

* Update googlenet-v3_asymmetric.json

Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-12-06 14:47:39 +08:00
Karol Blaszczak
f303df8a63 reintroduce benchmarks for ovms (#14271)
Recreate performance_benchmarks_ovms.md and add it to TOC of performance_benchmarks.md
graph files updated accordingly
2022-12-02 08:36:55 +01:00
Sebastian Golebiewski
1a3a3e89ec Fixing links to API (#14253)
Addressing:
https://jira.devtools.intel.com/browse/CVS-96910

Fixing links to API
2022-11-29 12:50:45 +08:00
Karol Blaszczak
81cb88b6c5 Update OV_flow_optimization_hvr.svg (#14182) 2022-11-23 09:57:51 +01:00
Maciej Smyk
4da2c945d6 tf_openvino (#13780) 2022-11-15 08:03:09 +01:00
Yuan Xu
e57005afcb Install raspbian updates 22/2 (#13798)
* update raspbian installation

* fix formatting

* update

* update unlink command

* update the architecture
2022-11-14 16:54:14 +08:00
Yuan Xu
4dbdba1ac3 update GPU config with info about install_NEO_OCL_driver.sh (#13839)
* update

* Update configurations-for-intel-gpu.md
2022-11-14 16:49:50 +08:00
Sebastian Golebiewski
c6e7336118 DOCS: Fixing a list in Model Optimizer Extensibility for 22.2 (#13350)
A minor fix that creates a list of main transformations responsible for a layout change.
2022-11-09 17:38:52 +01:00
Sebastian Golebiewski
ab52ba5efd DOCS: Fixing formatting in Multi Device for 22.2 (#13296)
Porting:
https://github.com/openvinotoolkit/openvino/pull/13292
2022-11-09 17:02:49 +01:00
Sebastian Golebiewski
8be1ae96bc Updating links in Model Optimization Guide (#13300)
Adding a link to Model Optimizer.
2022-11-09 16:58:54 +01:00
Sebastian Golebiewski
e829bfd858 Fixing indentation in General Optimizations for 22.2 (#13302)
A minor fix that corrects indentation of snippets.
2022-11-09 16:54:17 +01:00
Maciej Smyk
c6b0b9c255 DOCS: Model Conversion Tutorials fix for 22.2 (#13423) 2022-11-09 16:19:54 +01:00
Sebastian Golebiewski
b00cbf59cb DOCS: Language-agnostic version of 'Changing Input Shapes' - for 22.2 (#13813)
Removing the 'global' tabs and preparing a language-agnostic version of the article. Replacing png image with a scalable svg file. Proofreading the article.
2022-11-09 15:29:42 +01:00
Maciej Smyk
1e2c657895 DOCS: Edits to streamline Install OpenVINO Overview Page - Port from master (#13830)
* 13156

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update installing-model-dev-tools.md

* dev-tools-13820

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-04 11:33:13 +03:00
Yuan Xu
7c78f17438 Docs: Update "Install OpenVINO Runtime on Windows from Archive File" page (#13332) (#13846)
* docs: big update to Windows archive install steps

* docs: apply correct note format

* docs: add link to archives

* docs: minor update

* docs: change archive download link to GitHub

* Update docs/install_guides/installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* docs: typo fix

* docs: minor change

* docs: remove "For Python developers" in Software tab

* docs: fix curl command

* docs: clarify that archive install is for C++ users

* docs: add link to PyPI page

* docs: Change back to numbered instructions

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Apply suggestions from code review

* Update installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update docs/install_guides/installing-openvino-from-archive-windows.md

* Update installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-11-04 16:32:27 +08:00
Maciej Smyk
737319b6a0 DOCS: update for consistent usage of OpenVINO Runtime - Port to master (#13829)
* 13154

* Update docs/install_guides/installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-macos-header.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-04 05:21:44 +03:00
Maciej Smyk
cd74d8c668 Update installing-openvino-pip.md (#13827) 2022-11-04 10:03:52 +08:00
Yuan Xu
5e3f0720cd Docs: Update "Install OpenVINO Runtime on Linux from Archive File" page (#13345) (#13824) 2022-11-03 21:28:52 +08:00
Yuan Xu
140cf689a2 Docs: Update "Install OpenVINO Runtime on macOS from Archive File" page (#13347) (#13825)
* docs: Update intro and step 1

* docs: finish updates

* docs: fix duplicate section

* docs: fix curl command

* docs: clarify archive install is for C++ users

* docs: add link to PyPI install page

* docs: minor fixes

* docs: add link to Release Notes

* docs: Change back to numbered instructions

* docs: typo fix

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Apply suggestions from code review

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

* Update docs/install_guides/installing-openvino-from-archive-macos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
2022-11-03 21:28:37 +08:00
Maciej Smyk
73fe0afe3e DOCS: Rewrite "Install OpenVINO Development Tools" page - port to 22.2 (#13820)
* Update installing-model-dev-tools.md

* what's next update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-11-03 15:29:17 +03:00
Karol Blaszczak
b0c7a05d24 adjust footer content to meet legal requirement (#13776) 2022-11-02 11:24:47 +01:00
Sebastian Golebiewski
31a7187ccb DOCS: Fixing indentation note local distribution for 22.2 (#13338)
A minor fix that corrects the indentation of the note admonition.
2022-11-02 11:15:19 +01:00
Sebastian Golebiewski
894b501bce DOCS: Fix pot best practices for 22.2 (#13301)
A minor fix that corrects the ordered list.
2022-11-02 11:12:08 +01:00
Maciej Smyk
d78bcbe150 DOCS: Fix OpenVINO Deep Learning Workbench Overview (#13238) 2022-11-02 10:17:50 +01:00
Maciej Smyk
897dff88a7 DOCS: API 2.0 Inference Pipeline Fix for 22.2 (#13337)
* Update common_inference_pipeline.md
2022-11-02 10:16:05 +01:00
Sebastian Golebiewski
de3cdf1067 DOCS: Fixing snippet in Optimization for Throughput - for 22.2 (#13341)
A minor fix that corrects the code snippets.
2022-11-02 10:11:37 +01:00
Maciej Smyk
65ac02865f DOC: Fix for archive installation docs for 22.2 (#13297)
* Fix

* Update docs/install_guides/installing-openvino-from-archive-linux.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* fix-repository

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-11-02 17:04:45 +08:00
Sebastian Golebiewski
01822ff343 Fixing list in Deployment Manager Tool - for 22.2 (#13343)
A minor fix that corrects the numbered list of steps in Deploying Package on Target Systems
2022-11-02 10:00:36 +01:00
Maciej Smyk
59e2f86f9b Model-Formats-fix (#13352)
Standardized Additional Resources section for Supported Model Formats along with fixing some ref links in articles.
2022-11-02 09:58:35 +01:00
Maciej Smyk
d966611e28 DOCS: Model Optimizer Usage fix for 22.2 (#13453)
* ref link fix
2022-11-02 09:56:13 +01:00
Sebastian Golebiewski
d1d95ff5fc Fixing indentation in Build Plugin Using CMake (#13467)
Minor fixes that correct indentation of code snippets and note admonitions.

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
2022-11-02 09:51:17 +01:00
Sebastian Golebiewski
28ee91fa46 DOCS: Replace image in Protecting Model Guide - for 22.2 (#13629)
Changing the png image to scalable svg format.
2022-11-02 09:47:42 +01:00
Sebastian Golebiewski
d771eb44f4 DOCS: Updating the large_batch_approach.svg image (#13703)
* Updating the Large Batch Approach.svg image
2022-10-28 08:24:43 +02:00
Sebastian Golebiewski
27b76dba44 DOCS: Improving Readability of Further Low-Level Implementation Details - for 22.2 (#13520)
* Improving Readability of Further Low-Level Implementation Details

The changes include recreation of the graphics to improve the readability of the article. Minor proofreading corrections have been applied as well.
2022-10-27 15:46:11 +02:00
Maciej Smyk
7be33a6079 DOCS: Security Add-on directive fix for 22.2 (#13259)
* Update ovsa_get_started.md
2022-10-27 08:25:59 +02:00
Karol Blaszczak
d437958466 DOCS-diagram_fix for 22.2 (#13634) 2022-10-25 16:32:26 +02:00
Sebastian Golebiewski
9d6b193201 Fixing indentation in Plugin Testing - for 22.2 (#13375)
A minor fix that corrects indentation of snippets.
2022-10-25 12:47:48 +02:00
Maciej Smyk
a57e3a9697 DOCS: Fix for Runtime Inference - 22.2 (#13549)
Fixed the following issues:

* Switching between C++ and Python docs for "Shape Inference"
* Removed repetitions
* Quote background in bullet list at the beginning of "Multi-device execution"
* Broken note directives
* Fixed video player size in "Inference with OpenVINO Runtime"
* Standardized Additional Resources throughout Runtime Inference
2022-10-25 12:42:23 +02:00
Yuan Xu
fe1954aa25 update troubleshooting parent page (#13228)
* update wording
2022-10-25 12:25:38 +02:00
Sebastian Golebiewski
bc8582469e DOCS: Update GPU_Extensibility.md - for 22.2 (#13404)
Minor fixes, including indentation of code snippet and removing unordered list in Debugging Tips.
2022-10-18 18:24:23 +02:00
Sebastian Golebiewski
4795a4ac4a DOCS: Updating link to OMZ Demos (#13256)
* DOCS: Updating link to OMZ Demos

Changing the version of the docs to which link directs.

* Update integrate_with_your_application.md
2022-10-17 10:15:55 +02:00
Karol Blaszczak
33960aa4e8 DOCS - benchmarks table update 22.2 (#13437)
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
2022-10-13 07:18:49 +04:00
Sebastian Golebiewski
cad5b795f8 DOCS: Fixing Readme for 2022.2 (#13239)
Minor linguistic corrections and fixing links
2022-10-12 19:04:13 +02:00
Maciej Smyk
1ed2e8b156 DOCS: NNCF Fix for 22.2 (#13215) 2022-10-12 17:08:19 +02:00
Haiqi Pan
e4d599713a Correct the GPU number that triggers the CPU removal in CTPUT (#13278) 2022-10-12 15:00:38 +08:00
Sebastian Golebiewski
b5562c4ddf DOCS: Fixing version selector dropdown for 22.2 (#13241)
* DOCS: Fixing version selector dropdown for 22.2

Fixing the version selector dropdown, to avoid horizontal scrollbar and trimming text.

Porting:
https://github.com/openvinotoolkit/openvino/pull/13187

* Adding overflow

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-09-28 16:17:43 +04:00
Sebastian Golebiewski
3c9745990c DOCS: Fixing math equation for 22.2 (#13211)
A small fix for a math equation that was not rendered properly.

Porting:
https://github.com/openvinotoolkit/openvino/pull/13210
2022-09-28 15:36:32 +04:00
Karol Blaszczak
128e950d49 Update ov_chart.png (#13192)
Update ov_chart.png
2022-09-28 08:43:05 +02:00
Karol Blaszczak
f4c8920cf3 Update performance_int8_vs_fp32.md (#13191) 2022-09-28 08:32:19 +02:00
Yuan Xu
67934ce37e Fix a link for install archive pages (#13230)
* update links

* update for test

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-09-27 21:48:53 +04:00
Karol Blaszczak
d183c1ca44 DOCS-homepage-adjustment (#13208)
* DOCS-homepage-adjustment

adjustment in 2 images

* Update docs/Documentation/deployment_guide_introduction.md
2022-09-27 14:30:38 +04:00
Yuan Xu
f43b1ef805 update linux section (#13123) 2022-09-26 13:30:10 +04:00
Wang Wangwang
dc8fcaf6e2 Docs: Update the doc on how to manually set operations affinity & fix some spelling errors (#13204)
* Docs: Fix spelling errors

* Docs: Update the doc on how to manually set operations affinity
2022-09-26 10:39:41 +04:00
Maciej Smyk
c8c5c2eb14 Update Extending_Model_Optimizer_with_Caffe_Python_Layers.md (#13142) 2022-09-23 01:43:51 +04:00
Trawinski, Dariusz
7f01b0a8eb test ipython extension (#13175) 2022-09-22 21:13:21 +04:00
Karol Blaszczak
0b5cd796d4 Docs benchmarks table update (#13174)
* update table

* remove mask rcnn resnet
2022-09-22 21:03:11 +04:00
Sebastian Golebiewski
f361cc2d6b DOCS: NNCF documentation for 22.2 (#13173)
* Updating NNCF documentation

* nncf-doc-update-ms

* Adding python files

* Changing ID of Range Supervision

* Minor fixes

Fixing formatting and renaming ID

* Proofreading

Minor corrections and removal of Neural Network Compression Framework article

Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
2022-09-22 21:01:55 +04:00
Karol Blaszczak
2e8acae6f2 Docs benchmarks page update port22.2 (#13165)
* update page and benchmark config data

benchmarks articles
update data tables
delete image

* hide / remove ovms benchmarks page

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-09-22 17:04:45 +04:00
Ilya Churaev
ea92b38c44 Test change (#13169)
* Test change

* New change

* Disabled docs for linux

* Added new file to check

* Try to fix CI

* Additional try

* Remove redundant change

* Fixed configuration

* Enabled for .ci changes

* Revert "Added new file to check"

This reverts commit da05ad4bd4.

* Revert "Test change"

This reverts commit 6f670d6112.

* Revert "New change"

This reverts commit efeccd6537.
2022-09-22 16:04:08 +04:00
Ilya Churaev
20d2477124 Update CI trigger rules (#13167)
* Update CI trigger rules

* Code test change

* Revert "Code test change"

This reverts commit 086bde7ca8.

* Test change

* Fixed CI

* Revert "Test change"

This reverts commit c72c9077cd.
2022-09-22 15:13:17 +04:00
Karol Blaszczak
356289adc1 test for build failures (#13145)
the notebooks link seems to be breaking documentation builds
2022-09-21 19:20:05 +04:00
Sebastian Golebiewski
3717201e99 DOCS: New Tutorials homepage for 22.2 version (#13080)
* DOCS: New Tutorials homepage for 22.2 version

Updating tutorials homepage and including notebooks generated on 13.09.2022:

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/

* Update requirements.txt

* Update requirements.txt

* Update notebooks-installation.md

* Update tutorials.md
2022-09-21 11:31:22 +04:00
Maciej Smyk
5bf210cbea DOCS: Fix OpenVINO Extensibility for 22.2 (#13108)
* Extensibility-fix

* Extensibility-fix-2

* Update Customize_Model_Optimizer.md

* Update Customize_Model_Optimizer.md
2022-09-20 15:07:53 +04:00
Maciej Smyk
6835565610 Media-Processing (#13086) 2022-09-20 15:07:42 +04:00
Sebastian Golebiewski
2d2af81a08 DOCS: Fix in Protecting Model (#13109)
* DOCS: Fix in Protecting Model

A small fix for a not working reference link to the schematic in "Experimental: Protecting Deep Learning Model " article

* Update README.md
2022-09-19 23:23:49 +04:00
Sebastian Golebiewski
f08632615c DOCS: Fixing formatting in Samples (#13085)
Fixing incorrectly numbered lists and indentation of code blocks.
2022-09-19 23:20:12 +04:00
Maciej Smyk
db49a6b662 Update ovsa_get_started.md (#13087) 2022-09-19 18:40:07 +04:00
Sebastian Golebiewski
b5db7ec6b1 DOCS: Fixing Model Representation for 22.2 (#13088)
* DOCS: Fixing Model Representation for 22.2

Fixing the snippets in tabs.

A follow up of:
https://github.com/openvinotoolkit/openvino/pull/12495/

* Update model_representation.md

Changing "See Also" to "Additional Resources"

* Update model_representation.md

* Update model_representation.md

* Update model_representation.md

* Update model_representation.md
2022-09-19 18:39:02 +04:00
Sebastian Golebiewski
12ca62bed5 DOCS: Fixing broken link to PaddleClas for 22.2 (#13101)
A small fix for a broken link to PaddleClas
2022-09-19 18:38:54 +04:00
Sebastian Golebiewski
465f19ae60 DOCS: Fixing missing heading in Infer Request for 22.2 (#13103)
Fixing missing "Examples of Infer Request Usages" heading.
2022-09-19 18:38:34 +04:00
Sebastian Golebiewski
4066218fa0 DOCS: Fixing note admonition in Pytorch Convert RNNT for 22.2 (#13104)
A small fix for a broken note admonition in "Converting a PyTorch RNN-T Model" article.
2022-09-19 18:38:13 +04:00
Sebastian Golebiewski
9c652198f0 DOCS: Fixing link to General Optimizations article for 22.2 (#13106)
A small fix for a broken link.
2022-09-19 18:35:35 +04:00
Sebastian Golebiewski
8856b95234 DOCS: Fixing link to MobileNetV1 FPN for 22.2 (#13107)
* DOCS: Fixing link to MobileNetV1 FPN for 22.2

A small fix for a broken link to MobileNetV1 FPN model in "Quantizing Object Detection Model with Accuracy Control" article

* Update README.md

Fixing broken code block.
2022-09-19 18:34:39 +04:00
Maciej Smyk
944c8b7fb5 Update openvino_ecosystem.md (#13098) 2022-09-19 18:33:51 +04:00
Yuan Xu
5792a4a6df Fix language switcher for 22/2 (#13076)
* port fix from master

* Revert "port fix from master"

This reverts commit 903abd946a.

* Revert "Revert "port fix from master""

This reverts commit 63e1e944a0.
2022-09-19 16:50:46 +04:00
Karol Blaszczak
8fa3b23c6d DOCS-doc_structure_step_2 - recreated (#13082)
* DOCS-doc_structure_step_2

- adjustments to the previous change based on feedback
- changes focusing on ModelOptimizer section to mitigate the removal of ONNX and PdPd articles

* remove 2 files we brought back after 22.1
2022-09-19 16:50:29 +04:00
Sebastian Golebiewski
d83741f433 Porting: Change notebooks fetching link for documentation #12750 (#13046)
* Porting: Change notebooks fetching link for documentation

Porting:

#12750

There are newly generated files (since 30.08.2022) that seem to be fine, but apparently "latest" is not built in the docs:

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/

The question remains, why is it so?

* Update consts.py

Updating to the most recent version from 13.09

https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220913220807/dist/rst_files/
2022-09-19 16:48:29 +04:00
Karol Blaszczak
9a0a0c4e2c [DOC][CPU] Updated CPU supported i/o precisions map (#12839) (#13077)
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
2022-09-19 16:48:14 +04:00
Karol Blaszczak
37a0278204 change 2 images for asynch mode (#13071)
changing two screenshots in "general optimizations" to one comparison csv image
2022-09-19 16:47:56 +04:00
Karol Blaszczak
48ea77df85 change hello reshape ssd sample port to 22.2 (#12657) (#13068)
* change hello reshape ssd sample (#12657)

ssdlite_mobilenet_v2 changed to mobilenet-ssd, as per J. Espinoza's request, to fix 84516

* one more correction of mobilnet
2022-09-19 16:47:15 +04:00
Karol Blaszczak
4451ef7d42 Math equation in POT - port to 22.2 (#13067) 2022-09-19 16:46:49 +04:00
Sebastian Golebiewski
427900eca7 DOCS-homepage-restyling-pt1-to-22.2 (#12300)
Porting to 2022.2 branch
2022-09-19 16:46:41 +04:00
yanlan song
72c3bf222b fix coredump when quit benchmark_app (#13026)
* fix coredump when quit benchmark_app

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* add macro to handle CPU not built

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
2022-09-15 16:47:11 +08:00
Artyom Anokhov
80f1677c2c Updating archive names in qsg (#12927) 2022-09-15 10:36:37 +02:00
Roman Kazantsev
7d184040eb [Frontend, TF FE] Fix RTTI for ConversionExtension on MacOS (#13039)
* [Frontend, TF FE] Fix RTTI for ConversionExtension on MacOS

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Put only destructor into cpp

* Remove extra white-space

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-09-15 07:47:20 +03:00
Yuan Xu
caaef49639 add troubleshooting item for PRC users (#12908)
* add troubleshooting item for PRC users

* updates

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* add trusted host back

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-09-14 12:17:38 +04:00
368 changed files with 6424 additions and 10155 deletions

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:
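The same broadening of the `paths` filter — from the single glob `docs/*` to `docs/`, `/**/docs/*`, and `/**/*.md` — repeats across the pipeline files below. A plausible motivation can be sketched in a few lines of Python: if `*` does not cross path segments, the old pattern never matches files nested under `docs/`. The segment-limited `*` semantics (and the dropped leading slash) are assumptions for this illustration, not the documented Azure Pipelines wildcard rules.

```python
import re

def seg_glob(pattern):
    # Assumed semantics for this sketch: '*' stays within one path segment,
    # '**' may cross '/'. Not the official Azure Pipelines specification.
    regex, i = "", 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            regex += ".*"
            i += 2
        elif pattern[i] == "*":
            regex += "[^/]*"
            i += 1
        else:
            regex += re.escape(pattern[i])
            i += 1
    return re.compile("^" + regex + "$")

old = seg_glob("docs/*")    # the single pattern the old filter used
new = seg_glob("**/*.md")   # one of the broader replacement patterns

print(bool(old.match("docs/index.md")))        # True
print(bool(old.match("docs/ops/convert.md")))  # False: nested docs slip through
print(bool(new.match("docs/ops/convert.md")))  # True
```

Under this reading, the extra patterns close the gap for nested `docs/` folders and for markdown files anywhere in the tree.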

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: LinCC

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: OpenVINO_ONNX_CI

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: onnxruntime

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:

View File

@@ -4,8 +4,21 @@ trigger:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
resources:
repositories:

View File

@@ -1,11 +1,24 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/
- /**/docs/*
- /**/*.md
pr:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
exclude:
- docs/
- /**/docs/*
- /**/*.md
jobs:
- job: WinCC

1
.gitattributes vendored
View File

@@ -64,6 +64,7 @@
*.gif filter=lfs diff=lfs merge=lfs -text
*.vsdx filter=lfs diff=lfs merge=lfs -text
*.bmp filter=lfs diff=lfs merge=lfs -text
*.svg filter=lfs diff=lfs merge=lfs -text
#POT attributes
tools/pot/tests/data/test_cases_refs/* filter=lfs diff=lfs merge=lfs -text
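The LFS migration commits above ("Migrate SVG files under LFS", "Added SVG files to lfs") hinge on this one-line `.gitattributes` rule. As a rough illustration of how such a rule routes files through LFS — a deliberately simplified matcher, not git's actual attribute engine:

```python
from fnmatch import fnmatch

def lfs_patterns(gitattributes_text):
    """Collect the glob patterns whose attributes route files through Git LFS."""
    patterns = []
    for line in gitattributes_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments such as '#POT attributes'
        pattern, *attrs = line.split()
        if "filter=lfs" in attrs:
            patterns.append(pattern)
    return patterns

def is_lfs_tracked(path, patterns):
    # Simplified: real git matches relative to the .gitattributes location.
    return any(fnmatch(path, p) for p in patterns)

attributes = """\
*.gif filter=lfs diff=lfs merge=lfs -text
*.svg filter=lfs diff=lfs merge=lfs -text
#POT attributes
tools/pot/tests/data/test_cases_refs/* filter=lfs diff=lfs merge=lfs -text
"""

print(is_lfs_tracked("docs/img/diagram.svg", lfs_patterns(attributes)))  # True
```

With the added `*.svg` line, any SVG committed afterwards is stored as an LFS pointer rather than as blob content.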

View File

@@ -2,7 +2,7 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Stable release](https://img.shields.io/badge/version-2022.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
@@ -34,24 +34,24 @@ OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from [Open Model Zoo], along with 100+ open
source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
### Components
* [OpenVINO™ Runtime] - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
* [core](https://github.com/openvinotoolkit/openvino/tree/master/src/core) - provides the base API for model representation and modification.
* [inference](https://github.com/openvinotoolkit/openvino/tree/master/src/inference) - provides an API to infer models on device.
* [transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/low_precision_transformations) - contains the set of transformations which are used in low precision models
* [bindings](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings) - contains all awailable OpenVINO bindings which are maintained by OpenVINO team.
* [c](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/c) - provides C API for OpenVINO™ Runtime
* [python](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](https://github.com/openvinotoolkit/openvino/tree/master/src/plugins) - contains OpenVINO plugins which are maintained in open-source by OpenVINO team. For more information please taje a look to the [list of supported devices](#supported-hardware-matrix).
* [Frontends](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends) - contains available OpenVINO frontends which allow to read model from native framework format.
* [core](./src/core) - provides the base API for model representation and modification.
* [inference](./src/inference) - provides an API to infer models on the device.
* [transformations](./src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](./src/common/low_precision_transformations) - contains the set of transformations that are used in low precision models
* [bindings](./src/bindings) - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
* [c](./src/bindings/c) - C API for OpenVINO™ Runtime
* [python](./src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](./src/plugins) - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the [list of supported devices](#supported-hardware-matrix).
* [Frontends](./src/frontends) - contains available OpenVINO frontends that allow reading models from the native framework format.
* [Model Optimizer] - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
* [Post-Training Optimization Tool] - is designed to accelerate the inference of deep learning models by applying special methods without model retraining or fine-tuning, for example, post-training 8-bit quantization.
* [Samples] - applications on C, C++ and Python languages which shows basic use cases of OpenVINO usages.
* [Samples] - applications in C, C++ and Python languages that show basic OpenVINO use cases.
## Supported Hardware matrix
@@ -69,37 +69,37 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
<tr>
<td>VPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="./src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td>Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X</td>
</tr>
</tbody>
</table>
Also OpenVINO™ Toolkit contains several plugins which should simplify to load model on several hardware devices:
OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
<table>
<thead>
<tr>
@@ -110,23 +110,23 @@ Also OpenVINO™ Toolkit contains several plugins which should simplify to load
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
</tbody>
@@ -140,11 +140,11 @@ By contributing to the project, you agree to the license and copyright terms the
### User documentation
The latest documentation for OpenVINO™ Toolkit is availabe [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all important information which could be needed if you create an application which is based on binary OpenVINO distribution or own OpenVINO version without source code modification.
The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all the important information you may need to create an application based on binary OpenVINO distribution or own OpenVINO version without source code modification.
### Developer documentation
[Developer documentation](#todo-add) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
[Developer documentation](./docs/dev/index.md) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
## Tutorials
@@ -161,15 +161,15 @@ The list of OpenVINO tutorials:
## System requirements
The full information about system requirements depends on platform and is available on dedicated pages:
- [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_raspbian.html)
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
Please take a look to [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about OpenVINO build process.
See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.
## How to contribute
@@ -177,13 +177,13 @@ See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Get a support
Please report questions, issues and suggestions using:
Report questions, issues, and suggestions using:
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
## See also
## Additional Resources
* [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
* [OpenVINO Storage](https://storage.openvinotoolkit.org/)
@@ -194,15 +194,15 @@ Please report questions, issues and suggestions using:
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - An alternative, web-based version of OpenVINO designed to make production of pretrained deep learning models significantly easier.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [DL Workbench](https://docs.openvino.ai/2022.2/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
---
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2022.2/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2022.2/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples

View File

@@ -2,7 +2,10 @@
Once you have a model that meets both OpenVINO™ and your requirements, you can choose among several ways of deploying it with your application:
* [Run inference and develop your app with OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model online with the OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model with OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
* [Deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
> **NOTE**: [Running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper Inference configuration.

View File

@@ -13,99 +13,3 @@
@endsphinxdirective
Deep Learning Workbench (DL Workbench) is an official OpenVINO™ graphical interface designed to make the production of pretrained deep learning Computer Vision and Natural Language Processing models significantly easier.
Minimize the inference-to-deployment workflow timing for neural models right in your browser: import a model, analyze its performance and accuracy, visualize the outputs, optimize and make the final model deployment-ready in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, providing the opportunity to learn about various toolkit components.
![](../img/openvino_dl_wb.png)
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
:type: ref
:text: Run DL Workbench in Intel® DevCloud
:classes: btn-primary btn-block
@endsphinxdirective
DL Workbench enables you to get a detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
DL Workbench also provides the [JupyterLab environment](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Jupyter_Notebooks.html#doxid-workbench-docs-workbench-d-g-jupyter-notebooks) that helps you quick start with OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO workflow created for your model and learn about different toolkit components.
## Video
@sphinxdirective
.. list-table::
* - .. raw:: html
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
src="https://www.youtube.com/embed/on8xSSTKCt8">
</iframe>
* - **DL Workbench Introduction**. Duration: 1:31
@endsphinxdirective
## User Goals
DL Workbench helps achieve your goals depending on the stage of your deep learning journey.
If you are a beginner in the deep learning field, the DL Workbench provides you with
learning opportunities:
* Learn what neural networks are, how they work, and how to examine their architectures.
* Learn the basics of neural network analysis and optimization before production.
* Get familiar with the OpenVINO™ ecosystem and its main components without installing it on your system.
If you have enough experience with neural networks, DL Workbench provides you with a
convenient web interface to optimize your model and prepare it for production:
* Measure and interpret model performance.
* Tune the model for enhanced performance.
* Analyze the quality of your model and visualize output.
## General Workflow
The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:
![](../img/openvino_dl_wb_diagram_overview.svg)
Get a quick overview of the workflow in the DL Workbench User Interface:
![](../img/openvino_dl_wb_workflow.gif)
## OpenVINO™ Toolkit Components
The intuitive web-based interface of the DL Workbench enables you to easily use various
OpenVINO™ toolkit components:
Component | Description
|------------------|------------------|
| [Open Model Zoo](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| Get access to the collection of high-quality pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [Intel-trained](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html) models trained to resolve a variety of different tasks.
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) |Optimize and transform models trained in supported frameworks to the IR format. <br>Supported frameworks include TensorFlow\*, Caffe\*, Kaldi\*, MXNet\*, and ONNX\* format.
| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| Estimate deep learning model inference performance on supported devices.
| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html)| Evaluate the accuracy of a model by collecting one or several metric values.
| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| Optimize pretrained models with lowering the precision of a model from floating-point precision(FP32 or FP16) to integer precision (INT8), without the need to retrain or fine-tune models. |
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
:type: ref
:text: Run DL Workbench in Intel® DevCloud
:classes: btn-outline-primary
@endsphinxdirective
## Contact Us
* [DL Workbench GitHub Repository](https://github.com/openvinotoolkit/workbench)
* [DL Workbench on Intel Community Forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit)
* [DL Workbench Gitter Chat](https://gitter.im/dl-workbench/general?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&content=body)

View File

@@ -0,0 +1,24 @@
# Inference Modes {#openvino_docs_Runtime_Inference_Modes_Overview}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_OV_UG_supported_plugins_AUTO
openvino_docs_OV_UG_Running_on_multiple_devices
openvino_docs_OV_UG_Hetero_execution
openvino_docs_OV_UG_Automatic_Batching
@endsphinxdirective
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
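As a toy illustration of the idea behind the automated modes — not the actual AUTO plugin logic, and the priority ordering below is assumed for the sketch only — device selection can be pictured as walking a priority list of the devices available on the machine:

```python
# Illustrative only: a toy priority-based selection, not the real AUTO plugin.
DEVICE_PRIORITY = ["GPU", "GNA", "CPU"]  # assumed ordering for this sketch

def auto_select(available_devices, priority=DEVICE_PRIORITY):
    """Return the first device from the priority list that is available."""
    for device in priority:
        if device in available_devices:
            return device
    raise RuntimeError("no supported device available")

print(auto_select(["CPU", "GPU"]))  # GPU
print(auto_select(["CPU"]))         # CPU
```

The real modes go further — MULTI runs the same model on several devices in parallel, HETERO splits one model between devices, and Auto-batching groups requests — but the common theme is that the runtime, not the application, makes the placement decision.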

View File

@@ -2,11 +2,21 @@
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [Browse a database of models for use in your projects](../model_zoo.md).
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows converting them to its own format, OpenVINO IR, providing a tool dedicated to this task.
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [altering input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md) and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
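The decision described above — convert to OpenVINO IR with Model Optimizer, or feed ONNX/PaddlePaddle models to OpenVINO Runtime directly — can be sketched as a simple check. The extension-based heuristic (and the `.pdmodel` suffix for PaddlePaddle) is an illustration, not an official API:

```python
from pathlib import Path

# Formats the text says OpenVINO Runtime can read without prior conversion.
DIRECT_IMPORT = {".onnx", ".pdmodel"}

def needs_model_optimizer(model_path):
    """Rough sketch of the conversion decision; not an official OpenVINO API."""
    suffix = Path(model_path).suffix.lower()
    if suffix == ".xml":
        return False  # already OpenVINO IR
    return suffix not in DIRECT_IMPORT

print(needs_model_optimizer("model.onnx"))        # False: read directly
print(needs_model_optimizer("frozen_graph.pb"))   # True: convert to IR first
print(needs_model_optimizer("model.xml"))         # False: already IR
```

Even when direct import is possible, converting to IR remains the default choice, since the IR form unlocks the full set of OpenVINO features and downstream tools such as the Post-Training Optimization Tool.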

View File

@@ -16,7 +16,7 @@ More resources:
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
More resources:
* [Documentation](@ref docs_nncf_introduction)
* [Documentation](@ref tmo_introduction)
* [GitHub](https://github.com/openvinotoolkit/nncf)
* [PyPI](https://pypi.org/project/nncf/)
@@ -25,7 +25,7 @@ A solution for Model Developers and Independent Software Vendors to use secure p
More resources:
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
* [GitHub]https://github.com/openvinotoolkit/security_addon)
* [GitHub](https://github.com/openvinotoolkit/security_addon)
### OpenVINO™ integration with TensorFlow (OVTF)
@@ -40,7 +40,7 @@ More resources:
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/dlstreamer_gst/)
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
@@ -61,7 +61,7 @@ More resources:
An online, interactive video and image annotation tool for computer vision purposes.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/cvat/docs/)
* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
* [web application](https://cvat.org/)
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
* [GitHub](https://github.com/openvinotoolkit/cvat)


@@ -1,6 +1,6 @@
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
To enable operations not supported by OpenVINO out of the box, you may need an extension for an OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
To enable operations not supported by OpenVINO out of the box, you may need an extension for OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
@@ -8,7 +8,6 @@ There are two options for using the custom operation configuration file:
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
@sphinxtabset
@sphinxtab{C++}
@@ -31,7 +30,7 @@ $ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validati
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the `CustomLayer` type for every custom operation you provide.
with a node of the type `CustomLayer` for every custom operation you provide.
The definitions described in the sections below use the following notations:
@@ -44,44 +43,44 @@ Notation | Description
### CustomLayer Node and Sub-Node Structure
`CustomLayer` node contains the entire configuration for a single custom operation.
The `CustomLayer` node contains the entire configuration for a single custom operation.
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the OpenVINO IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
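As an illustrative sketch (not part of OpenVINO), the required `CustomLayer` attributes and sub-node counts described above can be checked with Python's standard `xml.etree` parser, using a hypothetical config fragment modeled on the example configuration file later in this article:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration fragment, modeled on the example file below.
config = """
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
  <Kernel entry="example_relu_kernel">
    <Source filename="custom_layer_kernel.cl"/>
  </Kernel>
  <Buffers>
    <Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
    <Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
  </Buffers>
</CustomLayer>
"""

layer = ET.fromstring(config)
# The three required attributes from the table above.
assert layer.get("name") == "ReLU"
assert layer.get("type") == "SimpleGPU"
assert layer.get("version") == "1"
# Exactly one Kernel and one Buffers sub-node are expected.
assert len(layer.findall("Kernel")) == 1
assert len(layer.findall("Buffers")) == 1
```

This is only a structural sanity check for the file format; OpenVINO performs its own validation when the configuration file is loaded.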
### Kernel Node and Sub-Node Structure
`Kernel` node contains all kernel source code configuration.
The `Kernel` node contains all kernel source code configuration.
**Sub-nodes**: `Source` (1+), `Define` (0+)
### Source Node and Sub-Node Structure
`Source` node points to a single OpenCL source file.
The `Source` node points to a single OpenCL source file.
| Attribute Name | \# |Description|
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. Note that the path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
| `filename` | (1) | Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
**Sub-nodes**: None
### Define Node and Sub-Node Structure
`Define` node configures a single `#&zwj;define` instruction to be added to
The `Define` node configures a single `#&zwj;define` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the IR. |
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR. |
**Sub-nodes:** None
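As a rough illustration (not OpenVINO code) of how a `Define` node becomes a JIT definition, the following hypothetical helper applies the semantics from the table above: the operation's `param` value takes precedence, and `default` is used as a fallback:

```python
def jit_define(name, param_value=None, default=None):
    # Hypothetical helper: emit the '#define' a Define node would produce.
    # The operation's parameter value wins; otherwise the default applies.
    value = param_value if param_value is not None else default
    if value is None:
        # Static constant case: 'name' may already carry the value as a string.
        return f"#define {name}"
    return f"#define {name} {value}"

print(jit_define("beta", param_value=0.1))   # #define beta 0.1
print(jit_define("alpha", default=1))        # #define alpha 1
```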
@@ -90,37 +89,37 @@ The resulting JIT has the following form:
### Buffers Node and Sub-Node Structure
`Buffers` node configures all input/output buffers for the OpenCL entry
The `Buffers` node configures all input/output buffers for the OpenCL entry
function. No buffers node structure exists.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
### Data Node and Sub-Node Structure
`Data` node configures a single input with static data, for example,
The `Data` node configures a single input with static data, for example,
weights or biases.
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to an operation in the IR |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |
| `name` | (1) | Name of a blob attached to an operation in the OpenVINO IR. |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
**Sub-nodes**: None
### Tensor Node and Sub-Node Structure
`Tensor` node configures a single input or output tensor.
The `Tensor` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB`, and same values in all lowercase. Default value: `BFYX` |
| `port-index` | (1) | 0-based index in the operation input/output ports in the OpenVINO IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in lowercase). The default value: `BFYX` |
### CompilerOptions Node and Sub-Node Structure
`CompilerOptions` node configures the compilation flags for the OpenCL
The `CompilerOptions` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
@@ -131,20 +130,20 @@ sources.
### WorkSizes Node and Sub-Node Structure
`WorkSizes` node configures the global/local work sizes to be used when
The `WorkSizes` node configures the global/local work sizes to be used when
queuing an OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global=”B*F*Y*X” local=””` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. Default value: `output` |
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. The default value: `output` |
**Sub-nodes**: None
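To make the formula semantics concrete, here is an illustrative Python sketch (not part of OpenVINO) that evaluates `global`/`local` work-size formulas such as `"B*F*Y*X"` against concrete tensor dimensions, with `/` treated as integer division as the table specifies:

```python
def eval_work_sizes(formula, dims):
    # Hypothetical evaluator for work-size formulas like "B*F*Y*X"
    # or "((Y+7)/8)*8,F,1". All operators use integer arithmetic.
    sizes = []
    for term in formula.split(","):
        expr = term.strip().replace("/", "//")  # integer division
        for name, value in dims.items():
            expr = expr.replace(name, str(value))
        sizes.append(eval(expr))
    return sizes

dims = {"B": 1, "F": 3, "Y": 10, "X": 7}
print(eval_work_sizes("B*F*Y*X", dims))          # [210]
print(eval_work_sizes("((Y+7)/8)*8,F,1", dims))  # [16, 3, 1]
```

The second formula rounds the Y dimension up to a multiple of 8, a common pattern for matching a local work-group size.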
## Example Configuration File
The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see
format. For information on the configuration file structure, see the
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
@@ -170,22 +169,22 @@ For an example, see [Example Kernel](#example-kernel).
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX` |
| `NUM_INPUTS` | Number of the input tensors bound to this kernel. |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel. |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array. |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel. |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array. |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX`. |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`. |
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor, BFYX, BYXF, YXFB , FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#&zwj;ifdef/#&zwj;endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array |
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array. |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.|
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array. |
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX. |
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array. |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
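As an illustrative Python sketch (not part of OpenVINO) of how the `<TENSOR>_PITCHES` and `<TENSOR>_OFFSET` values combine to address a single element of a BFYX tensor:

```python
def element_index(offset, pitches, b, f, y, x):
    # pitches are ordered as BFYX; offset skips the lower padding.
    pitch_b, pitch_f, pitch_y, pitch_x = pitches
    return offset + b * pitch_b + f * pitch_f + y * pitch_y + x * pitch_x

# A dense, unpadded 1x3x4x5 tensor: pitches are (3*4*5, 4*5, 5, 1), offset 0.
print(element_index(0, (60, 20, 5, 1), 0, 2, 3, 4))  # 59, the last element
```

With padding, the pitches grow to cover the padded extents and the offset points past the lower padding, but the addressing formula stays the same.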
All `<TENSOR>` values are automatically defined for every tensor
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
@@ -220,20 +219,19 @@ __kernel void example_relu_kernel(
```
> **NOTE**: As described in the previous section, all items like
> **NOTE**: As described in the previous section, all items such as the
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> OpenVINO for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
> OpenVINO for efficiency reasons. See the [Debugging
> Tips](#debugging-tips) below for information on debugging the results.
## Debugging Tips<a name="debugging-tips"></a>
* **Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, you can use `printf` in your kernels.
**Using `printf` in the OpenCL™ Kernels**.
To debug the specific values, use `printf` in your kernels.
However, be careful not to output excessively, which
could generate too much data. The `printf` buffer is limited in size, so
your output may be truncated to fit it. Also, because of
buffering, you actually get the entire buffer of output only when the
execution ends.<br>
For more information, refer to the [printf
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
For more information, refer to the [printf Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).


@@ -19,62 +19,61 @@ TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The lis
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations, that is those not included in the list, are not recognized by OpenVINO out-of-the-box. The need for a custom operation may appear in two main cases:
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for a custom operation may appear in two cases:
1. A regular framework operation that is new or rarely used, which is why it hasnt been implemented in OpenVINO yet.
1. A new or rarely used regular framework operation is not supported in OpenVINO yet.
2. A new user operation that was created for some specific model topology by a model author using framework extension capabilities.
2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO Runtime.
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations. This allows plugging in your own implementation for them. OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for Model Optimizer and OpenVINO Runtime.
Defining a new custom operation basically consist of two parts:
Defining a new custom operation basically consists of two parts:
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). How to implement execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
The first part is required for inference, the second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part, the next sections will describe them in detail.
The first part is required for inference. The second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part. The following sections will describe them in detail.
## Definition of Operation Semantics
If the custom operation can be mathematically represented as a combination of exiting OpenVINO operations and such decomposition gives desired performance, then low-level operation implementation is not required. Refer to the latest OpenVINO operation set, when deciding feasibility of such decomposition. You can use any valid combination of exiting operations. The next section of this document describes the way to map a custom operation.
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such decomposition gives the desired performance, then low-level operation implementation is not required. When deciding on the feasibility of such decomposition, refer to the latest OpenVINO operation set. You can use any valid combination of existing operations. How to map a custom operation is described in the next section of this document.
If such decomposition is not possible or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the [Custom Operation Guide](add_openvino_ops.md).
If such decomposition is not possible or appears too bulky with lots of consisting operations that are not performing well, then a new class for the custom operation should be implemented as described in the [Custom Operation Guide](add_openvino_ops.md).
Prefer implementing a custom operation class if you already have a generic C++ implementation of operation kernel. Otherwise try to decompose the operation first as described above and then after verifying correctness of inference and resulting performance, optionally invest to implementing bare metal C++ implementation.
You might prefer implementing a custom operation class if you already have a generic C++ implementation of operation kernel. Otherwise, try to decompose the operation first, as described above. Then, after verifying correctness of inference and resulting performance, you may move on to optional implementation of Bare Metal C++.
## Mapping from Framework Operation
Depending on model format used for import, mapping of custom operation is implemented differently, choose one of:
Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following:
1. If model is represented in ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with Model Optimizer `--extensions` option or when model is imported directly to OpenVINO run-time using read_model method. Python API is also available for run-time model importing.
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the `read_model` method. Python API is also available for runtime model import.
2. If model is represented in TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
2. If a model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
The existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both frontends, in contrast to direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for ONNX or PaddlePaddle new frontends and plan to use Model Optimizer `--extension` option for model conversion, then the extensions should be
If you are implementing extensions for new ONNX or PaddlePaddle frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:
1. Implemented in C++ only
1. Implemented in C++ only.
2. Compiled as a separate shared library (see details how to do that later in this guide).
2. Compiled as a separate shared library (see details on how to do this further in this guide).
You cannot write new frontend extensions using Python API if you plan to use them with Model Optimizer.
Model Optimizer does not support new frontend extensions written in Python API.
Remaining part of this guide uses Frontend Extension API applicable for new frontends.
Remaining part of this guide describes application of Frontend Extension API for new frontends.
## Registering Extensions
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compliable, to see how it works.
> **NOTE**: This documentation is derived from the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates the details of extension development. It is based on minimalistic `Identity` operation that is a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
To load the extensions to the `ov::Core` object, use the `ov::Core::add_extension` method, this method allows to load library with extensions or extensions from the code.
Use the `ov::Core::add_extension` method to load the extensions to the `ov::Core` object. This method allows loading library with extensions or extensions from the code.
### Load extensions to core
### Load Extensions to Core
Extensions can be loaded from code with `ov::Core::add_extension` method:
Extensions can be loaded from code with the `ov::Core::add_extension` method:
@sphinxtabset
@@ -92,7 +91,7 @@ Extensions can be loaded from code with `ov::Core::add_extension` method:
@endsphinxtabset
`Identity` is custom operation class defined in [Custom Operation Guide](add_openvino_ops.md). This is enough to enable reading IR which uses `Identity` extension operation emitted by Model Optimizer. To be able to load original model directly to the runtime, you need to add also a mapping extension:
The `Identity` is a custom operation class defined in [Custom Operation Guide](add_openvino_ops.md). This is sufficient to enable reading OpenVINO IR which uses the `Identity` extension operation emitted by Model Optimizer. In order to load original model directly to the runtime, add a mapping extension:
@sphinxdirective
@@ -110,32 +109,34 @@ Extensions can be loaded from code with `ov::Core::add_extension` method:
@endsphinxdirective
When Python API is used there is no way to implement a custom OpenVINO operation. Also, even if custom OpenVINO operation is implemented in C++ and loaded to the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. Use C++ shared library approach to implement both operations semantics and framework mapping in this case.
When Python API is used, there is no way to implement a custom OpenVINO operation. Even if custom OpenVINO operation is implemented in C++ and loaded into the runtime by a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use C++ shared library approach to implement both operations semantics and framework mapping.
You still can use Python for operation mapping and decomposition in case if operations from the standard OpenVINO operation set is used only.
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
### Create library with extensions
### Create a Library with Extensions
You need to create extension library in the following cases:
- Convert model with custom operations in Model Optimizer
- Load model with custom operations in Python application. It is applicable for both framework model and IR.
- Loading models with custom operations in tools that support loading extensions from a library, for example `benchmark_app`.
An extension library should be created in the following cases:
If you want to create an extension library, for example in order to load these extensions to the Model Optimizer, you need to do next steps:
Create an entry point for extension library. OpenVINO™ provides an `OPENVINO_CREATE_EXTENSIONS()` macro, which allows to define an entry point to a library with OpenVINO™ Extensions.
This macro should have a vector of all OpenVINO™ Extensions as an argument.
- Conversion of a model with custom operations in Model Optimizer.
- Loading a model with custom operations in a Python application. This applies to both framework model and OpenVINO IR.
- Loading models with custom operations in tools that support loading extensions from a library, for example the `benchmark_app`.
Based on that, the declaration of an extension class can look as follows:
To create an extension library, for example, to load the extensions into Model Optimizer, perform the following:
1. Create an entry point for extension library. OpenVINO provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows to define an entry point to a library with OpenVINO Extensions.
This macro should have a vector of all OpenVINO Extensions as an argument.
Based on that, the declaration of an extension class might look like the following:
@snippet template_extension/new/ov_extension.cpp ov_extension:entry_point
To configure the build of your extension library, use the following CMake script:
2. Configure the build of your extension library, using the following CMake script:
@snippet template_extension/new/CMakeLists.txt cmake:extension
This CMake script finds the OpenVINO using the `find_package` CMake command.
This CMake script finds OpenVINO, using the `find_package` CMake command.
To build the extension library, run the commands below:
3. Build the extension library, running the commands below:
```sh
$ cd docs/template_extension/new
@@ -145,7 +146,7 @@ $ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
$ cmake --build .
```
After the build you can use path to your extension library to load your extensions to OpenVINO Runtime:
4. After the build, you may use the path to your extension library to load your extensions to OpenVINO Runtime:
@sphinxtabset
@@ -168,4 +169,3 @@ After the build you can use path to your extension library to load your extensio
* [OpenVINO Transformations](./ov_transformations.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)


@@ -2,9 +2,10 @@
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTES:**
> * OpenCL\* custom layer support is available in the preview mode.
> **NOTE:**
> * OpenCL custom layer support is available in the preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, carry out the tasks described on this page:
1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
@@ -13,9 +14,9 @@ To customize your topology with an OpenCL layer, carry out the tasks described o
## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)
> **NOTE**: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta* and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta and is distributed under a license agreement between Intel® and Codeplay Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only. Start with compiling OpenCL C code, using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.
> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written assuming OpenCL version 1.2. It also supports half float extension and is optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
@@ -63,7 +64,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- Node `Source` must contain the following attributes:
- `filename` The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the OpenVINO IR. Work group configurations, namely `global` and `local` support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows to customize bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding xml.
A parameter description supports `Tensor` nodes of one of the tensor types (`input`, `output`, `input_buffer`, `output_buffer`, or `data`), `Scalar` nodes, or `Data` nodes, and has the following format:
@@ -77,7 +78,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. Current expression syntax supports only expression over dimensions of over selected input/output tensor or constants and might be expended in the future.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and might be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
@@ -107,7 +108,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the `data` type must contain the following attributes:
- `source` The name of the blob as it appears in the IR. A typical example is `weights` for convolution.
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not.
```xml
@@ -133,7 +134,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
- Each `Data` node must contain the following attributes:
- `arg-name` The name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines buffer allocated in fast local on-chip memory. It is limited to 100KB for all `__local` and
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. A manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and may be extended in the future.
@@ -158,14 +159,13 @@ Each custom layer is described with the `CustomLayer` node. It has the following
## Pass Configuration File to OpenVINO™ Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it using the `ov::Core::set_property()` method. Use the "CONFIG_KEY" key and the configuration file name as a value:
@snippet docs/snippets/vpu/custom_op.cpp part0
## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
@@ -175,41 +175,33 @@ programming model and OpenCL kernel language is assumed and not a subject of thi
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
The work group execution order is not defined in the OpenCL specification. This means it is your responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime distributes the work grid evenly among available compute resources and executes them in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for deep learning kernels. The following guidelines are recommended for work group partitioning:
1. Distribute work evenly across work groups.
2. Adjust work group granularity to maintain equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of topology. It is also useful if the kernel scalability reached its limits, which may happen while optimizing memory bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel to see if it improves work group partitioning or data access patterns.
Consider not just specific layer boost, but also full topology performance because data conversion layers will be automatically inserted as appropriate.
The offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bias)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism (SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting the `restrict` markers where possible.
- This can give a performance boost, especially for kernels with unrolling, like the `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized codes. In the `ocl_grn` kernel below, the unrolled version without the `restrict` is up to 20% slower than the most optimal one, which combines both unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` to your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better in scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unrolling factor of 4 is the optimum. For Intel Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
@@ -227,7 +219,7 @@ __kernel void ocl_grn(__global const half* restrict src_data, __global half* res
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
@@ -267,19 +259,14 @@ __kernel void ocl_grn_line(__global const half* restrict src_data, __global hal
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, use the `reqd_work_group_size` kernel attribute to ask the compiler to unroll the code up to the local size of the work group. If the kernel is actually executed with the different work group configuration, the result is undefined.
4. Prefer to use the `half` compute if it keeps reasonable accuracy. A 16-bit float is a native type for Intel Neural Compute Stick 2; most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` functions for the rest of the types.
5. Prefer to use the `convert_half` function over the `vstore_half` if conversion to 32-bit float is required. The `convert_half` function is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the `outImage[idx] = convert_half(inImage[idx]*scale+bias);` code is eight times slower than the code with `vstore_half`.
6. Be aware of early exits, as they can be extremely costly for the current version of the `clc` compiler due to conflicts with the auto-vectorizer. It is recommended to setup local size by `x` dimension equal to inputs or/and outputs width. If it is impossible to define the work grid that exactly matches inputs or/and outputs to eliminate checks, for example, `if (get_global_id(0) >= width) return`, use line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
@@ -302,8 +289,8 @@ The kernel example below demonstrates the impact of early exits on kernel perfor
}
```
This `reorg` kernel is auto-vectorizable, but an input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (`8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare performance of auto-vectorized and scalar version of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about 30% uplift.
Since the auto-vectorized version is faster, it is recommended to enable it for the YOLO v2 topology input size by setting the local size multiple of vector, for example, `32`, and adjust global sizes accordingly. As a result, the execution work grid exceeds actual input dimension, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
@@ -324,7 +311,7 @@ Since the auto-vectorized version is faster, it makes sense to enable it for the
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (scalar) due to branching overhead. If the `w = min(w, W-1);` min/max expression is replaced with `if (w >= W) return;`, runtime increases up to 2x compared to the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
@@ -347,8 +334,8 @@ __kernel void reorg(const __global half* restrict src, __global half* restrict o
}
```
This decreases the execution time up to 40% against the best performing vectorized kernel without early exits (initial version).
7. Reuse computations among work items by using line-based kernels or sharing values through the `__local` memory.
8. Improve data access locality. Most of custom kernels are memory bound while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by the `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
@@ -366,14 +353,11 @@ This decreases the execution time up to 40% against the best performing vectoriz
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops up to 45% against the line-wise version.
9. Copy data from the `__global` to the `__local` or `__private` memory if the data is accessed more than once. Access to the `__global` memory is orders of magnitude slower than access to the `__local`/`__private` memory due to the statically scheduled pipeline, which stalls completely on memory access without any prefetch. The same recommendation is applicable for scalar load/store from/to a `__global` pointer, since work-group copying could be done in a vector fashion.
10. Use a manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Since the OpenVINO 2020.1 release, VPU OpenCL features a manual-DMA kernel extension to copy a sub-tensor used by a work group into local memory and perform compute without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size is in the form (width of the input tensor, 1, 1) to define a large enough work group to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
@@ -398,7 +382,7 @@ from/to a `__blobal` pointer since work-group copying could be done in a vector
}
```
This kernel can be rewritten to introduce the special data binding intrinsics `__dma_preload` and `__dma_postwrite`. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. The `__dma_preload_kernelName` kernel for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. Either of those functions may be defined to copy data to and from `__global` and `__local` memory. The syntax requires an exact functional signature match. The example below illustrates how to prepare your kernel for manual-DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
@@ -557,9 +541,9 @@ __kernel void grn_NCHW(
}
```
> **NOTE**: Notice the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for a kernel on the enet-curbs setup, since it is completely limited by memory usage.
An alternative method to using DMA is the work item copy extension. Those functions are executed inside a kernel and require work groups equal to a single work item.
Here is the list of supported work item functions:
```cpp

View File

@@ -70,7 +70,7 @@ To eliminate operation, OpenVINO™ has special method that considers all limita
`ov::replace_output_update_name()`, which, in case of a successful replacement, automatically preserves the friendly name and runtime info.
## Transformations types <a name="transformations-types"></a>
OpenVINO™ Runtime has three main transformation types:
@@ -91,7 +91,7 @@ Transformation library has two internal macros to support conditional compilatio
When developing a transformation, you need to follow these transformation rules:
### 1. Friendly Names
Each `ov::Node` has a unique name and a friendly name. In transformations, we care only about the friendly name because it represents the name from the model.
To avoid losing the friendly name when replacing a node with another node or subgraph, set the original friendly name to the latest node in the replacing subgraph. See the example below.
@@ -100,7 +100,7 @@ To avoid losing friendly name when replacing node with other node or subgraph, s
In more advanced cases, when the replaced operation has several outputs and we add additional consumers to its outputs, we decide how to set the friendly name by arrangement.
### 2. Runtime Info
Runtime info is a map `std::map<std::string, ov::Any>` located inside the `ov::Node` class. It represents additional attributes of `ov::Node`.
These attributes can be set by users or by plugins. When executing a transformation that changes `ov::Model`, these attributes must be preserved, as they will not be automatically propagated.
@@ -111,9 +111,9 @@ Currently, there is no mechanism that automatically detects transformation types
When a transformation has multiple fusions or decompositions, `ov::copy_runtime_info` must be called multiple times for each case.
> **NOTE**: `copy_runtime_info` removes `rt_info` from destination nodes. If you want to keep it, you need to specify them in source nodes like this: `copy_runtime_info({a, b, c}, {a, b})`
### 3. Constant Folding
If your transformation inserts constant sub-graphs that need to be folded, do not forget to use `ov::pass::ConstantFolding()` after your transformation or call constant folding directly for the operation.
The example below shows how a constant subgraph can be constructed.
@@ -140,8 +140,8 @@ In transformation development process:
## Using pass manager <a name="using_pass_manager"></a>
`ov::pass::Manager` is a container class that can store the list of transformations and execute them. The main idea of this class is to have high-level representation for grouped list of transformations.
It can register and apply any [transformation pass](#transformations-types) on model.
In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how-to-debug-transformations) section).
The example below shows basic usage of `ov::pass::Manager`
@@ -151,7 +151,7 @@ Another example shows how multiple matcher passes can be united into single Grap
@snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager2
## How to debug transformations <a name="how-to-debug-transformations"></a>
If you are using `ngraph::pass::Manager` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:
@@ -160,7 +160,7 @@ OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformati
OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.
```
> **NOTE**: Make sure that you have dot installed on your machine; otherwise, it will silently save only dot file without svg file.
## See Also

View File

@@ -1,4 +1,4 @@
# Build Plugin Using CMake {#openvino_docs_ie_plugin_dg_plugin_build}
Inference Engine build infrastructure provides the Inference Engine Developer Package for plugin development.
@@ -57,7 +57,6 @@ A common plugin consists of the following components:
To build a plugin and its tests, run the following CMake scripts:
- Root `CMakeLists.txt`, which finds the Inference Engine Developer Package using the `find_package` CMake command and adds the `src` and `tests` subdirectories with plugin sources and their tests respectively:
```cmake
cmake_minimum_required(VERSION 3.13)
@@ -82,21 +81,15 @@ if(ENABLE_TESTS)
endif()
endif()
```
> **NOTE**: The default values of the `ENABLE_TESTS`, `ENABLE_FUNCTIONAL_TESTS` options are shared via the Inference Engine Developer Package and they are the same as for the main DLDT build tree. You can override them during plugin build using the command below:
```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DInferenceEngineDeveloperPackage_DIR=../dldt-release-build ../template-plugin
```
- `src/CMakeLists.txt` to build a plugin shared library from sources:
@snippet template_plugin/src/CMakeLists.txt cmake:plugin
> **NOTE**: `IE::inference_engine` target is imported from the Inference Engine Developer Package.
- `tests/functional/CMakeLists.txt` to build a set of functional plugin tests:
@snippet template_plugin/tests/functional/CMakeLists.txt cmake:functional_tests
> **NOTE**: The `IE::funcSharedTests` static library with common functional Inference Engine Plugin tests is imported via the Inference Engine Developer Package.

View File

@@ -95,6 +95,6 @@ Returns a current value for a configuration key with the name `name`. The method
@snippet src/template_executable_network.cpp executable_network:get_config
This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](@ref openvino_inference_engine_tools_compile_tool_README)).
The next step in plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class.

View File

@@ -47,13 +47,13 @@ Inference Engine plugin dynamic library consists of several main components:
on several task executors based on a device-specific pipeline structure.
> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin
> development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
> at `<dldt source dir>/docs/template_plugin`.
Detailed guides
-----------------------
* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide

View File

@@ -81,7 +81,7 @@ The function accepts a const shared pointer to `ov::Model` object and performs t
1. Deep copies a const object to a local object, which can later be modified.
2. Applies common and plugin-specific transformations on a copied graph to make the graph more friendly to hardware operations. For details how to write custom plugin-specific transformation, please, refer to [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide. See detailed topics about network representation:
* [Intermediate Representation and Operation Sets](@ref openvino_docs_MO_DG_IR_and_opsets)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks).
@snippet template_plugin/src/template_plugin.cpp plugin:transform_network

View File

@@ -14,15 +14,12 @@ Engine concepts: plugin creation, multiple executable networks support, multiple
2. **Single layer tests** (`single_layer_tests` sub-folder). This group of tests checks that a particular single layer can be inferred on a device. An example of test instantiation based on a test definition from the `IE::funcSharedTests` library:
- From the declaration of the convolution test class, we can see that it is a parametrized GoogleTest-based class with the `convLayerTestParamsSet` tuple of parameters:
@snippet single_layer/convolution.hpp test_convolution:definition
- Based on that, define a set of parameters for `Template` plugin functional test instantiation:
@snippet single_layer_tests/convolution.cpp test_convolution:declare_parameters
- Instantiate the test itself using the standard GoogleTest macro `INSTANTIATE_TEST_SUITE_P`:
@snippet single_layer_tests/convolution.cpp test_convolution:instantiate
3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to test small patterns or combinations of layers. For example, when a particular topology (such as TF ResNet-50) is being enabled in a plugin, there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from `ResNet-50` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.

@@ -32,7 +32,7 @@ Thus we can define:
- **Scale** as `(output_high - output_low) / (levels-1)`
- **Zero-point** as `-output_low / (output_high - output_low) * (levels-1)`
-**Note**: During the quantization process the values `input_low`, `input_high`, `output_low`, `output_high` are selected so that to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
+> **NOTE**: During the quantization process the values `input_low`, `input_high`, `output_low`, `output_high` are selected so that to map a floating-point zero exactly to an integer value (zero-point) and vice versa.
## Quantization specifics and restrictions
In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
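The scale and zero-point formulas above can be checked with a short sketch. The range and level values below are made up for illustration only:

```python
def quant_params(output_low, output_high, levels):
    """Scale and zero-point as defined by the formulas above."""
    scale = (output_high - output_low) / (levels - 1)
    zero_point = -output_low / (output_high - output_low) * (levels - 1)
    return scale, zero_point

# Hypothetical int8-style range: [-1.28, 1.27] with 256 levels.
scale, zero_point = quant_params(-1.28, 1.27, 256)
# scale is 0.01 and zero_point is 128, up to floating-point rounding.
# A floating-point zero maps exactly to the integer zero-point:
q = round(0.0 / scale + zero_point)   # quantize 0.0
x = (q - zero_point) * scale          # dequantize back to ~0.0
```

Note how the selected range makes the round trip for zero lossless, which is exactly the property the note above describes.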

@@ -1,4 +1,4 @@
-# AvgPoolPrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
+# AvgPoolPrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
ngraph::AvgPoolPrecisionPreservedAttribute class represents the `AvgPoolPrecisionPreserved` attribute.

@@ -1,4 +1,4 @@
-# IntervalsAlignment attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
+# IntervalsAlignment Attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
ngraph::IntervalsAlignmentAttribute class represents the `IntervalsAlignment` attribute.

@@ -1,4 +1,4 @@
-# PrecisionPreserved attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
+# PrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
ngraph::PrecisionPreservedAttribute class represents the `PrecisionPreserved` attribute.

@@ -1,4 +1,4 @@
-# Precisions attribute {#openvino_docs_OV_UG_lpt_Precisions}
+# Precisions Attribute {#openvino_docs_OV_UG_lpt_Precisions}
ngraph::PrecisionsAttribute class represents the `Precisions` attribute.

@@ -1,4 +1,4 @@
-# QuantizationAlignment attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
+# QuantizationAlignment Attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
ngraph::QuantizationAlignmentAttribute class represents the `QuantizationAlignment` attribute.

@@ -1,4 +1,4 @@
-# QuantizationGranularity attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
+# QuantizationGranularity Attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
ngraph::QuantizationAttribute class represents the `QuantizationGranularity` attribute.

@@ -54,4 +54,4 @@ Attributes usage by transformations:
| IntervalsAlignment | AlignQuantizationIntervals | FakeQuantizeDecompositionTransformation |
| QuantizationAlignment | AlignQuantizationParameters | FakeQuantizeDecompositionTransformation |
-> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
+> **NOTE**: the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.

@@ -22,7 +22,7 @@ The table of transformations and used attributes:
| AlignQuantizationIntervals | IntervalsAlignment | PrecisionPreserved |
| AlignQuantizationParameters | QuantizationAlignment | PrecisionPreserved, PerTensorQuantization |
-> **Note:** the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different
+> **NOTE**: the same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different
Common markup transformations can be decomposed into simpler utility markup transformations. The order of Markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)

@@ -46,4 +46,4 @@ Changes in the example model after main transformation:
- dequantization operations.
* Dequantization operations were moved via precision preserved (`concat1` and `concat2`) and quantized (`convolution2`) operations.
-> **Note:** the left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1`output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.
+> **NOTE**: the left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1` output interval is [0, 255]. But quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.

@@ -1,4 +1,4 @@
-# Converting Models with Model Optimizer {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
+# Model Optimizer Usage {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -8,19 +8,12 @@
:maxdepth: 1
:hidden:
openvino_docs_model_inputs_outputs
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
@endsphinxdirective
@@ -41,7 +34,7 @@ where IR is a pair of files describing the model:
* <code>.bin</code> - Contains the weights and biases binary data.
-The generated IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
+The OpenVINO IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
> that applies post-training quantization methods.
> **TIP**: You can also work with Model Optimizer in OpenVINO™ [Deep Learning Workbench (DL Workbench)](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html), which is a web-based tool with GUI for optimizing, fine-tuning, analyzing, visualizing, and comparing performance of deep learning models.

@@ -9,7 +9,7 @@ When evaluating the performance of a model with OpenVINO Runtime, it is required
- Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.
-> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common).
+> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common.md).
## Tip 2: Try to Get Credible Data

@@ -1,7 +1,7 @@
# Model Optimization Techniques {#openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques}
Optimization offers methods to accelerate inference with convolutional neural networks (CNNs) that do not require model retraining.
* * *
## Linear Operations Fusing
@@ -26,7 +26,7 @@ This optimization method consists of three stages:
The picture below shows the depicted part of Caffe Resnet269 topology where `BatchNorm` and `ScaleShift` layers will be fused to `Convolution` layers.
-![Caffe ResNet269 block before and after optimization generated with Netscope*](../img/optimizations/resnet_269.png)
+![Caffe ResNet269 block before and after optimization generated with Netscope*](../img/optimizations/resnet_269.svg)
* * *
@@ -38,7 +38,7 @@ ResNet optimization is a specific optimization that applies to Caffe ResNet topo
In the picture below, you can see the original and optimized parts of a Caffe ResNet50 model. The main idea of this optimization is to move a stride that is greater than 1 from Convolution layers with kernel size = 1 to upper Convolution layers. In addition, the Model Optimizer adds a Pooling layer to align the input shape for an Eltwise layer, if it was changed during the optimization.
-![ResNet50 blocks (original and optimized) from Netscope](../img/optimizations/resnet_optimization.png)
+![ResNet50 blocks (original and optimized) from Netscope](../img/optimizations/resnet_optimization.svg)
In this example, the stride from the `res3a_branch1` and `res3a_branch2a` Convolution layers moves to the `res2c_branch2b` Convolution layer. In addition, to align the input shape for `res2c` Eltwise, the optimization inserts the Pooling layer with kernel size = 1 and stride = 2.
@@ -48,7 +48,7 @@ In this example, the stride from the `res3a_branch1` and `res3a_branch2a` Convol
Grouped convolution fusing is a specific optimization that applies to TensorFlow topologies. The main idea of this optimization is to combine the convolution results for the `Split` outputs and then recombine them using a `Concat` operation in the same order as they came out of `Split`.
-![Split→Convolutions→Concat block from TensorBoard*](../img/optimizations/groups.png)
+![Split→Convolutions→Concat block from TensorBoard*](../img/optimizations/groups.svg)
* * *
@@ -62,4 +62,4 @@ On the picture below you can see two visualized Intermediate Representations (IR
The first one is the original IR produced by the Model Optimizer.
The second one is produced by the Model Optimizer with the key `--finegrain_fusing InceptionV4/InceptionV4/Conv2d_1a_3x3/Conv2D`, where you can see that `Convolution` was not fused with the `Mul1_3752` and `Mul1_4061/Fused_Mul_5096/FusedScaleShift_5987` operations.
-![TF InceptionV4 block without/with key --finegrain_fusing (from IR visualizer)](../img/optimizations/inception_v4.png)
+![TF InceptionV4 block without/with key --finegrain_fusing (from IR visualizer)](../img/optimizations/inception_v4.svg)
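As a sanity check of the linear-operations fusing idea described above: a `BatchNorm` whose statistics are fixed at inference time can be folded into the preceding `Convolution` by rescaling its weights and biases. Below is a minimal NumPy sketch of that identity; the shapes and the `fold_batchnorm` helper are illustrative, not actual Model Optimizer code:

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a per-channel BatchNorm into the preceding convolution.

    W: (C_out, C_in, kH, kW) weights, b: (C_out,) biases.
    Returns (W', b') such that conv(x, W', b') == BN(conv(x, W, b)).
    """
    scale = gamma / np.sqrt(var + eps)          # one factor per output channel
    W_folded = W * scale[:, None, None, None]   # rescale each output filter
    b_folded = (b - mean) * scale + beta        # shift the bias accordingly
    return W_folded, b_folded
```

Because both `BatchNorm` and the bias are affine per output channel, the folded convolution produces bit-for-bit the same math as convolution followed by the normalization, which is why the fusion requires no retraining.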

@@ -1,10 +1,10 @@
# Model Optimizer Frequently Asked Questions {#openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ}
-If your question is not covered by the topics below, use the [OpenVINO&trade; Support page](https://software.intel.com/en-us/openvino-toolkit/documentation/get-started), where you can participate on a free forum.
+If your question is not covered by the topics below, use the [OpenVINO Support page](https://software.intel.com/en-us/openvino-toolkit/documentation/get-started), where you can participate on a free forum.
-#### 1. What does the message "[ ERROR ]: Current caffe.proto does not contain field" mean? <a name="question-1"></a>
+#### Q1. What does the message "[ ERROR ]: Current caffe.proto does not contain field" mean? <a name="question-1"></a>
-Internally, Model Optimizer uses a protobuf library to parse and load Caffe models. This library requires a file grammar and a generated parser. For a Caffe fallback, Model Optimizer uses a Caffe-generated parser for a Caffe-specific `.proto` file (which is usually located in the `src/caffe/proto` directory). Make sure that you install exactly the same version of Caffe (with Python interface) as that was used to create the model.
+**A** : Internally, Model Optimizer uses a protobuf library to parse and load Caffe models. This library requires a file grammar and a generated parser. For a Caffe fallback, Model Optimizer uses a Caffe-generated parser for a Caffe-specific `.proto` file (which is usually located in the `src/caffe/proto` directory). Make sure that you install exactly the same version of Caffe (with Python interface) as that was used to create the model.
If you just want to experiment with Model Optimizer and test a Python extension for working with your custom
layers without building Caffe, add the layer description to the `caffe.proto` file and generate a parser for it.
@@ -35,38 +35,38 @@ where `PATH_TO_CUSTOM_CAFFE` is the path to the root directory of custom Caffe.
3. Now, Model Optimizer is able to load the model into memory and start working with your extensions if there are any.
-However, since your model has custom layers, you must register them as custom. To learn more about it, refer to [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md).
+However, since your model has custom layers, you must register them as custom. To learn more about it, refer to [Custom Layers in Model Optimizer](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer).
-#### 2. How do I create a bare caffemodel, if I have only prototxt? <a name="question-2"></a>
+#### Q2. How do I create a bare caffemodel, if I have only prototxt? <a name="question-2"></a>
-You need the Caffe Python interface. In this case, do the following:
+**A** : You need the Caffe Python interface. In this case, do the following:
```shell
python3
import caffe
net = caffe.Net('<PATH_TO_PROTOTXT>/my_net.prototxt', caffe.TEST)
net.save('<PATH_TO_PROTOTXT>/my_net.caffemodel')
```
-#### 3. What does the message "[ ERROR ]: Unable to create ports for node with id" mean? <a name="question-3"></a>
+#### Q3. What does the message "[ ERROR ]: Unable to create ports for node with id" mean? <a name="question-3"></a>
-Most likely, Model Optimizer does not know how to infer output shapes of some layers in the given topology.
+**A** : Most likely, Model Optimizer does not know how to infer output shapes of some layers in the given topology.
To lessen the scope, compile the list of layers that are custom for Model Optimizer: present in the topology,
-absent in the [list of supported layers](Supported_Frameworks_Layers.md) for the target framework. Then, refer to available options in the corresponding section in the [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md) page.
+absent in the [list of supported layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) for the target framework. Then, refer to available options in the corresponding section in the [Custom Layers in Model Optimizer](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) page.
-#### 4. What does the message "Input image of shape is larger than mean image from file" mean? <a name="question-4"></a>
+#### Q4. What does the message "Input image of shape is larger than mean image from file" mean? <a name="question-4"></a>
-Your model input shapes must be smaller than or equal to the shapes of the mean image file you provide. The idea behind the mean file is to subtract its values from the input image in an element-wise manner. When the mean file is smaller than the input image, there are not enough values to perform element-wise subtraction. Also, make sure you use the mean file that was used during the network training phase. Note that the mean file is dependent on dataset.
+**A** : Your model input shapes must be smaller than or equal to the shapes of the mean image file you provide. The idea behind the mean file is to subtract its values from the input image in an element-wise manner. When the mean file is smaller than the input image, there are not enough values to perform element-wise subtraction. Also, make sure you use the mean file that was used during the network training phase. Note that the mean file is dependent on dataset.
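The element-wise subtraction described in this answer can be sketched as follows. The shapes are made up for illustration (a 256×256 mean image and a smaller 227×227 input), and cropping the mean to the input shape is one possible convention, not necessarily what Model Optimizer itself does:

```python
import numpy as np

# Hypothetical CHW data: a mean image larger than the network input.
mean_image = np.zeros((3, 256, 256), dtype=np.float32)
input_image = np.ones((3, 227, 227), dtype=np.float32)

# Element-wise subtraction only works if the mean covers the input,
# so crop the mean image down to the input shape first.
c, h, w = input_image.shape
preprocessed = input_image - mean_image[:, :h, :w]
```

If the shapes were reversed (input larger than mean), the subtraction would have no values to use for the uncovered pixels, which is exactly the error this question describes.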
-#### 5. What does the message "Mean file is empty" mean? <a name="question-5"></a>
+#### Q5. What does the message "Mean file is empty" mean? <a name="question-5"></a>
-Most likely, the mean file specified with the `--mean_file` flag is empty while Model Optimizer is launched. Make sure that this is exactly the required mean file and try to regenerate it from the given dataset if possible.
+**A** : Most likely, the mean file specified with the `--mean_file` flag is empty while Model Optimizer is launched. Make sure that this is exactly the required mean file and try to regenerate it from the given dataset if possible.
-#### 6. What does the message "Probably mean file has incorrect format" mean? <a name="question-6"></a>
+#### Q6. What does the message "Probably mean file has incorrect format" mean? <a name="question-6"></a>
-The mean file that you provide for Model Optimizer must be in the `.binaryproto` format. You can try to check the content, using recommendations from the BVLC Caffe ([#290](https://github.com/BVLC/caffe/issues/290)).
+**A** : The mean file that you provide for Model Optimizer must be in the `.binaryproto` format. You can try to check the content, using recommendations from the BVLC Caffe ([#290](https://github.com/BVLC/caffe/issues/290)).
-#### 7. What does the message "Invalid proto file: there is neither 'layer' nor 'layers' top-level messages" mean? <a name="question-7"></a>
+#### Q7. What does the message "Invalid proto file: there is neither 'layer' nor 'layers' top-level messages" mean? <a name="question-7"></a>
-The structure of any Caffe topology is described in the `caffe.proto` file of any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
+**A** : The structure of any Caffe topology is described in the `caffe.proto` file of any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
```
message NetParameter {
// ... some other parameters
@@ -79,9 +79,9 @@ message NetParameter {
```
This means that any topology should contain layers as top-level structures in `prototxt`. For example, see the [LeNet topology](https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt).
-#### 8. What does the message "Old-style inputs (via 'input_dims') are not supported. Please specify inputs via 'input_shape'" mean? <a name="question-8"></a>
+#### Q8. What does the message "Old-style inputs (via 'input_dims') are not supported. Please specify inputs via 'input_shape'" mean? <a name="question-8"></a>
-The structure of any Caffe topology is described in the `caffe.proto` file for any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
+**A** : The structure of any Caffe topology is described in the `caffe.proto` file for any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
```sh
message NetParameter {
@@ -156,199 +156,200 @@ input_dim: 500
However, if your model contains more than one input, Model Optimizer is able to convert the model with inputs specified in one of the first three forms in the above list. The 4th form is not supported for multi-input topologies.
#### 9. What does the message "Mean file for topologies with multiple inputs is not supported" mean? <a name="question-9"></a>
#### Q9. What does the message "Mean file for topologies with multiple inputs is not supported" mean? <a name="question-9"></a>
Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to perform preprocessing of the inputs for a generated Intermediate Representation in OpenVINO Runtime to perform subtraction for every input of your multi-input model. See the [Overview of Preprocessing](../../OV_Runtime_UG/preprocessing_overview.md) for details.
**A** : Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to perform preprocessing of the inputs for a generated Intermediate Representation in OpenVINO Runtime to perform subtraction for every input of your multi-input model. See the [Overview of Preprocessing](@ref openvino_docs_OV_UG_Preprocessing_Overview) for details.
#### 10. What does the message "Cannot load or process mean file: value error" mean? <a name="question-10"></a>
#### Q10. What does the message "Cannot load or process mean file: value error" mean? <a name="question-10"></a>
There are multiple reasons why Model Optimizer does not accept the mean file. See FAQs [#4](#question-4), [#5](#question-5), and [#6](#question-6).
**A** : There are multiple reasons why Model Optimizer does not accept the mean file. See FAQs [#4](#question-4), [#5](#question-5), and [#6](#question-6).
#### 11. What does the message "Invalid prototxt file: value error" mean? <a name="question-11"></a>
#### Q11. What does the message "Invalid prototxt file: value error" mean? <a name="question-11"></a>
There are multiple reasons why Model Optimizer does not accept a Caffe topology. See FAQs [#7](#question-7) and [#20](#question-20).
**A** : There are multiple reasons why Model Optimizer does not accept a Caffe topology. See FAQs [#7](#question-7) and [#20](#question-20).
#### 12. What does the message "Error happened while constructing caffe.Net in the Caffe fallback function" mean? <a name="question-12"></a>
#### Q12. What does the message "Error happened while constructing caffe.Net in the Caffe fallback function" mean? <a name="question-12"></a>
Model Optimizer tried to infer a specified layer via the Caffe framework. However, it cannot construct a net using the Caffe Python interface. Make sure that your `caffemodel` and `prototxt` files are correct. To ensure that the problem is not in the `prototxt` file, see FAQ [#2](#question-2).
**A** : Model Optimizer tried to infer a specified layer via the Caffe framework. However, it cannot construct a net using the Caffe Python interface. Make sure that your `caffemodel` and `prototxt` files are correct. To ensure that the problem is not in the `prototxt` file, see FAQ [#2](#question-2).
#### 13. What does the message "Cannot infer shapes due to exception in Caffe" mean? <a name="question-13"></a>
#### Q13. What does the message "Cannot infer shapes due to exception in Caffe" mean? <a name="question-13"></a>
Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) page.
**A** : Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) page.
#### Q14. What does the message "Cannot infer shape for node {} because there is no Caffe available. Please register python infer function for op or use Caffe for shape inference" mean? <a name="question-14"></a>
**A** : Your model contains a custom layer and you have correctly registered it with the `CustomLayersMapping.xml` file. These steps are required to offload shape inference of the custom layer with the help of the system Caffe. However, Model Optimizer could not import a Caffe package. Make sure that you have built Caffe with a `pycaffe` target and added it to the `PYTHONPATH` environment variable. At the same time, it is highly recommended to avoid dependency on Caffe and write your own Model Optimizer extension for your custom layer. For more information, refer to FAQ [#44](#question-44).
#### Q15. What does the message "Framework name can not be deduced from the given options. Use --framework to choose one of Caffe, TensorFlow, MXNet" mean? <a name="question-15"></a>
**A** : You have run Model Optimizer without a flag `--framework caffe|tf|mxnet`. Model Optimizer tries to deduce the framework by the extension of input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for Apache MXNet). Your input model might have a different extension and you need to explicitly set the source framework. For example, use `--framework caffe`.
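The deduction rule above can be sketched as follows. This is an illustrative snippet, not Model Optimizer's actual code: the framework is guessed from the model file's extension, and an unrecognized extension means `--framework` must be passed explicitly.

```python
# Hypothetical sketch of framework deduction from a model file's extension.
EXTENSION_TO_FRAMEWORK = {
    ".pb": "tf",            # frozen TensorFlow graph
    ".caffemodel": "caffe",
    ".params": "mxnet",
}

def deduce_framework(model_path):
    """Return the deduced framework name, or None if it cannot be deduced."""
    for ext, framework in EXTENSION_TO_FRAMEWORK.items():
        if model_path.endswith(ext):
            return framework
    return None  # ambiguous: the user must pass --framework explicitly
```

A `None` result here corresponds to the error message: the extension matched no known framework, so the choice must be made on the command line.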
#### Q16. What does the message "Input shape is required to convert MXNet model. Please provide it with --input_shape" mean? <a name="question-16"></a>
**A** : Input shape was not provided. That is mandatory for converting an MXNet model to the OpenVINO Intermediate Representation, because MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using the `--input_shape`, refer to FAQ [#56](#question-56).
#### Q17. What does the message "Both --mean_file and mean_values are specified. Specify either mean file or mean values" mean? <a name="question-17"></a>
**A** : The `--mean_file` and `--mean_values` options are two ways of specifying preprocessing for the input. However, they cannot be used together, as it would mean double subtraction and lead to ambiguity. Choose one of these options and pass it with the corresponding CLI option.
#### Q18. What does the message "Negative value specified for --mean_file_offsets option. Please specify positive integer values in format '(x,y)'" mean? <a name="question-18"></a>
**A** : You might have specified negative values with `--mean_file_offsets`. Only positive integer values in format '(x,y)' must be used.
#### Q19. What does the message "Both --scale and --scale_values are defined. Specify either scale factor or scale values per input channels" mean? <a name="question-19"></a>
**A** : The `--scale` option sets a scaling factor for all channels, while `--scale_values` sets a scaling factor per each channel. Using both of them simultaneously produces ambiguity, so you must use only one of them. For more information, refer to the **Using Framework-Agnostic Conversion Parameters** section: for [Converting a Caffe Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe), [Converting a TensorFlow Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow), [Converting an MXNet Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet).
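A toy sketch of the difference (the constants here are hypothetical, not recommended preprocessing values): `--scale` divides every channel by one factor, while `--scale_values` divides each channel by its own factor. Applying both would divide the input twice, which is why the combination is rejected.

```python
import numpy as np

image = np.full((3, 2, 2), 255.0, dtype=np.float32)  # CHW toy input

# Equivalent of: --scale 255  (one factor for all channels)
global_scaled = image / 255.0

# Equivalent of: --scale_values [58.4,57.1,57.4]  (one factor per channel)
scale_values = np.array([58.4, 57.1, 57.4], dtype=np.float32).reshape(3, 1, 1)
per_channel_scaled = image / scale_values
```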
#### Q20. What does the message "Cannot find prototxt file: for Caffe please specify --input_proto - a protobuf file that stores topology and --input_model that stores pre-trained weights" mean? <a name="question-20"></a>
**A** : Model Optimizer cannot find a `.prototxt` file for a specified model. By default, it must be located in the same directory as the input model with the same name (except extension). If any of these conditions is not satisfied, use `--input_proto` to specify the path to the `.prototxt` file.
#### Q21. What does the message "Failed to create directory .. . Permission denied!" mean? <a name="question-21"></a>
**A** : Model Optimizer cannot create a directory specified via `--output_dir`. Make sure that you have enough permissions to create the specified directory.
#### Q22. What does the message "Discovered data node without inputs and value" mean? <a name="question-22"></a>
**A** : One of the layers in the specified topology might not have inputs or values. Make sure that the provided `caffemodel` and `protobuf` files are correct.
#### Q23. What does the message "Part of the nodes was not translated to IE. Stopped" mean? <a name="question-23"></a>
**A** : Some of the operations are not supported by OpenVINO Runtime and cannot be translated to OpenVINO Intermediate Representation. You can extend Model Optimizer by allowing generation of new types of operations and implement these operations in the dedicated OpenVINO plugins. For more information, refer to the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q24. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? <a name="question-24"></a>
**A** : Model Optimizer cannot build a graph based on a specified model. Most likely, it is incorrect.
#### Q25. What does the message "Node does not exist in the graph" mean? <a name="question-25"></a>
**A** : You might have specified an output node via the `--output` flag that does not exist in a provided model. Make sure that the specified output is correct and this node exists in the current model.
#### Q26. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean? <a name="question-26"></a>
**A** : Most likely, Model Optimizer tried to cut the model by a specified input. However, other inputs are needed.
#### Q27. What does the message "Placeholder node does not have an input port, but input port was provided" mean? <a name="question-27"></a>
**A** : You might have specified a placeholder node with an input node, while the placeholder node does not have it in the model.
#### Q28. What does the message "Port index is out of number of available input ports for node" mean? <a name="question-28"></a>
**A** : This error occurs when an incorrect input port is specified with the `--input` command line argument. When using `--input`, you may optionally specify an input port in the form: `X:node_name`, where `X` is an integer index of the input port starting from 0 and `node_name` is the name of a node in the model. This error occurs when the specified input port `X` is not in the range 0..(n-1), where n is the number of input ports for the node. Specify a correct port index, or do not use it if it is not needed.
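The `X:node_name` notation and the range check can be sketched as below. This is an illustrative parser, not the actual Model Optimizer implementation; the node and port values are hypothetical.

```python
# Hypothetical sketch of parsing the X:node_name form accepted by --input
# and the range check that triggers the "port index is out of range" error.
def parse_input_spec(spec, num_input_ports):
    """Split an --input value into (port, node_name) and validate the port."""
    if ":" in spec and spec.split(":", 1)[0].isdigit():
        port_str, node_name = spec.split(":", 1)
        port = int(port_str)
    else:
        port, node_name = 0, spec  # no port given: default to port 0
    if not 0 <= port < num_input_ports:
        raise ValueError(
            "Port index %d is out of range 0..%d for node %s"
            % (port, num_input_ports - 1, node_name)
        )
    return port, node_name
```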
#### Q29. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean? <a name="question-29"></a>
**A** : This error occurs when an incorrect combination of the `--input` and `--input_shape` command line options is used. Using both `--input` and `--input_shape` is valid only if `--input` points to the `Placeholder` node, a node with one input port or `--input` has the form `PORT:NODE`, where `PORT` is an integer port index of input for node `NODE`. Otherwise, the combination of `--input` and `--input_shape` is incorrect.
@anchor FAQ30
#### Q30. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean?
**A** : When using the `PORT:NODE` notation for the `--input` command line argument and `PORT` > 0, you should specify `--input_shape` for this input. This is a limitation of the current Model Optimizer implementation.
> **NOTE**: This message is no longer relevant, since the limitation on the input port index for model truncation has been resolved.
#### Q31. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean? <a name="question-31"></a>
**A** : You might have provided only one shape for the placeholder, while there are none or multiple inputs in the model. Make sure that you have provided the correct data for placeholder nodes.
#### Q32. What does the message "The amount of input nodes for port is not equal to 1" mean? <a name="question-32"></a>
**A** : This error occurs when the `SubgraphMatch.single_input_node` function is used for an input port that supplies more than one node in a sub-graph. The `single_input_node` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the `input_nodes` function or `node_by_pattern` function instead of `single_input_node`. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
#### Q33. What does the message "Output node for port has already been specified" mean? <a name="question-33"></a>
**A** : This error occurs when the `SubgraphMatch._add_output_node` function is called manually from a user's extension code. This is an internal function, and you should not call it directly.
#### Q34. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean? <a name="question-34"></a>
**A** : While using a configuration file to implement a TensorFlow front replacement extension, an incorrect match kind was used. Only `points` or `scope` match kinds are supported. For more details, refer to the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
#### Q35. What does the message "Cannot write an event file for the TensorBoard to directory" mean? <a name="question-35"></a>
**A** : Model Optimizer tried to write an event file in the specified directory but failed to do that. That could happen when the specified directory does not exist or you do not have permissions to write in it.
#### Q36. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean? <a name="question-36"></a>
**A** : Most likely, you tried to extend Model Optimizer with a new primitive, but you did not specify an infer function. For more information on extensions, see the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q37. What does the message "Stopped shape/value propagation at node" mean? <a name="question-37"></a>
**A** : Model Optimizer cannot infer shapes or values for the specified node. It can happen because of the following reasons: a bug exists in the custom shape infer function, the node inputs have incorrect values/shapes, or the input shapes are incorrect.
#### Q38. What does the message "The input with shape .. does not have the batch dimension" mean? <a name="question-38"></a>
**A** : Batch dimension is the first dimension in the shape and it should be equal to 1 or undefined. In your case, it is neither 1 nor undefined, which is why the `-b` shortcut produces undefined and unspecified behavior. To resolve the issue, specify full shapes for each input with the `--input_shape` option. Run Model Optimizer with the `--help` option to learn more about the notation for input shapes.
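The rule above can be sketched as follows. This is a hypothetical illustration, not Model Optimizer's actual code: the `-b` shortcut rewrites the first dimension of an input shape, which is only well-defined when that dimension is already a batch dimension, i.e. 1 or undefined (modelled here as `None`).

```python
# Hypothetical sketch of why -b only works when the leading dimension
# is a batch dimension (1 or undefined).
def override_batch(shape, batch):
    """Replace the leading (batch) dimension of a shape with `batch`."""
    if shape[0] not in (1, None):
        raise ValueError(
            "The input with shape %s does not have the batch dimension; "
            "specify full shapes with --input_shape instead" % (shape,)
        )
    return [batch] + list(shape[1:])
```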
#### Q39. What does the message "Not all output shapes were inferred or fully defined for node" mean? <a name="question-39"></a>
**A** : Most likely, the shape is not defined (partially or fully) for the specified node. You can use `--input_shape` with positive integers to override model input shapes.
#### Q40. What does the message "Shape for tensor is not defined. Can not proceed" mean? <a name="question-40"></a>
**A** : This error occurs when the `--input` command-line option is used to cut a model and `--input_shape` is not used to override shapes for a node, so a shape for the node cannot be inferred by Model Optimizer. You need to help Model Optimizer by specifying shapes with `--input_shape` for each node specified with the `--input` command-line option.
#### Q41. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean? <a name="question-41"></a>
**A** : To convert TensorFlow models with Model Optimizer, TensorFlow 1.2 or newer must be installed. For more information on prerequisites, see the [Configuring Model Optimizer](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide) guide.
#### Q42. What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean? <a name="question-42"></a>
**A** : The model file should contain a frozen TensorFlow graph in the text or binary format. Make sure that `--input_model_is_text` is provided for a model in the text format. By default, a model is interpreted as binary file.
#### Q43. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean? <a name="question-43"></a>
**A** : Most likely, there is a problem with the specified file for the model. The file exists, but it has an invalid format or is corrupted.
#### Q44. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean? <a name="question-44"></a>
**A** : This means that the layer `{layer_name}` is not supported in Model Optimizer. You will find a list of all unsupported layers in the corresponding section. You should implement the extensions for this layer. See [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) for more information.
#### Q45. What does the message "Custom replacement configuration file does not exist" mean? <a name="question-45"></a>
**A** : A path to the custom replacement configuration file was provided with the `--transformations_config` flag, but the file could not be found. Make sure the specified path is correct and the file exists.
#### Q46. What does the message "Extractors collection have case insensitive duplicates" mean? <a name="question-46"></a>
**A** : When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q47. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean? <a name="question-47"></a>
**A** : Model Optimizer cannot load an MXNet model in the specified file format. Make sure you use the `.json` or `.param` format.
#### Q48. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean? <a name="question-48"></a>
**A** : There are models where `Placeholder` has the UINT8 type and the first operation after it is 'Cast', which casts the input to FP32. Model Optimizer detected that the `Placeholder` has the UINT8 type, but the next operation is not 'Cast' to float. Model Optimizer does not support such a case. Make sure you change the model to have `Placeholder` for FP32.
#### Q49. What does the message "Data type is unsupported" mean? <a name="question-49"></a>
**A** : Model Optimizer cannot convert the model to the specified data type. Currently, FP16 and FP32 are supported. Make sure you specify the data type with the `--data_type` flag. The available values are: FP16, FP32, half, float.
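The accepted values can be summarized as a small alias table. This is a hypothetical sketch of the mapping implied above, not the actual Model Optimizer source, assuming `half` and `float` are treated as synonyms for FP16 and FP32.

```python
# Hypothetical alias table for --data_type values.
DATA_TYPE_ALIASES = {
    "FP16": "FP16",
    "half": "FP16",   # assumed synonym for FP16
    "FP32": "FP32",
    "float": "FP32",  # assumed synonym for FP32
}

def normalize_data_type(value):
    """Map an accepted --data_type value to FP16/FP32, or reject it."""
    try:
        return DATA_TYPE_ALIASES[value]
    except KeyError:
        raise ValueError("Data type is unsupported: %s" % value)
```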
#### Q50. What does the message "No node with name ..." mean? <a name="question-50"></a>
**A** : Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified placeholder, input or output node name.
#### 51. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean? <a name="question-51"></a>
#### Q51. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean? <a name="question-51"></a>
**A** : To convert MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the [Configuring Model Optimizer](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide) guide.
#### Q52. What does the message "The following error happened while loading MXNet model .." mean? <a name="question-52"></a>
**A** : Most likely, there is a problem with loading of the MXNet model. Make sure the specified path is correct, the model exists and is not corrupted, and you have sufficient permissions to work with it.
#### Q53. What does the message "The following error happened while processing input shapes: .." mean? <a name="question-53"></a>
**A** : Make sure inputs are defined and have correct shapes. You can use `--input_shape` with positive integers to override model input shapes.
#### Q54. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean? <a name="question-54"></a>
**A** : When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q55. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean? <a name="question-55"></a>
**A** : Specifying the batch and the input shapes at the same time is not supported. You must specify a desired batch as the first value of the input shape.
#### Q56. What does the message "Input shape .. cannot be parsed" mean? <a name="question-56"></a>
**A** : The specified input shape cannot be parsed. Define it with either parentheses or square brackets, for example, `--input_shape "(1,3,227,227)"` or `--input_shape "[1,3,227,227]"`.
Keep in mind that there is no space between and inside the brackets for input shapes.
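The no-spaces requirement can be illustrated with a minimal parser sketch. This is a hypothetical helper written for this FAQ, not Model Optimizer's actual code:

```python
import re

def parse_input_shape(value):
    """Parse a shape string such as "(1,3,227,227)" or "[1,3,227,227]".

    A simplified sketch of the accepted syntax: a bracket on each side,
    positive integers separated by commas, and no spaces anywhere.
    """
    match = re.fullmatch(r"[(\[](\d+(?:,\d+)*)[)\]]", value)
    if match is None:
        raise ValueError(f"Input shape {value!r} cannot be parsed")
    return [int(dim) for dim in match.group(1).split(",")]

print(parse_input_shape("(1,3,227,227)"))  # [1, 3, 227, 227]
```

A stray space, as in `(1, 3,227,227)`, makes the pattern fail, which mirrors the error described above.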
#### Q57. What does the message "Please provide input layer names for input layer shapes" mean? <a name="question-57"></a>
**A** : When specifying input shapes for several layers, you must provide the names of the inputs whose shapes will be overwritten. For usage examples, see the [Converting a Caffe Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe). Additional information for `--input_shape` is in FAQ [#56](#question-56).
#### Q58. What does the message "Values cannot be parsed" mean? <a name="question-58"></a>
**A** : Mean values for the given parameter cannot be parsed. It should be a string with a list of mean values. For example, in '(1,2,3)', 1 stands for the RED channel, 2 for the GREEN channel, 3 for the BLUE channel.
#### Q59. What does the message ".. channels are expected for given values" mean? <a name="question-59"></a>
**A** : The number of channels and the number of given values for mean values do not match. The shape should be defined as '(R,G,B)' or '[R,G,B]'. The shape should not contain undefined dimensions (? or -1). The order of values is as follows: (value for a RED channel, value for a GREEN channel, value for a BLUE channel).
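The format and the channel-count check from FAQs [#58](#question-58) and [#59](#question-59) can be sketched as follows. This is an illustrative helper, not Model Optimizer's actual parser, and the example mean values are hypothetical:

```python
def parse_mean_values(value, expected_channels=3):
    """Parse mean values given as '(R,G,B)' or '[R,G,B]' (a sketch)."""
    stripped = value.strip("()[]")
    try:
        values = [float(v) for v in stripped.split(",")]
    except ValueError:
        raise ValueError(f"Values {value!r} cannot be parsed")
    if len(values) != expected_channels:
        raise ValueError(
            f"{expected_channels} channels are expected for given values"
        )
    return values  # order: (RED, GREEN, BLUE)

print(parse_mean_values("(104.0,117.0,123.0)"))  # [104.0, 117.0, 123.0]
```

Passing only two values, such as `(1,2)`, triggers the channel-count error because a three-channel input is expected.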
#### Q60. What does the message "You should specify input for each mean value" mean? <a name="question-60"></a>
**A** : Most likely, you didn't specify inputs using `--mean_values`. Specify inputs with the `--input` flag. For usage examples, refer to the FAQ [#62](#question-62).
#### Q61. What does the message "You should specify input for each scale value" mean? <a name="question-61"></a>
**A** : Most likely, you didn't specify inputs using `--scale_values`. Specify inputs with the `--input` flag. For usage examples, refer to the FAQ [#63](#question-63).
#### Q62. What does the message "Number of inputs and mean values does not match" mean? <a name="question-62"></a>
**A** : The number of specified mean values and the number of inputs must be equal. For a usage example, refer to the [Converting a Caffe Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe) guide.
#### Q63. What does the message "Number of inputs and scale values does not match" mean? <a name="question-63"></a>
**A** : The number of specified scale values and the number of inputs must be equal. For a usage example, refer to the [Converting a Caffe Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe) guide.
#### Q64. What does the message "No class registered for match kind ... Supported match kinds are .. " mean? <a name="question-64"></a>
**A** : A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the `match_kind` attribute. The attribute may have only one of the values: `scope` or `points`. If a different value is provided, this error is displayed.
#### Q65. What does the message "No instance(s) is(are) defined for the custom replacement" mean? <a name="question-65"></a>
**A** : A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the `instances` attribute. This attribute is mandatory. This error will occur if the attribute is missing. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
#### Q66. What does the message "The instance must be a single dictionary for the custom replacement with id .." mean? <a name="question-66"></a>
**A** : A replacement defined in the configuration file for sub-graph replacement, using start/end nodes, has the `instances` attribute. For this type of replacement, the instance must be defined with a dictionary with two keys `start_points` and `end_points`. Values for these keys are lists with the start and end node names, respectively. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
#### Q67. What does the message "No instances are defined for replacement with id .. " mean? <a name="question-67"></a>
**A** : A replacement for the specified id is not defined in the configuration file. For more information, refer to the FAQ [#65](#question-65).
#### Q68. What does the message "Custom replacements configuration file .. does not exist" mean? <a name="question-68"></a>
**A** : The path to a custom replacement configuration file was provided with the `--transformations_config` flag, but it cannot be found. Make sure the specified path is correct and the file exists.
#### Q69. What does the message "Failed to parse custom replacements configuration file .." mean? <a name="question-69"></a>
**A** : The file for custom replacement configuration provided with the `--transformations_config` flag cannot be parsed. In particular, it should have a valid JSON structure. For more details, refer to the [JSON Schema Reference](https://spacetelescope.github.io/understanding-json-schema/reference/index.html) page.
#### Q70. What does the message "One of the custom replacements in the configuration file .. does not contain attribute 'id'" mean? <a name="question-70"></a>
**A** : Every custom replacement should declare a set of mandatory attributes and their values. For more details, refer to FAQ [#71](#question-71).
#### Q71. What does the message "File .. validation failed" mean? <a name="question-71"></a>
**A** : The file for custom replacement configuration provided with the `--transformations_config` flag cannot pass validation. Make sure you have specified `id`, `instances`, and `match_kind` for all the patterns.
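The checks described in FAQs [#64](#question-64) to [#71](#question-71) can be summarized with a small validation sketch. The replacement structure follows the attributes described above (`id`, `match_kind`, `instances`); the concrete replacement id and node names are hypothetical, and this is not Model Optimizer's actual validator:

```python
import json

MANDATORY_ATTRIBUTES = {"id", "match_kind", "instances"}
SUPPORTED_MATCH_KINDS = {"scope", "points"}

def validate_replacements(config_text):
    """Check that each replacement declares the mandatory attributes."""
    replacements = json.loads(config_text)  # FAQ 69: must be valid JSON
    for replacement in replacements:
        missing = MANDATORY_ATTRIBUTES - replacement.keys()
        if missing:  # FAQ 70/71: every pattern needs all mandatory keys
            raise ValueError(f"File validation failed: missing {sorted(missing)}")
        if replacement["match_kind"] not in SUPPORTED_MATCH_KINDS:
            raise ValueError(  # FAQ 64: only 'scope' and 'points' are allowed
                f"No class registered for match kind {replacement['match_kind']!r}"
            )
    return replacements

# A hypothetical "points" replacement: 'instances' is a dictionary with
# 'start_points' and 'end_points' lists (FAQ 66).
config = json.dumps([{
    "id": "MyCustomReplacement",
    "match_kind": "points",
    "instances": {
        "start_points": ["input_node"],
        "end_points": ["output_node"],
    },
}])
print(len(validate_replacements(config)))  # 1
```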
#### Q72. What does the message "Cannot update the file .. because it is broken" mean? <a name="question-72"></a>
**A** : The custom replacement configuration file provided with the `--tensorflow_custom_operations_config_update` cannot be parsed. Make sure that the file is correct and refer to FAQ [#68](#question-68), [#69](#question-69), [#70](#question-70), and [#71](#question-71).
#### Q73. What does the message "End node .. is not reachable from start nodes: .." mean? <a name="question-73"></a>
**A** : This error occurs when you try to make a sub-graph match. It is detected that between the start and end nodes that were specified as inputs/outputs for the subgraph to find, there are nodes marked as outputs but there is no path from them to the input nodes. Make sure the subgraph you want to match does actually contain all the specified output nodes.
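The reachability check behind this message can be illustrated with a short breadth-first search. This is an illustrative sketch, not Model Optimizer's implementation, and the graph below is hypothetical:

```python
from collections import deque

def unreachable_end_nodes(edges, start_nodes, end_nodes):
    """Return the end nodes that have no path from any start node."""
    visited = set(start_nodes)
    queue = deque(start_nodes)
    while queue:
        node = queue.popleft()
        for successor in edges.get(node, []):
            if successor not in visited:
                visited.add(successor)
                queue.append(successor)
    return [node for node in end_nodes if node not in visited]

# Hypothetical graph: "detached" is marked as an output but cannot be
# reached from the start node, which triggers the error above.
edges = {"input": ["conv"], "conv": ["relu"], "other": ["detached"]}
print(unreachable_end_nodes(edges, ["input"], ["relu", "detached"]))
# ['detached']
```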
#### Q74. What does the message "Sub-graph contains network input node .." mean? <a name="question-74"></a>
**A** : The start or end node for the sub-graph replacement using start/end nodes is specified incorrectly. Model Optimizer finds internal nodes of the sub-graph strictly "between" the start and end nodes, and then adds all input nodes to the sub-graph (and the inputs of their inputs, etc.) for these "internal" nodes. This error reports that Model Optimizer reached an input node during this phase. This means that the start/end points are specified incorrectly in the configuration file. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
#### Q75. What does the message "... elements of ... were clipped to infinity while converting a blob for node [...] to ..." mean? <a name="question-75"></a>
**A** : This message may appear when the `--data_type=FP16` command-line option is used. This option implies conversion of all the blobs in the node to FP16. If a value in a blob is out of the range of valid FP16 values, the value is converted to positive or negative infinity. It may lead to incorrect results of inference or may not be a problem, depending on the model. The number of such elements and the total number of elements in the blob is printed out together with the name of the node, where this blob is used.
#### Q76. What does the message "... elements of ... were clipped to zero while converting a blob for node [...] to ..." mean? <a name="question-76"></a>
**A** : This message may appear when the `--data_type=FP16` command-line option is used. This option implies conversion of all blobs in the model to FP16. If a value in the blob is so close to zero that it cannot be represented as a valid FP16 value, it is converted to a true zero FP16 value. Depending on the model, it may lead to incorrect results of inference or may not be a problem. The number of such elements and the total number of elements in the blob are printed out together with the name of the node where this blob is used.
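The two messages above describe the two sides of the same FP16 range problem. A simplified pure-Python sketch of the clipping behavior follows; the constants are the IEEE 754 half-precision limits, and the exact rounding near the boundaries is intentionally ignored:

```python
FP16_MAX = 65504.0             # largest finite half-precision value
FP16_MIN_SUBNORMAL = 2 ** -24  # smallest positive half-precision value

def clip_to_fp16_range(x):
    """Sketch of what happens to out-of-range values during FP32->FP16
    conversion: overflow becomes +/-infinity, underflow becomes zero."""
    if abs(x) > FP16_MAX:
        return float("inf") if x > 0 else float("-inf")
    if 0 < abs(x) < FP16_MIN_SUBNORMAL:
        return 0.0
    return x

print(clip_to_fp16_range(1e9))    # inf  (clipped to infinity, FAQ 75)
print(clip_to_fp16_range(1e-12))  # 0.0  (clipped to zero, FAQ 76)
```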
#### Q77. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean? <a name="question-77"></a>
**A** : This error occurs when the `SubgraphMatch.node_by_pattern` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make an unambiguous match to a single sub-graph node. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer) guide.
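The uniqueness requirement can be demonstrated with plain regular expressions. The node names below are hypothetical, and this stand-in is not the real `SubgraphMatch.node_by_pattern` method:

```python
import re

def node_by_pattern(node_names, pattern):
    """Sketch of a pattern lookup that must match exactly one node."""
    matched = [name for name in node_names if re.match(pattern, name)]
    if len(matched) != 1:
        raise ValueError(
            f"The amount of nodes matched pattern {pattern!r} "
            f"is not equal to 1 ({len(matched)} matched)"
        )
    return matched[0]

nodes = ["conv2d/kernel", "conv2d_1/kernel", "dense/bias"]
# Making the pattern more specific yields a unique match, while a loose
# prefix such as r"conv2d" would match two nodes and raise the error.
print(node_by_pattern(nodes, r"conv2d/kernel$"))  # conv2d/kernel
```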
#### Q78. What does the message "The topology contains no "input" layers" mean? <a name="question-78"></a>
**A** : Your Caffe topology `.prototxt` file is intended for training. Model Optimizer expects a deployment-ready `.prototxt` file. To fix the problem, prepare a deployment-ready `.prototxt` file. Preparation of a deploy-ready topology usually results in removing `data` layer(s), adding `input` layer(s), and removing loss layer(s).
#### Q79. What does the message "Warning: please expect that Model Optimizer conversion might be slow" mean? <a name="question-79"></a>
**A** : You are using an unsupported Python version. Use only versions 3.4 - 3.6 for the C++ `protobuf` implementation that is supplied with the OpenVINO toolkit. You can still boost the conversion speed by building the protobuf library from sources. For complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting a Model to Intermediate Representation](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide) guide.
#### Q80. What does the message "Arguments --nd_prefix_name, --pretrained_model_name and --input_symbol should be provided. Please provide all or do not use any." mean? <a name="question-80"></a>
**A** : This error occurs if you did not provide the `--nd_prefix_name`, `--pretrained_model_name`, and `--input_symbol` parameters.
Model Optimizer requires both `.params` and `.nd` model files to merge into the result file (`.params`).
Topology description (`.json` file) should be prepared (merged) in advance and provided with the `--input_symbol` parameter.
If you add additional layers and weights that are in `.nd` files to your model, Model Optimizer can build a model
from one `.params` file and two additional `.nd` files (`*_args.nd`, `*_auxs.nd`).
To do that, provide both CLI options or do not pass them if you want to convert an MXNet model without additional weights.
For more information, refer to the [Converting an MXNet Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet) guide.
#### Q81. What does the message "You should specify input for mean/scale values" mean? <a name="question-81"></a>
**A** : When the model has multiple inputs and you want to provide mean/scale values, you need to pass those values for each input. More specifically, the number of passed values should be the same as the number of inputs of the model.
For more information, refer to the [Converting a Model to Intermediate Representation](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
#### Q82. What does the message "Input with name ... not found!" mean? <a name="question-82"></a>
**A** : When you pass mean/scale values and specify the names of input layers of the model, you might have used a name that does not correspond to any input layer. Make sure that you list only names of the input layers of your model when passing values with the `--input` option.
For more information, refer to the [Converting a Model to Intermediate Representation](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
#### Q83. What does the message "Specified input json ... does not exist" mean? <a name="question-83"></a>
**A** : Most likely, the `.json` file does not exist or has a name that does not match the notation of Apache MXNet. Make sure the file exists and has a correct name.
For more information, refer to the [Converting an MXNet Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet) guide.
#### Q84. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean? <a name="question-84"></a>
**A** : Model Optimizer for Apache MXNet supports only the `.params` and `.nd` file formats. Most likely, you specified an unsupported file format in `--input_model`.
For more information, refer to [Converting an MXNet Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet).
#### Q85. What does the message "Operation ... not supported. Please register it as custom op" mean? <a name="question-85"></a>
**A** : Model Optimizer tried to load a model that contains some unsupported operations.
If you want to convert a model that contains unsupported operations, you need to prepare an extension for each such operation.
For more information, refer to the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q86. What does the message "Can not register Op ... Please, call function 'register_caffe_python_extractor' with parameter 'name'" mean? <a name="question-86"></a>
**A** : This error appears if the `Op` implementation class for a Python Caffe layer could not be used by Model Optimizer. Python layers should be handled differently compared to ordinary Caffe layers.
In particular, you need to call the `register_caffe_python_extractor` function and pass `name` as its second argument.
The name should be the layer name combined with the module name, separated by a dot.
Note that the first call to <code>register_caffe_python_extractor</code> registers the extension for the layer.
The second call prevents Model Optimizer from using this extension as if it is an extension for
a layer with type `Proposal`. Otherwise, this layer can be chosen as an implementation of extension that can lead to potential issues.
For more information, refer to the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### Q87. What does the message "Model Optimizer is unable to calculate output shape of Memory node .." mean? <a name="question-87"></a>
Model Optimizer supports only `Memory` layers, in which `input_memory` goes before `ScaleShift` or the `FullyConnected` layer.
**A** : Model Optimizer supports only `Memory` layers, in which `input_memory` goes before `ScaleShift` or the `FullyConnected` layer.
This error message means that in your model the layer after input memory is not of the `ScaleShift` or `FullyConnected` type.
This is a known limitation.
#### 88. What do the messages "File ... does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with <Nnet> tag" mean? <a name="question-88"></a>
#### Q88. What do the messages "File ... does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with <Nnet> tag" mean? <a name="question-88"></a>
These error messages mean that Model Optimizer does not support your Kaldi model, because the `checksum` of the model is not
**A** : These error messages mean that Model Optimizer does not support your Kaldi model, because the `checksum` of the model is not
16896 (the model should start with this number), or the model file does not contain the `<Nnet>` tag as a starting one.
Make sure that you provide a path to a true Kaldi model and try again.
#### 89. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean? <a name="question-89"></a>
#### Q89. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean? <a name="question-89"></a>
These messages mean that the counts file you passed does not contain exactly one line. The counts file should start with
**A** : These messages mean that the counts file you passed does not contain exactly one line. The counts file should start with
`[` and end with `]`, and integer values should be separated by spaces between those brackets.
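As an illustration, a counts file line of this form can be validated with a short script. The `parse_counts_line` helper below is hypothetical, not part of Model Optimizer:

```python
def parse_counts_line(line: str) -> list:
    """Parse a one-line Kaldi counts file of the form '[ 10 20 30 ]'.

    Raises ValueError if the line is not enclosed in brackets or if the
    values between the brackets are not integers, mirroring the two error
    messages quoted in this FAQ entry.
    """
    line = line.strip()
    if not (line.startswith("[") and line.endswith("]")):
        raise ValueError("Expect counts file to be one-line file.")
    tokens = line[1:-1].split()
    try:
        return [int(t) for t in tokens]
    except ValueError:
        raise ValueError("Expect counts file to contain list of integers")

# A valid counts line parses to a list of integers.
counts = parse_counts_line("[ 15 1024 7 ]")
```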
#### 90. What does the message "Model Optimizer is not able to read Kaldi model .." mean? <a name="question-90"></a>
#### Q90. What does the message "Model Optimizer is not able to read Kaldi model .." mean? <a name="question-90"></a>
There are multiple reasons why Model Optimizer does not accept a Kaldi topology, including:
**A** : There are multiple reasons why Model Optimizer does not accept a Kaldi topology, including:
* The file is not available or does not exist. Refer to FAQ [#88](#question-88).
#### 91. What does the message "Model Optimizer is not able to read counts file .." mean? <a name="question-91"></a>
#### Q91. What does the message "Model Optimizer is not able to read counts file .." mean? <a name="question-91"></a>
There are multiple reasons why Model Optimizer does not accept a counts file, including:
**A** : There are multiple reasons why Model Optimizer does not accept a counts file, including:
* The file is not available or does not exist. Refer to FAQ [#89](#question-89).
#### 92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean? <a name="question-92"></a>
#### Q92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean? <a name="question-92"></a>
This message means that if you have a model with custom layers and its JSON file has been generated with Apache MXNet version
**A** : This message means that if you have a model with custom layers and its JSON file has been generated with Apache MXNet version
lower than 1.0.0, Model Optimizer does not support such topologies. If you want to convert it, you have to rebuild
MXNet with the unsupported layers or generate a new JSON file with Apache MXNet version 1.0.0 or higher. You also need to implement an
OpenVINO extension to use the custom layers.
For more information, refer to the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
For more information, refer to the [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro) guide.
#### 93. What does the message "Graph contains a cycle. Can not proceed .." mean? <a name="question-93"></a>
#### Q93. What does the message "Graph contains a cycle. Can not proceed .." mean? <a name="question-93"></a>
Model Optimizer supports only straightforward models without cycles.
**A** : Model Optimizer supports only straightforward models without cycles.
There are multiple ways to avoid cycles:
For TensorFlow:
* [Convert models created with TensorFlow Object Detection API](convert_model/tf_specific/Convert_Object_Detection_API_Models.md)
* [Convert models created with TensorFlow Object Detection API](@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models)
For all frameworks:
1. [Replace cycle containing Sub-graph in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md)
2. See [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md)
1. [Replace cycle containing Sub-graph in Model Optimizer](@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer)
2. See [OpenVINO Extensibility Mechanism](@ref openvino_docs_Extensibility_UG_Intro)
or
* Edit the model in its original framework to exclude the cycle.
#### 94. What does the message "Can not transpose attribute '..' with value .. for node '..' .." mean? <a name="question-94"></a>
#### Q94. What does the message "Can not transpose attribute '..' with value .. for node '..' .." mean? <a name="question-94"></a>
This message means that the model is not supported. It may be caused by using shapes larger than 4-D.
**A** : This message means that the model is not supported. It may be caused by using shapes larger than 4-D.
There are two ways to avoid such a message:
* [Cut off parts of the model](convert_model/Cutting_Model.md).
* [Cut off parts of the model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model).
* Edit the network in its original framework to exclude such layers.
#### 95. What does the message "Expected token `</ParallelComponent>`, has `...`" mean? <a name="question-95"></a>
#### Q95. What does the message "Expected token `</ParallelComponent>`, has `...`" mean? <a name="question-95"></a>
This error message means that Model Optimizer does not support your Kaldi model, because the Net contains `ParallelComponent` that does not end with the `</ParallelComponent>` tag.
**A** : This error message means that Model Optimizer does not support your Kaldi model, because the Net contains `ParallelComponent` that does not end with the `</ParallelComponent>` tag.
Make sure that you provide a path to a true Kaldi model and try again.
#### 96. What does the message "Interp layer shape inference function may be wrong, please, try to update layer shape inference function in the file (extensions/ops/interp.op at the line ...)." mean? <a name="question-96"></a>
#### Q96. What does the message "Interp layer shape inference function may be wrong, please, try to update layer shape inference function in the file (extensions/ops/interp.op at the line ...)." mean? <a name="question-96"></a>
There are many flavors of the Caffe framework, and most layers in them are implemented identically.
**A** : There are many flavors of the Caffe framework, and most layers in them are implemented identically.
However, there are exceptions. For example, the output value of layer Interp is calculated differently in Deeplab-Caffe and classic Caffe. Therefore, if your model contains layer Interp and the conversion of your model has failed, modify the `interp_infer` function in the `extensions/ops/interp.op` file according to the comments in the file.
#### 97. What does the message "Mean/scale values should ..." mean? <a name="question-97"></a>
#### Q97. What does the message "Mean/scale values should ..." mean? <a name="question-97"></a>
It means that your mean/scale values have a wrong format. Specify mean/scale values in the form of `layer_name(val1,val2,val3)`.
You need to specify values for each input of the model. For more information, refer to the [Converting a Model to Intermediate Representation](convert_model/Converting_Model.md) guide.
**A** : It means that your mean/scale values have a wrong format. Specify mean/scale values in the form of `layer_name(val1,val2,val3)`.
You need to specify values for each input of the model. For more information, refer to the [Converting a Model to Intermediate Representation](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
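To illustrate the expected format, the sketch below parses such a specification into per-input value lists. `parse_per_input_values` is a hypothetical helper, deliberately simpler than Model Optimizer's own parser:

```python
import re

def parse_per_input_values(spec: str) -> dict:
    """Parse a mean/scale specification of the form
    'input1(v1,v2,v3),input2(v1,v2,v3)' into {input_name: [floats]}.
    A simplified illustration of the format, not MO's actual parser.
    """
    result = {}
    # Match 'name(values)' groups; names may contain word chars, '.', ':', '/'.
    for name, values in re.findall(r"([\w.:/]+)\(([^)]*)\)", spec):
        result[name] = [float(v) for v in values.split(",")]
    return result

# One entry per model input, e.g. the value passed to --mean_values:
means = parse_per_input_values("data(104.0,117.0,123.0)")
```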
#### 98. What does the message "Operation _contrib_box_nms is not supported ..." mean? <a name="question-98"></a>
#### Q98. What does the message "Operation _contrib_box_nms is not supported ..." mean? <a name="question-98"></a>
It means that you are trying to convert a topology that contains the `_contrib_box_nms` operation which is not supported directly. However, the sub-graph of operations including `_contrib_box_nms` could be replaced with the DetectionOutput layer if your topology is one of the `gluoncv` topologies. Specify the `--enable_ssd_gluoncv` command-line parameter for Model Optimizer to enable this transformation.
**A** : It means that you are trying to convert a topology that contains the `_contrib_box_nms` operation which is not supported directly. However, the sub-graph of operations including `_contrib_box_nms` could be replaced with the DetectionOutput layer if your topology is one of the `gluoncv` topologies. Specify the `--enable_ssd_gluoncv` command-line parameter for Model Optimizer to enable this transformation.
#### 99. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean? <a name="question-99"></a>
#### Q99. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean? <a name="question-99"></a>
If a `*.caffemodel` file exists and is correct, the error possibly occurred because of the use of the Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "`utf-8` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use Python 3.6/3.7 or build the `cpp` implementation of `protobuf` yourself for your version of Python. For the complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting Models with Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
**A** : If a `*.caffemodel` file exists and is correct, the error possibly occurred because of the use of the Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "`utf-8` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use Python 3.6/3.7 or build the `cpp` implementation of `protobuf` yourself for your version of Python. For the complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting Models with Model Optimizer](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide) guide.
#### 100. What does the message "SyntaxError: 'yield' inside list comprehension" during MxNet model conversion mean? <a name="question-100"></a>
#### Q100. What does the message "SyntaxError: 'yield' inside list comprehension" during MxNet model conversion mean? <a name="question-100"></a>
The issue "SyntaxError: `yield` inside list comprehension" might occur when converting MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on the Windows platform with a Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
**A** : The issue "SyntaxError: `yield` inside list comprehension" might occur when converting MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on the Windows platform with a Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
The following workarounds are suggested to resolve this issue:
1. Use Python 3.6/3.7 to convert MXNet models on Windows
2. Update Apache MXNet by using `pip install mxnet==1.7.0.post2`
Note that it might conflict with previously installed PyPI dependencies.
#### 101. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? <a name="question-101"></a>
#### Q101. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? <a name="question-101"></a>
For the models in ONNX format, there are two available paths of IR conversion.
**A** : For the models in ONNX format, there are two available paths of IR conversion.
The old one is handled by the old Python implementation, while the new one uses new C++ frontends.
Starting from the 2022.1 version, the default IR conversion path for ONNX models is processed using the new ONNX frontend.
Certain features, such as `--extensions` and `--transformations_config`, are not yet fully supported on the new frontends.
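When those features are required, the conversion can be forced onto the legacy path. The sketch below assembles such a command line; the helper name is hypothetical, and the `--use_legacy_frontend` flag is assumed to be available in your Model Optimizer version:

```python
def legacy_frontend_command(model_path, transformations_config=None):
    """Build an mo command line, falling back to the legacy (Python) IR
    conversion path when --transformations_config is needed, since it is
    not yet fully supported on the new ONNX frontend.
    The config path below is an example, not a shipped file."""
    cmd = ["mo", "--input_model", model_path]
    if transformations_config is not None:
        # Assumption: --use_legacy_frontend exists in this MO version.
        cmd += ["--transformations_config", transformations_config,
                "--use_legacy_frontend"]
    return cmd

cmd = legacy_frontend_command("model.onnx", "custom_transformations.json")
```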


@@ -86,11 +86,11 @@ The optional parameters without default values and not specified by the user in
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains layers of this kind, Model Optimizer classifies them as custom.
## Supported Caffe Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## Frequently Asked Questions (FAQ)
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
## Summary
@@ -100,5 +100,5 @@ In this document, you learned:
* Which Caffe models are supported.
* How to convert a trained Caffe model by using Model Optimizer with both framework-agnostic and Caffe-specific command-line options.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)
## Additional Resources
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific Caffe models.


@@ -74,7 +74,8 @@ Based on this mapping, link inputs and outputs in your application manually as f
must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out`.
## Supported Kaldi Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)
## Additional Resources
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific Kaldi models. Here are some examples:
* [Convert Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model)


@@ -39,11 +39,11 @@ MXNet-specific parameters:
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains layers of this kind, Model Optimizer classifies them as custom.
## Supported MXNet Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## Frequently Asked Questions (FAQ)
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
## Summary
@@ -53,5 +53,7 @@ In this document, you learned:
* Which MXNet models are supported.
* How to convert a trained MXNet model by using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)
## Additional Resources
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific MXNet models. Here are some examples:
* [Convert MXNet GluonCV Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models)
* [Convert MXNet Style Transfer Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet)


@@ -5,7 +5,7 @@
## Converting an ONNX Model <a name="Convert_From_ONNX"></a>
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](https://docs.openvino.ai/latest/openvino_docs_install_guides_install_dev_tools.html).
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
@@ -15,14 +15,14 @@ To convert an ONNX model, run Model Optimizer with the path to the input model `
mo --input_model <INPUT_MODEL>.onnx
```
There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the [Converting a Model to Intermediate Representation (IR)](Converting_Model.md) guide.
There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the [Converting a Model to Intermediate Representation (IR)](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
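As a minimal sketch, the same invocation can be assembled programmatically, for example when batch-converting models. The helper and paths below are illustrative, not part of Model Optimizer:

```python
def build_mo_command(model_path, output_dir):
    """Assemble the framework-agnostic Model Optimizer invocation for an
    ONNX model. Paths are example values."""
    return ["mo", "--input_model", model_path, "--output_dir", output_dir]

cmd = build_mo_command("model.onnx", "ir_output")
# To actually run the conversion (requires OpenVINO Development Tools):
# import subprocess; subprocess.run(cmd, check=True)
```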
## Supported ONNX Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## Additional Resources
See the [Model Conversion Tutorials](Convert_Model_Tutorials.md) page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:
* [Convert ONNX* Faster R-CNN Model](onnx_specific/Convert_Faster_RCNN.md)
* [Convert ONNX* GPT-2 Model](onnx_specific/Convert_GPT2.md)
* [Convert ONNX* Mask R-CNN Model](onnx_specific/Convert_Mask_RCNN.md)
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:
* [Convert ONNX Faster R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN)
* [Convert ONNX GPT-2 Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2)
* [Convert ONNX Mask R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN)


@@ -12,7 +12,7 @@ To convert a PaddlePaddle model, use the `mo` script and specify the path to the
```
## Supported PaddlePaddle Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## Officially Supported PaddlePaddle Models
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
@@ -39,7 +39,7 @@ The following PaddlePaddle models have been officially validated and confirmed t
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v3
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/)>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - BiSeNet v2
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
@@ -70,7 +70,7 @@ The following PaddlePaddle models have been officially validated and confirmed t
@endsphinxdirective
## Frequently Asked Questions (FAQ)
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)
## Additional Resources
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.


@@ -3,7 +3,7 @@
The PyTorch framework is supported through export to the ONNX format. In order to optimize and deploy a model that was trained with it:
1. [Export a PyTorch model to ONNX](#export-to-onnx).
2. [Convert the ONNX model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
2. [Convert the ONNX model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation](@ref openvino_docs_MO_DG_IR_and_opsets) of the model based on the trained network topology, weights, and biases values.
## Exporting a PyTorch Model to ONNX Format <a name="export-to-onnx"></a>
PyTorch models are defined in Python. To export them, use the `torch.onnx.export()` method. The code to
@@ -32,5 +32,8 @@ torch.onnx.export(model, (dummy_input, ), 'model.onnx')
It is recommended to export models to opset 11 or higher when exporting to the default opset 9 does not work. In that case, use the `opset_version`
option of `torch.onnx.export`. For more information about ONNX opset, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md) page.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)
## Additional Resources
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples:
* [Convert PyTorch BERT-NER Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner)
* [Convert PyTorch RCAN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN)
* [Convert PyTorch YOLACT Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT)


@@ -2,7 +2,7 @@
This page provides general instructions on how to convert a model from a TensorFlow format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](../../../install_guides/installing-model-dev-tools.md).
To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).
## Converting TensorFlow 1 Models <a name="Convert_From_TF2X"></a>
@@ -140,10 +140,10 @@ mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2
```
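When a model has several inputs, as in the BERT command above, the order of names in `--input` must match the order of shapes in `--input_shape`. A small sketch that keeps the two aligned; the helper is hypothetical:

```python
def make_input_options(input_shapes: dict) -> list:
    """Build matching --input and --input_shape arguments from one dict,
    so input names and shapes cannot get out of order (not part of MO)."""
    names = ",".join(input_shapes)
    shapes = ",".join("[" + ",".join(str(d) for d in s) + "]"
                      for s in input_shapes.values())
    return ["--input", names, "--input_shape", shapes]

# Reproduces the option pair used in the BERT example above.
opts = make_input_options({"mask": (2, 30), "word_ids": (2, 30), "type_ids": (2, 30)})
```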
## Supported TensorFlow and TensorFlow 2 Keras Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
## Frequently Asked Questions (FAQ)
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md). The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ). The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
## Summary
In this document, you learned:
@@ -154,8 +154,8 @@ In this document, you learned:
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options.
## Additional Resources
For step-by-step instructions on how to convert specific TensorFlow models, see the [Model Conversion Tutorials](Convert_Model_Tutorials.md) page. Here are some examples:
* [Convert TensorFlow EfficientDet Models](tf_specific/Convert_EfficientDet_Models.md)
* [Convert TensorFlow FaceNet Models](tf_specific/Convert_FaceNet_From_Tensorflow.md)
* [Convert TensorFlow Object Detection API Models](tf_specific/Convert_Object_Detection_API_Models.md)
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific TensorFlow models. Here are some examples:
* [Convert TensorFlow EfficientDet Models](@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models)
* [Convert TensorFlow FaceNet Models](@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow)
* [Convert TensorFlow Object Detection API Models](@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models)


@@ -21,7 +21,7 @@ Model Optimizer provides command line options `--input` and `--output` to specif
The `--input` option is required for cases unrelated to model cutting. For example, when the model contains several inputs and `--input_shape` or `--mean_values` options are used, the `--input` option specifies the order of input nodes for correct mapping between multiple items provided in `--input_shape` and `--mean_values` and the inputs in the model.
Model cutting is illustrated with the Inception V1 model, found in the `models/research/slim` repository. To proceed with this chapter, make sure you do the necessary steps to [prepare the model for Model Optimizer](Converting_Model.md).
Model cutting is illustrated with the Inception V1 model, found in the `models/research/slim` repository. To proceed with this chapter, make sure you do the necessary steps to [prepare the model for Model Optimizer](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model).
## Default Behavior without --input and --output
@@ -29,13 +29,13 @@ The input model is converted as a whole if neither `--input` nor `--output` comm
For Inception_V1, there is one `Placeholder`: input. If the model is viewed in TensorBoard, the input operation is easy to find:
![Placeholder in Inception V1](../../img/inception_v1_std_input.png)
![Placeholder in Inception V1](../../img/inception_v1_std_input.svg)
`Reshape` is the only output operation, which is enclosed in a nested name scope of `InceptionV1/Logits/Predictions`, under the full name of `InceptionV1/Logits/Predictions/Reshape_1`.
In TensorBoard, along with some of its predecessors, it looks as follows:
![TensorBoard with predecessors](../../img/inception_v1_std_output.png)
![TensorBoard with predecessors](../../img/inception_v1_std_output.svg)
Convert this model and put the results in a writable output directory:
```sh
@@ -90,7 +90,7 @@ The Intermediate Representations are identical for both conversions. The same is
Now, consider how to cut some parts of the model off. This chapter describes the first convolution block `InceptionV1/InceptionV1/Conv2d_1a_7x7` of the Inception V1 model to illustrate cutting:
![Inception V1 first convolution block](../../img/inception_v1_first_block.png)
![Inception V1 first convolution block](../../img/inception_v1_first_block.svg)
### Cutting at the End
@@ -377,7 +377,7 @@ Different behavior occurs when `--input_shape` is also used as an attempt to ove
```sh
mo --input_model inception_v1.pb --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
```
An error occurs (for more information, see the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md#FAQ30)):
An error occurs (for more information, see the [Model Optimizer FAQ](@ref FAQ30)):
```sh
[ ERROR ] Node InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution has more than 1 input and input shapes were provided.
Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer.


@@ -11,7 +11,7 @@ BERT-NER model repository. The model with configuration files is stored in the `
To convert the model to ONNX format, create and run the following script in the root
directory of the model repository. If you download the pretrained model, you need
to download [`bert.py`](https://github.com/kamalkraj/BERT-NER/blob/dev/bert.py) to run the script.
to download [bert.py](https://github.com/kamalkraj/BERT-NER/blob/dev/bert.py) to run the script.
The instructions were tested with the commit-SHA: `e5be564156f194f1becb0d82aeaf6e762d9eb9ed`.
```python


@@ -20,13 +20,13 @@ mkdir rnnt_for_openvino
cd rnnt_for_openvino
```
**Step 3**. Download pretrained weights for PyTorch implementation from [https://zenodo.org/record/3662521#.YG21DugzZaQ](https://zenodo.org/record/3662521#.YG21DugzZaQ).
**Step 3**. Download pretrained weights for PyTorch implementation from [here](https://zenodo.org/record/3662521#.YG21DugzZaQ).
For UNIX-like systems, you can use `wget`:
```bash
wget https://zenodo.org/record/3662521/files/DistributedDataParallel_1576581068.9962234-epoch-100.pt
```
The link was taken from `setup.sh` in the `speech_recognition/rnnt` subfolder. You will get exactly the same weights as
if you were following the guide from [https://github.com/mlcommons/inference/tree/master/speech_recognition/rnnt](https://github.com/mlcommons/inference/tree/master/speech_recognition/rnnt).
if you were following the [guide](https://github.com/mlcommons/inference/tree/master/speech_recognition/rnnt).
**Step 4**. Install required Python packages:
```bash
@@ -103,6 +103,4 @@ mo --input_model rnnt_encoder.onnx --input "input[157 1 240],feature_length->157
mo --input_model rnnt_prediction.onnx --input "symbol[1 1],hidden_in_1[2 1 320],hidden_in_2[2 1 320]"
mo --input_model rnnt_joint.onnx --input "0[1 1 1024],1[1 1 320]"
```
> **NOTE**: The hardcoded value for sequence length = 157 was taken from the MLCommons, but conversion to IR preserves
network [reshapeability](../../../../OV_Runtime_UG/ShapeInference.md). Therefore, input shapes can be changed manually to any value during either conversion or
inference.
> **NOTE**: The hardcoded value for sequence length = 157 was taken from the MLCommons, but conversion to IR preserves network [reshapeability](@ref openvino_docs_OV_UG_ShapeInference). Therefore, input shapes can be changed manually to any value during either conversion or inference.
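The `--input` specifications in the commands above pack up to three pieces of information per input: a node name, an optional shape in square brackets, and an optional frozen value after `->`. As a rough illustration of how such a spec decomposes (a toy parser written for this note, not Model Optimizer's actual implementation):

```python
import re

# Toy illustration only -- NOT Model Optimizer's real parser.
# Splits an --input spec such as "input[157 1 240],feature_length->157"
# into (name, shape, frozen_value) triples.
def parse_input_spec(spec):
    entries = []
    for item in spec.split(","):
        m = re.match(r"^(.+?)(?:\[([\d ]+)\])?(?:->(.+))?$", item)
        name, shape_str, value = m.group(1), m.group(2), m.group(3)
        shape = [int(d) for d in shape_str.split()] if shape_str else None
        entries.append((name, shape, value))
    return entries

print(parse_input_spec("input[157 1 240],feature_length->157"))
# -> [('input', [157, 1, 240], None), ('feature_length', None, '157')]
```

So `rnnt_encoder.onnx` gets one input with a fixed shape and one input frozen to the value `157`, which is exactly what the note above is about.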


@@ -0,0 +1,37 @@
# Supported Model Formats {#Supported_Model_Formats}
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
   openvino_docs_MO_DG_prepare_model_convert_model_tutorials
@endsphinxdirective
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.
**ONNX, PaddlePaddle** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX and PaddlePaddle, see how to [Integrate OpenVINO™ with Your Application](../../../OV_Runtime_UG/integrate_with_your_application.md).
**TensorFlow, PyTorch, MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.
Refer to the following articles for details on conversion for different formats and models:
* [How to convert ONNX](./Convert_Model_From_ONNX.md)
* [How to convert PaddlePaddle](./Convert_Model_From_Paddle.md)
* [How to convert TensorFlow](./Convert_Model_From_TensorFlow.md)
* [How to convert PyTorch](./Convert_Model_From_PyTorch.md)
* [How to convert MXNet](./Convert_Model_From_MxNet.md)
* [How to convert Caffe](./Convert_Model_From_Caffe.md)
* [How to convert Kaldi](./Convert_Model_From_Kaldi.md)
* [Conversion examples for specific models](./Convert_Model_Tutorials.md)


@@ -3,7 +3,7 @@
Pretrained models for BERT (Bidirectional Encoder Representations from Transformers) are
[publicly available](https://github.com/google-research/bert).
## <a name="supported_models"></a>Supported Models
## <a name="supported-models"></a>Supported Models
The following models from the pretrained [BERT model list](https://github.com/google-research/bert#pre-trained-models) are currently supported:
@@ -43,7 +43,7 @@ Pretrained models are not suitable for batch reshaping out-of-the-box because of
# Converting a Reshapable TensorFlow BERT Model to OpenVINO IR
Follow these steps to make a pretrained TensorFlow BERT model reshapable over batch dimension:
1. Download a pretrained BERT model you want to use from the <a href="#supported_models">Supported Models list</a>
1. Download a pretrained BERT model you want to use from the <a href="#supported-models">Supported Models list</a>
2. Clone google-research/bert git repository:
```sh
git clone https://github.com/google-research/bert.git


@@ -1,56 +1,52 @@
# Converting a TensorFlow CRNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow}
This tutorial explains how to convert a CRNN model to Intermediate Representation (IR).
This tutorial explains how to convert a CRNN model to OpenVINO™ Intermediate Representation (IR).
There are several public versions of TensorFlow CRNN model implementation available on GitHub. This tutorial explains how to convert the model from
the [https://github.com/MaybeShewill-CV/CRNN_Tensorflow](https://github.com/MaybeShewill-CV/CRNN_Tensorflow) repository to IR.
the [CRNN Tensorflow](https://github.com/MaybeShewill-CV/CRNN_Tensorflow) repository to IR, and is validated with Python 3.7, TensorFlow 1.15.0, and protobuf 3.19.0.
If you have another implementation of the CRNN model, it can be converted to OpenVINO IR in a similar way: get the inference graph and run Model Optimizer on it.
**To convert this model to the IR:**
**To convert the model to IR:**
**Step 1.** Clone this GitHub repository and checkout the commit:
1. Clone repository:
**Step 1.** Clone this GitHub repository and check out the commit:
1. Clone the repository:
```sh
git clone https://github.com/MaybeShewill-CV/CRNN_Tensorflow.git
git clone https://github.com/MaybeShewill-CV/CRNN_Tensorflow.git
```
2. Checkout necessary commit:
2. Go to the `CRNN_Tensorflow` directory of the cloned repository:
```sh
cd path/to/CRNN_Tensorflow
```
3. Check out the necessary commit:
```sh
git checkout 64f1f1867bffaacfeacc7a80eebf5834a5726122
```
**Step 2.** Train the model, using framework or use the pretrained checkpoint provided in this repository.
**Step 2.** Train the model using the framework or the pretrained checkpoint provided in this repository.
**Step 3.** Create an inference graph:
1. Go to the `CRNN_Tensorflow` directory of the cloned repository:
```sh
cd path/to/CRNN_Tensorflow
```
2. Add `CRNN_Tensorflow` folder to `PYTHONPATH`.
* For Linux OS:
1. Add the `CRNN_Tensorflow` folder to `PYTHONPATH`.
* For Linux:
```sh
export PYTHONPATH="${PYTHONPATH}:/path/to/CRNN_Tensorflow/"
```
* For Windows OS add `/path/to/CRNN_Tensorflow/` to the `PYTHONPATH` environment variable in settings.
3. Open the `tools/test_shadownet.py` script. After `saver.restore(sess=sess, save_path=weights_path)` line, add the following code:
* For Windows, add `/path/to/CRNN_Tensorflow/` to the `PYTHONPATH` environment variable in settings.
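A quick sanity check that the interpreter actually picks the folder up (the `/tmp/CRNN_Tensorflow` path below is only a stand-in for your clone location):

```sh
# Stand-in path for illustration; substitute the actual location of your clone.
export PYTHONPATH="${PYTHONPATH}:/tmp/CRNN_Tensorflow/"
# PYTHONPATH entries are prepended to sys.path, so the folder should be visible:
python3 -c "import sys; print(any(p.rstrip('/').endswith('CRNN_Tensorflow') for p in sys.path))"
```

If this prints `True`, the scripts in the repository can be imported from anywhere.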
2. Edit the `tools/demo_shadownet.py` script. After `saver.restore(sess=sess, save_path=weights_path)` line, add the following code:
```python
import tensorflow as tf
from tensorflow.python.framework import graph_io
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major'])
frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major'])
graph_io.write_graph(frozen, '.', 'frozen_graph.pb', as_text=False)
```
4. Run the demo with the following command:
3. Run the demo with the following command:
```sh
python tools/test_shadownet.py --image_path data/test_images/test_01.jpg --weights_path model/shadownet/shadownet_2017-10-17-11-47-46.ckpt-199999
python tools/demo_shadownet.py --image_path data/test_images/test_01.jpg --weights_path model/shadownet/shadownet_2017-10-17-11-47-46.ckpt-199999
```
If you want to use your checkpoint, replace the path in the `--weights_path` parameter with a path to your checkpoint.
5. In the `CRNN_Tensorflow` directory, you will find the inference CRNN graph `frozen_graph.pb`. You can use this graph with the OpenVINO&trade; toolkit
to convert the model into the IR and run inference.
4. In the `CRNN_Tensorflow` directory, you will find the inference CRNN graph `frozen_graph.pb`. You can use this graph with OpenVINO
to convert the model to IR and then run inference.
**Step 4.** Convert the model into the IR:
**Step 4.** Convert the model to IR:
```sh
mo --input_model path/to/your/CRNN_Tensorflow/frozen_graph.pb
```


@@ -41,7 +41,7 @@ As a result, the frozen model file `savedmodeldir/efficientdet-d4_frozen.pb` wil
> **NOTE**: For custom trained models, specify `--hparams` flag to `config.yaml` which was used during training.
> **NOTE**: If you see an error `AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.initializers' has no attribute 'variance_scaling'`, apply the fix from the [patch](https://github.com/google/automl/pull/846).
> **NOTE**: If you see an error *AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.initializers' has no attribute 'variance_scaling'*, apply the fix from the [patch](https://github.com/google/automl/pull/846).
### Converting an EfficientDet TensorFlow Model to the IR
@@ -65,9 +65,9 @@ for the Model Optimizer on how to convert the model and trigger transformations
train the model yourself and modified the `hparams_config` file or the parameters are different from the ones used for EfficientDet-D4.
The attribute names are self-explanatory or match the name in the `hparams_config` file.
> **NOTE**: The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion specifying the command-line parameter: `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the [Converting a Model to Intermediate Representation (IR)](../Converting_Model.md) guide.
> **NOTE**: The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the [Converting a Model to Intermediate Representation (IR)](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
OpenVINO&trade; toolkit provides samples that can be used to infer EfficientDet model.
The OpenVINO toolkit provides samples that can be used to run inference on the EfficientDet model.
For more information, refer to the [Open Model Zoo Demos](@ref omz_demos).
## <a name="efficientdet-ir-results-interpretation"></a>Interpreting Results of the TensorFlow Model and the IR


@@ -8,7 +8,7 @@ There are two inputs in this network: boolean `phase_train` which manages state
`batch_size` which is a part of batch joining pattern.
![FaceNet model view](../../../img/FaceNet.png)
![FaceNet model view](../../../img/FaceNet.svg)
## Converting a TensorFlow FaceNet Model to the IR


@@ -161,7 +161,7 @@ This tutorial assumes the use of the trained GNMT model from `wmt16_gnmt_4_layer
**Step 3**. Create an inference graph:
The OpenVINO&trade; assumes that a model is used for inference only. Hence, before converting the model into the IR, you need to transform the training graph into the inference graph.
OpenVINO assumes that a model is used for inference only. Hence, before converting the model into the IR, you need to transform the training graph into the inference graph.
For the GNMT model, the training graph and the inference graph have different decoders: the training graph uses a greedy search decoding algorithm, while the inference graph uses a beam search decoding algorithm.
1. Apply the `GNMT_inference.patch` patch to the repository. Refer to the <a href="#patch-file">Create a Patch File</a> instructions if you do not have it:
@@ -215,7 +215,7 @@ Output cutting:
* `LookupTableFindV2` operation is cut from the output and the `dynamic_seq2seq/decoder/decoder/GatherTree` node is treated as a new exit point.
For more information about model cutting, refer to the [Cutting Off Parts of a Model](../Cutting_Model.md) guide.
For more information about model cutting, refer to the [Cutting Off Parts of a Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model) guide.
## Using a GNMT Model <a name="run_GNMT"></a>
@@ -234,7 +234,7 @@ Outputs of the model:
* `dynamic_seq2seq/decoder/decoder/GatherTree` tensor with shape `[max_sequence_length * 2, batch, beam_size]`,
that contains `beam_size` best translations for every sentence from input (also decoded as indices of words in
vocabulary).
> **NOTE**: The shape of this tensor in TensorFlow can be different: instead of `max_sequence_length * 2`, it can be any value less than that, because OpenVINO&trade; does not support dynamic shapes of outputs, while TensorFlow can stop decoding iterations when `eos` symbol is generated.
> **NOTE**: The shape of this tensor in TensorFlow can be different: instead of `max_sequence_length * 2`, it can be any value less than that, because OpenVINO does not support dynamic shapes of outputs, while TensorFlow can stop decoding iterations when `eos` symbol is generated.
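In practice this means the IR output is always padded to the full `max_sequence_length * 2`, so post-processing has to drop everything after the end-of-sentence symbol. A toy sketch of that step (the vocabulary and indices below are invented for illustration):

```python
# Toy post-processing sketch -- the vocabulary and indices are invented.
vocab = ["<eos>", "hello", "world", "good", "morning"]

def decode(indices, eos_id=0):
    """Map word indices back to text, stopping at the end-of-sentence symbol."""
    words = []
    for idx in indices:
        if idx == eos_id:
            break  # everything after <eos> is padding in a fixed-length output
        words.append(vocab[idx])
    return " ".join(words)

print(decode([3, 4, 0, 1]))  # -> good morning
```

TensorFlow stops decoding at `eos` on its own; with the fixed-shape IR output, this truncation is the caller's job.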
#### Running GNMT IR <a name="run_GNMT"></a>
@@ -273,4 +273,4 @@ exec_net = ie.load_network(network=net, device_name="CPU")
result_ie = exec_net.infer(input_data)
```
For more information about Python API, refer to the [OpenVINO Runtime Python API](ie_python_api/api.html) guide.
For more information about Python API, refer to the [OpenVINO Runtime Python API](https://docs.openvino.ai/2022.2/api/api_reference.html) guide.


@@ -25,7 +25,7 @@ where `rating/BiasAdd` is an output node.
3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that
it has one input that is split into four `ResourceGather` layers. (Click image to zoom in.)
![NCF model beginning](../../../img/NCF_start.png)
![NCF model beginning](../../../img/NCF_start.svg)
However, as the Model Optimizer does not support such data feeding, you should skip it. Cut
the edges coming into `ResourceGather` port 1:


@@ -1,9 +1,10 @@
# Converting TensorFlow Object Detection API Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models}
> **NOTES**:
> * Starting with the 2022.1 release, Model Optimizer can convert the TensorFlow Object Detection API Faster and Mask RCNNs topologies differently. By default, Model Optimizer adds operation "Proposal" to the generated IR. This operation needs an additional input to the model with name "image_info" which should be fed with several values describing the preprocessing applied to the input image (refer to the [Proposal](../../../../ops/detection/Proposal_4.md) operation specification for more information). However, this input is redundant for the models trained and inferred with equal size images. Model Optimizer can generate IR for such models and insert operation [DetectionOutput](../../../../ops/detection/DetectionOutput_1.md) instead of `Proposal`. The `DetectionOutput` operation does not require additional model input "image_info". Moreover, for some models the produced inference results are closer to the original TensorFlow model. In order to trigger new behavior, the attribute "operation_to_add" in the corresponding JSON transformation configuration file should be set to value "DetectionOutput" instead of default one "Proposal".
> * Starting with the 2021.1 release, Model Optimizer converts the TensorFlow Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the OpenVINO Runtime using dedicated reshape API. Refer to the [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md) guide for more information on how to use this feature. It is possible to change the both spatial dimensions of the input image and batch size.
> * To generate IRs for TF 1 SSD topologies, Model Optimizer creates a number of `PriorBoxClustered` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using dedicated API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations preventing from changing topology input shape.
**NOTES**:
* Starting with the 2022.1 release, Model Optimizer can convert the TensorFlow Object Detection API Faster and Mask RCNN topologies differently. By default, Model Optimizer adds the "Proposal" operation to the generated IR. This operation needs an additional model input named "image_info", which should be fed with several values describing the preprocessing applied to the input image (refer to the [Proposal](@ref openvino_docs_ops_detection_Proposal_4) operation specification for more information). However, this input is redundant for models trained and inferred with equal-size images. For such models, Model Optimizer can generate an IR with the [DetectionOutput](@ref openvino_docs_ops_detection_DetectionOutput_1) operation instead of `Proposal`. The `DetectionOutput` operation does not require the additional "image_info" input; moreover, for some models the produced inference results are closer to those of the original TensorFlow model. To trigger the new behavior, set the "operation_to_add" attribute in the corresponding JSON transformation configuration file to "DetectionOutput" instead of the default "Proposal".
* Starting with the 2021.1 release, Model Optimizer converts the TensorFlow Object Detection API SSD, Faster and Mask RCNN topologies keeping shape-calculating sub-graphs by default, so topologies can be reshaped in the OpenVINO Runtime using the dedicated reshape API. Refer to the [Using Shape Inference](@ref openvino_docs_OV_UG_ShapeInference) guide for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
* To generate IRs for TF 1 SSD topologies, Model Optimizer creates a number of `PriorBoxClustered` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using the dedicated API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations that prevent changing the topology input shape.
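For the "operation_to_add" attribute mentioned in the first note, the setting lives under `custom_attributes` in the topology's JSON transformation configuration file. A sketch of what such an entry can look like (the replacement `id` and exact file layout vary by topology and release, so treat this as illustrative only):

```json
[
    {
        "id": "ObjectDetectionAPIProposalReplacement",
        "match_kind": "general",
        "custom_attributes": {
            "operation_to_add": "DetectionOutput"
        }
    }
]
```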
## Converting a Model
@@ -62,7 +63,7 @@ Open Model Zoo provides set of demo applications to show implementation of close
based on deep learning in various tasks, including Image Classification, Visual Object Detection, Text Recognition,
Speech Recognition, Natural Language Processing and others. Refer to the links below for more details.
* [OpenVINO Samples](../../../../OV_Runtime_UG/Samples_Overview.md)
* [OpenVINO Samples](@ref openvino_docs_OV_UG_Samples_Overview)
* [Open Model Zoo Demos](@ref omz_demos)
## Feeding Input Images to the Samples
@@ -134,5 +135,5 @@ It is also important to open the model in the [TensorBoard](https://www.tensorfl
* `--input_model <path_to_frozen.pb>` --- Path to the frozen model.
* `--tensorboard_logdir` --- Path to the directory where TensorBoard looks for the event files.
Implementation of the transformations for Object Detection API models is located in the file [https://github.com/openvinotoolkit/openvino/blob/releases/2022/1/tools/mo/openvino/tools/mo/front/tf/ObjectDetectionAPI.py](https://github.com/openvinotoolkit/openvino/blob/releases/2022/1/tools/mo/openvino/tools/mo/front/tf/ObjectDetectionAPI.py). Refer to the code in this file to understand the details of the conversion process.
Implementation of the transformations for Object Detection API models is located in the [file](https://github.com/openvinotoolkit/openvino/blob/releases/2022/1/tools/mo/openvino/tools/mo/front/tf/ObjectDetectionAPI.py). Refer to the code in this file to understand the details of the conversion process.

Some files were not shown because too many files have changed in this diff.