Compare commits

...

138 Commits

Author SHA1 Message Date
Artyom Anokhov
fa1c41994f Bump version to 2023.0.1. Updated conflicted version for APT/YUM (#18268) 2023-06-28 13:36:31 +02:00
bstankix
caae459f54 Change html_baseurl to canonical (#18253) 2023-06-27 10:25:37 +02:00
Karol Blaszczak
7ef5cbff30 [DOCS] benchmark update for OVMS 23.0 (#18010) (#18250) 2023-06-27 09:16:16 +02:00
Maciej Smyk
85956dfa4d [DOCS] Debugging Auto-Device Plugin rst shift + Notebooks installation id align for 23.0 (#18241)
* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* Update AutoPlugin_Debugging.md

* notebooks id fix

* fixes

* Update AutoPlugin_Debugging.md
2023-06-27 08:15:51 +02:00
Maciej Smyk
2d98cbed74 [DOCS] Table directive update + Get Started fix for 23.0 (#18217)
* Update notebooks-installation.md

* Update notebooks-installation.md

* Update performance_benchmarks.md

* Update openvino_ecosystem.md

* Update get_started_demos.md

* Update installing-model-dev-tools.md

* Update installing-model-dev-tools.md

* Update installing-openvino-brew.md

* Update installing-openvino-conda.md

* fix

* Update installing-openvino-apt.md

* Update installing-openvino-apt.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-windows.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-linux.md

* Update installing-openvino-from-archive-macos.md

* Update installing-openvino-from-archive-windows.md

* tabs

* fixes

* fixes2

* Update GPU_RemoteTensor_API.md

* fixes

* fixes

* Get started fix
2023-06-26 10:27:12 +02:00
Sebastian Golebiewski
5d47cedcc9 updating-tutorials (#18213) 2023-06-26 09:42:21 +02:00
Ilya Lavrenov
9ab5a8f5d9 Added cmake_policy call to allow IN_LIST in if() (#18226) 2023-06-24 22:51:54 +04:00
Maciej Smyk
ad84dc6205 [DOCS] Docker and GPU update for 23.0 (#17851)
* docker gpu update

* ref fix

* Update installing-openvino-docker-linux.md

* fixes

* Update DeviceDriverVersion.svg

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>

* fixes from review

* Update configurations-for-intel-gpu.md

* Update configurations-for-intel-gpu.md

* Update deployment_migration.md

---------

Co-authored-by: Miłosz Żeglarski <milosz.zeglarski@intel.com>
2023-06-22 15:47:59 +02:00
bstankix
bd3e4347dd [DOCS] gsearch 2023-06-22 13:08:00 +02:00
Tatiana Savina
0adf0e27ee [DOCS] Port docs fixes (#18155)
* change classification notebook (#18037)

* add python block (#18085)
2023-06-21 11:47:37 +02:00
Tatiana Savina
cb7cab1886 [DOCS] shift to rst - opsets F,G (#17253) (#18152)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-06-20 14:22:13 +02:00
Georgy Krivoruchko
fd48b0bbdc [TF FE] Workaround for Broadcast/Concat issue with empty tensors (#18140)
* Added transformation for Concat
* Added test
* CI fix
* Fixed behavior of the "empty tensor list" test
2023-06-20 13:13:55 +04:00
Mateusz Bencer
691630b68c [PORT TO 23.0][ONNX FE] Allow to mix new and legacy extensions (#18116)
* [ONNX FE] Allow to mix new and legacy extensions

* added unit test

* Update op_extension.cpp

Fixed compilation with Conan

Ported https://github.com/openvinotoolkit/openvino/pull/18126

* Update op_extension.cpp

Fixed code style

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-17 13:17:37 +00:00
Nikita Malinin
205feb9421 ENABLE_MMAP property pos (#17896) (#18106)
(cherry picked from commit 29f06692d6)
2023-06-17 12:48:47 +04:00
Georgy Krivoruchko
5ef750d5b3 Fixed Windows behavior if folder path on input (#18113) 2023-06-16 16:39:33 +00:00
Zlobin Vladimir
80fddfe1c2 Update open_model_zoo submodule (#18110)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3790
2023-06-16 18:19:07 +04:00
Maciej Smyk
7eb59527a0 [DOCS] CMake options description in build guide for 23.0 2023-06-16 10:37:53 +02:00
Sebastian Golebiewski
21fdda5609 [DOCS] Restyling tabs for 23.0
Porting: #18054

Introducing changes in css style for tabs from sphinx-design extension.
2023-06-14 11:43:10 +00:00
Tatiana Savina
9983f74dc7 fix link (#18050) 2023-06-14 08:54:11 +00:00
Maciej Smyk
ef0b8161c9 Update build_linux.md (#18046) 2023-06-14 10:45:16 +02:00
Sebastian Golebiewski
9e2dacbc53 [DOCS] Restyling elements on home page - for 23.0 2023-06-13 08:50:20 +02:00
Sebastian Golebiewski
d299be4202 [DOCS] Fixing formatting issues in articles - for 23.0 (#18004)
* fixing-formatting
2023-06-13 07:59:16 +02:00
Tatiana Savina
99fe2e9bdc add tabs (#18007) 2023-06-12 16:45:34 +02:00
Karol Blaszczak
6668ec39d7 [DOCS] Adding Datumaro document into OV Ecosystems (#17944) (#17968)
* add Datumaro document
* add datumaro into toctree

authored-by: Wonju Lee <wonju.lee@intel.com>
2023-06-09 13:22:43 +02:00
Maciej Smyk
1e5dced9d4 Update build_linux.md (#17967) 2023-06-09 15:03:45 +04:00
Zlobin Vladimir
7d73bae243 Update open_model_zoo submodule (#17902)
* Update open_model_zoo submodule

Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3779

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-06-09 13:52:38 +04:00
Sebastian Golebiewski
d8d4fb9c94 [DOCS] Fixing broken code blocks for 23.0 (#17960)
* code-blocks-fixes
2023-06-09 09:08:27 +02:00
Ilya Lavrenov
11cde296b7 Updated refs to dependency repositories (#17953) 2023-06-08 20:14:48 +04:00
Ilya Lavrenov
44f8dac403 Align tabs in install archives Linux (#17947) (#17950) 2023-06-08 14:49:28 +02:00
Tatiana Savina
41b4fd1057 add enter dir (#17897) 2023-06-08 13:08:41 +02:00
Sebastian Golebiewski
0f89782489 update-deployment-manager (#17904) 2023-06-08 10:31:50 +04:00
Tatiana Savina
d894716fad [DOCS] Add sudo to uninstall (#17929)
* add sudo to uninstall

* Update uninstalling-openvino.md
2023-06-07 18:18:12 +02:00
Tatiana Savina
f6fd84d2e1 fix archive link (#17918) 2023-06-07 09:19:05 +00:00
Tatiana Savina
648b2ad308 [DOCS] Model optimization paragraph fix (#17907)
* fix mo guide paragraph

* fix format

* fix paragraph

* remove extra line
2023-06-07 10:45:01 +02:00
Tatiana Savina
ea5c1b04e5 [DOCS] Fix list and links to POT (#17887)
* change link to POT

* change header label

* fix typo
2023-06-06 10:59:05 +02:00
Karol Blaszczak
f3d88cbf99 DOCS post-release adjustments (#17876) 2023-06-05 15:43:45 +02:00
Tatiana Savina
e824e482b1 fix apt and yum links (#17877) 2023-06-05 13:11:21 +03:00
Sebastian Golebiewski
e4d0021e2c update-diagram (#17872) 2023-06-05 08:17:26 +02:00
Artyom Anokhov
e74cb4084d [docs] Conda update (#17861)
* Adding installing OV via Conda-Forge for MacOS

* Adding section Compiling with OpenVINO™ Runtime from Conda-Forge

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/install_guides/installing-openvino-conda.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* installing-openvino-conda: Fixed title

* installing-openvino-macos-header: Fixed order for links

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-06-02 21:54:11 +04:00
Tatiana Savina
e843e357cd change notebooks links (#17857) 2023-06-02 13:14:26 +02:00
Tatiana Savina
ecc502733d [DOCS] Change downloads directory link (#17846)
* installation link

* fix path
2023-06-01 19:04:02 +04:00
bstankix
d1de793552 Port default platform type selection from nightly (#17845) 2023-06-01 15:46:22 +02:00
Karol Blaszczak
ebaf6a2fcb DOCS homepage update (#17843) 2023-06-01 15:16:20 +02:00
Anton Voronov
88b006bce9 [DOC] cpu documentation fixes (#17816)
* [DOC] cpu documentation fixes

* fixed typos
2023-06-01 10:17:48 +02:00
Ilya Lavrenov
4aae068125 Update archive names for 2023.0 release (#17831) 2023-06-01 12:07:06 +04:00
Ilya Lavrenov
41c37c8af9 Updated badges for 2023.0 (#17832) 2023-06-01 12:03:20 +04:00
Sebastian Golebiewski
f40f0fa58b [DOCS] convert_model() as a default conversion path - for 23.0 (#17751)
Porting: https://github.com/openvinotoolkit/openvino/pull/17454

Updating MO documentation to make convert_model() a default conversion path.
2023-05-31 19:22:54 +02:00
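As a quick illustration of the in-memory flow that change documents — a minimal sketch, assuming the 2023.0 `openvino.tools.mo` API; the file names and device are placeholders:

```python
from openvino.runtime import Core, serialize
from openvino.tools.mo import convert_model

# Convert in memory instead of invoking the `mo` CLI.
ov_model = convert_model("model.onnx")            # returns an openvino.runtime.Model

serialize(ov_model, "model.xml")                  # optionally persist the IR
compiled = Core().compile_model(ov_model, "CPU")  # or compile it right away
```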
Tatiana Savina
20dc436b6f DOCS Fix build links (#17821)
* change doc vers

* fix links
2023-05-31 17:45:57 +02:00
Sebastian Golebiewski
b2b7a57a4c update-tutorials (#17812) 2023-05-31 15:48:08 +02:00
Tatiana Savina
4481bfa17e [DOCS] Review release docs (#17793)
* review docs

* fix link to notebook

* fix build

* fix links

* remove bracket
2023-05-31 15:46:53 +02:00
Sebastian Golebiewski
366a5467d1 [DOCS] benchmark 23.0 update - port from master (#17806)
Porting: #17789

new benchmarking data
2023-05-31 12:58:49 +02:00
Karol Blaszczak
4be1dddb21 DOCS operation support articles update (#17449) (#17809)
port: #17449

conformance table added
ARM merged with CPU
precision support and layout tables removed from the overview device article (info available in device articles)
2023-05-31 10:56:25 +00:00
Ilya Lavrenov
3fd9b8c3b7 Updated install docs for 2023.0 (#17764) 2023-05-31 13:37:30 +04:00
Maciej Smyk
66528622a8 [DOCS] Link adjustment for dev docs + fix to build.md CPU link for 23.0 (#17747)
Port from #17744

JIRA Ticket: 110042

Update of hardcoded links to switch references from latest, nightly and 2022.3 (and earlier) to 2023.0.

JIRA Ticket: 111393

Fix for the Mac (Intel CPU) link name (it should be Intel CPU instead of Intel GPU).
2023-05-31 11:34:22 +02:00
Tatiana Savina
4fb2cebf28 [DOCS] Compile tool docs port (#17753)
* [DOCS] Compile tool docs change (#17460)

* add compile tool description

* change refs

* remove page to build docs

* doc reference fix

* review comments

* fix comment

* snippet comment

* Update docs/snippets/compile_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* change snippet name

* create ov object

* code block fix

* cpp code block

* include change

* code test

* change snippet

* Update docs/snippets/export_compiled_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fixed compile_tool install (#17666)

---------

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-31 13:27:01 +04:00
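The export/import flow that replaces the compile tool, sketched in Python on the assumption that the bindings expose `export_model()`/`import_model()` the same way the C++ snippets in this commit do; the paths are placeholders:

```python
import openvino.runtime as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")

# Save the compiled blob so later runs can skip compilation.
with open("model.blob", "wb") as f:
    f.write(compiled.export_model())

# Restore it on the same device without recompiling.
with open("model.blob", "rb") as f:
    restored = core.import_model(f.read(), "CPU")
```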
Wanglei Shen
c18a24c05b [DOC] Add multi threading for 2023.0 release in CPU plugin document (#17788) 2023-05-31 12:54:42 +04:00
Anton Voronov
95f0005793 [DOC][CPU] Documentation update (#17786) 2023-05-31 10:37:14 +04:00
bstankix
9ac239de75 Port ability to build notebooks from local files from nightly (#17798) 2023-05-30 16:27:11 +02:00
Tatiana Savina
ad5c0808a6 add pad1 (#17760) 2023-05-30 10:39:05 +02:00
Tatiana Savina
66c6e125cf [DOCS] Port workflow docs (#17761)
* [DOCS]  Deploy and run documentation sections (#17708)

* first draft

* change name

* restructure

* workflow headers change

* change note

* remove deployment guide

* change deployment description

* fix conflicts

* clean up conflict fixes
2023-05-30 10:38:47 +02:00
Maciej Smyk
53bfc41a74 [DOCS] Configuring devices article update for 2023.0 (#17757)
* Update configure_devices.md
2023-05-29 09:02:25 +02:00
Karol Blaszczak
9b72c33039 [DOCS] install-guide fix port to 23.0 (#17672) 2023-05-25 18:39:03 +02:00
Zlobin Vladimir
c0e9e1b1a1 Update open_model_zoo submodule (#17733)
Catch up https://github.com/openvinotoolkit/open_model_zoo/pull/3770

Ticket: 110042
2023-05-25 15:56:45 +00:00
Daria Mityagina
720e283ff1 Update comments and help text (#17710) 2023-05-24 22:12:27 +02:00
Tatyana Raguzova
0e87a28791 [build_samples] Using make instead of cmake (#17560) 2023-05-24 22:43:42 +04:00
Ilya Lavrenov
6d17bbb7e9 Conan port (#17625) 2023-05-24 22:07:50 +04:00
Maxim Vafin
cebbfe65ac [DOCS] Add examples of using named outputs in extensions (#17622)
* [DOCS]Add examples of using named outputs in extensions

* Fix opset

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Extensibility_UG/frontend_extensions.md

* Add reference to external docs

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-05-24 14:15:01 +02:00
Aleksandr Voron
c4c6567182 [DOCS][CPU] Update ARM CPU plugin documentation (#17700) 2023-05-24 15:42:32 +04:00
Karol Blaszczak
1a9ce16dd6 [DOCS] framework deprecation notice (#17484) (#17537)
Port: #17484
A new PR will be created with more changes, as suggested by jane-intel and slyalin. The "deprecated" label for articles and  additional content on converting models to ONNX will be covered then.
2023-05-24 11:53:56 +02:00
Maciej Smyk
4e8d5f3798 [DOCS] link fix (#17658) 2023-05-23 07:31:19 +02:00
Przemyslaw Wysocki
7351859ec2 limit linter (#17624) 2023-05-22 23:58:06 +04:00
Ilya Lavrenov
405c5ea03a Install libtbb12 on U22 (#17653) 2023-05-22 17:52:59 +04:00
Sebastian Golebiewski
183253e834 [DOCS] Update Interactive Tutorials - for 23.0 (#17600)
port: https://github.com/openvinotoolkit/openvino/pull/17598/
2023-05-22 14:46:14 +02:00
Maciej Smyk
cfea37b139 [DOCS] RST fixes for 23.0 (#17606)
* fixes
2023-05-22 10:33:32 +02:00
Tatiana Savina
34f00bd173 DOCS Update optimization docs with NNCF PTQ changes and deprecation of POT (#17398) (#17633)
* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update home.rst

* Update ptq_introduction.md

* Update Introduction.md

* Update Introduction.md

* Update Introduction.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update quantization_w_accuracy_control.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update ptq_introduction.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update quantization_w_accuracy_control.md

* Update Introduction.md

* Update FrequentlyAskedQuestions.md

* Update model_optimization_guide.md

* Update Introduction.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update model_optimization_guide.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* added code snippet (#1)

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update model_optimization_guide.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update quantization_w_accuracy_control.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update basic_quantization_flow.md

* Update ptq_introduction.md

* Update ptq_introduction.md

* Delete ptq_introduction.md

* Update FrequentlyAskedQuestions.md

* Update Introduction.md

* Update quantization_w_accuracy_control.md

* Update introduction.md

* Update basic_quantization_flow.md code blocks

* Update quantization_w_accuracy_control.md code snippets

* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update model_optimization_guide.md

* Optimization docs proofreading  (#2)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* Update basic_quantization_flow.md

* Update quantization_w_accuracy_control.md

* Update images (#3)

* images updated

* delete reminder

* review

* text review

* change images to original ones

* Update filter_pruning.md code blocks

* update images

* resolve conflicts

* resolve conflicts

* change images to original ones

* resolve conflicts

* update images

* fix conflicts

* Update model_optimization_guide.md

* Update docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_onnx.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_aa_openvino.py



* Update docs/optimization_guide/nncf/ptq/code/ptq_openvino.py



* table format fix

* Update headers

* Update qat.md code blocks

---------

Co-authored-by: Maksim Proshin <maksim.proshin@intel.com>
Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>
2023-05-19 15:37:41 +00:00
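The basic NNCF post-training quantization flow those pages describe, as a minimal sketch — `validation_loader` and its sample layout are placeholders you supply:

```python
import nncf
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")

def transform_fn(data_item):
    # Map one dataset sample to the model input; adapt to your layout.
    return data_item["images"]

calibration_dataset = nncf.Dataset(validation_loader, transform_fn)  # validation_loader is yours
quantized_model = nncf.quantize(model, calibration_dataset)
ov.serialize(quantized_model, "model_int8.xml")
```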
Tatiana Savina
17326abb72 [MO][TF FE] Document freezing as essential step for pruning SM format (#17595) (#17632)
* [MO][TF FE] Document freezing as essential step for pruning SM format



* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md



---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-19 15:32:57 +00:00
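A sketch of the freezing step the document calls essential before pruning a SavedModel; the path and signature name are placeholders:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

saved = tf.saved_model.load("saved_model_dir")
func = saved.signatures["serving_default"]
frozen_func = convert_variables_to_constants_v2(func)  # variables become constants

# The frozen graph can now be pruned safely (e.g. with MO --input/--output cuts).
tf.io.write_graph(frozen_func.graph, ".", "frozen.pb", as_text=False)
```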
Ilya Lavrenov
8601042bea Added python 3.11 for deployment tool (#17627) 2023-05-19 18:08:49 +04:00
Artyom Anokhov
39958e0dc1 Updated APT/YUM instructions with actual version. Added instructions for Ubuntu22. Updated subfolders naming for APT. (#17561) 2023-05-19 12:46:40 +02:00
Maciej Smyk
6fc9840e32 [DOCS] Link adjustment for 23.0 (#17604) 2023-05-18 15:10:13 +02:00
Ekaterina Aidova
b4452d5630 update OMZ submodule to fix bug (#17570) 2023-05-17 05:51:59 -07:00
Evgenya Stepyreva
4c69552656 Normalize_L2 relax constant input restriction (#17567)
* Normalize_L2 relax constant input restriction

* Fix warning treated as error during windows build
2023-05-17 12:37:02 +00:00
Maciej Smyk
6d8b3405ca [DOCS] Precision Control article for 23.0 (#17573)
Port from: https://github.com/openvinotoolkit/openvino/pull/17413

Added separate article on Precision Control (ov::hint::execution_mode and ov::inference_precision properties)
2023-05-17 11:21:05 +00:00
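In Python the two properties look roughly like this; the string keys are an assumption, taken to map to ov::hint::execution_mode and ov::inference_precision:

```python
import openvino.runtime as ov

core = ov.Core()
# Trade speed for accuracy and pin inference math to f32 (string keys assumed
# to match ov::hint::execution_mode / ov::inference_precision).
core.set_property("CPU", {"EXECUTION_MODE_HINT": "ACCURACY"})
core.set_property("CPU", {"INFERENCE_PRECISION_HINT": "f32"})
compiled = core.compile_model("model.xml", "CPU")
```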
Evgenya Stepyreva
4c2096ad9c Strided Slice fix constant creation (#17557)
* Strided Slice fix constant creation

* Apply suggestions from code review

* Final touches
2023-05-16 13:53:57 +00:00
Aleksandr Voron
0c67b90f47 [CPU][ARM] Dynamic shapes support in ARM transformations (#17517) 2023-05-16 13:10:34 +04:00
Jan Iwaszkiewicz
83f51e0d00 [PyOV][Backport] Remove numpy strides from Tensor creation (#17535)
* [PyOV] Remove numpy strides from Tensor creation

* [PyOV] Add test for stride calculation

* [PyOV] Fix flake issue
2023-05-16 09:04:56 +04:00
Dmitry Kurtaev
8bb2a2a789 [CMake] Add CMAKE_MAKE_PROGRAM arg (#17340)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-05-15 17:22:27 +04:00
Pawel Raasz
c9cfd6755c [Core] StridedSlice improvements of bound evaluation and constant folding (#17536)
* StridedSlice improvements:
-Bound evaluation for begin, end partial values when ignore mask set.
- Custom constant fold implementation.

* Improve const folding when all begin or end values
are ignored
2023-05-15 12:24:36 +00:00
Karol Blaszczak
c0060aefa7 Prepare "memory_optimization_guide.md" (#17022) (#17498)
---------

Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
2023-05-15 10:48:32 +02:00
Gorokhov Dmitriy
8a97d3c0e1 [CPU] Restore OneDNN InnerProduct primitive os_block computation behavior (#17462) 2023-05-12 15:50:10 +04:00
Wanglei Shen
c5fd3300a2 HOT FIX: disable set_cpu_used in 2023.0 release (#17456)
* disable set_cpu_used in 2023.0 release

* fix code style issue
2023-05-12 14:16:42 +08:00
Mateusz Mikolajczyk
a7f6f5292e Add missing check for special zero (#17479) 2023-05-12 09:30:55 +04:00
Maxim Vafin
804df84f7d Add transformation to convert adaptive pool to reduce (#17478)
* Add transformation to convert adaptive pool to reduce

* Update src/common/transformations/src/transformations/common_optimizations/moc_transformations.cpp

* Add tests and apply feedback

* Simplify if branches
2023-05-11 15:51:26 +00:00
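The equivalence that transformation relies on, checked in PyTorch for the 1x1-output case:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 16, 7, 7)
pooled = F.adaptive_avg_pool2d(x, output_size=1)  # shape (2, 16, 1, 1)
reduced = x.mean(dim=(-2, -1), keepdim=True)      # a plain ReduceMean

assert torch.allclose(pooled, reduced)
```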
Evgenya Stepyreva
1e49a594f7 [Shape inference] Pooling: Dimension div fix (#17197) (#17471)
* Dimension div fix

* codestyle fixes

* Convolution labels propagation test instances corrected

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-05-11 14:36:17 +04:00
Tatiana Savina
d5ac1c2e5c [DOCS] Port update docs for TF FE (#17464)
* [TF FE] Update docs for TF FE (#17453)

* Update tensorflow_frontend.md

* Update docs/resources/tensorflow_frontend.md

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-05-11 01:03:47 +04:00
Karol Blaszczak
afb2ae6b7a [DOCS] Update GPU.md (#17400) 2023-05-10 17:30:22 +02:00
Maxim Vafin
c5623b71cf Remove possibility to export to ONNX (#17423)
* Remove possibility to export to ONNX

* Apply suggestions from code review

* Fix tests and docs

* Workaround function inputs

* Fix code style
2023-05-10 16:35:54 +04:00
Maxim Vafin
152b11e77f Remove section about --use_legacy_frontend for PyTorch models (#17441) 2023-05-09 20:30:27 +04:00
Mateusz Tabaka
5adf3b5ca8 [TF frontend] use InterpolateMode::LINEAR_ONNX if input rank is 4 (#17406)
This change mimics LinearToLinearONNXReplacer transformation in
legacy frontend, where linear interpolate mode is replaced with
linear_onnx due to performance reasons.

Ticket: CVS-108343
2023-05-09 14:52:35 +02:00
Fang Xu
a2ccbdf86e Update oneTBB2021.2.2 for 2023.0 (#17367)
* update oneTBB2021.2.2 for windows

* update SHA256

* update SHA256

oneTBB https://github.com/oneapi-src/oneTBB/releases/tag/v2021.2.2 (a25ebdf)

* add print for hwloc which is not found

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-05-09 05:49:33 -07:00
Roman Kazantsev
1440b9950f [TF FE] Handle incorrect models (empty, fake) by TF FE (#17408) (#17432)
* [TF FE] Handle incorrect models (empty, fake) by TF FE



* Apply suggestions from code review

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-09 16:30:51 +04:00
Mateusz Tabaka
d88d4d22e8 Update docs for frontend extensions (#17428) 2023-05-09 13:27:40 +02:00
Tatiana Savina
ea79006a0a DOCS Port shift to rst - Model representation doc (#17385)
* DOCS shift to rst - Model representation doc (#17320)

* model representation to rst

* fix indentation

* fix cide tabs

* fix indentation

* change doc

* fix snippets

* fix snippet

* port changes

* dev docs port
2023-05-09 10:33:12 +02:00
Pavel Esir
a4ff3318ea renumber FAQ (#17376) 2023-05-09 11:35:55 +04:00
Anastasiia Pnevskaia
44e7a003e7 Removed checks of unsatisfied dependencies in MO (#16991) (#17419)
* Fixed dependencies check, made unsatisfied dependencies show only in case of error.

* Small fix.

* Test correction.

* Small test correction.

* Temporarily added debug print.

* Debug output.

* Debug output.

* Debug output.

* Test fix.

* Removed debug output.

* Small fix.

* Moved tests to check_info_messages_test.py

* Remove dependency checks from MO.

* Small corrections.
2023-05-09 11:32:14 +04:00
Przemyslaw Wysocki
fa4112593d [Backport] OMZ submodule bump for Python 3.11 (#17325)
* backport

* Update sha
2023-05-08 18:49:28 +02:00
Surya Siddharth Pemmaraju
45e378f189 Added Torchscript backend (#17328)
* Added Torchscript backend

* Added some torchscript backend tests to ci

* Removed tests from CI as torch.compile doesn't support 3.11 currently

* Fixed linter issues

* Addressed PR comments and linter issues
2023-05-08 03:44:10 -07:00
Maciej Smyk
9320cbaa8c [DOCS] Recreation of BDTI PRs - 23.0 (#17383)
Porting: https://github.com/openvinotoolkit/openvino/pull/16913

Recreation of BDTI PRs for master.

Recreated PRs:

Docs: Update Dynamic Shapes documentation #15216
Docs: Edits to Performance Hints and Cumulative Throughput documentation #14793
Docs: Update Devices pages to state improved INT8 performance with 11th & 12th gen devices #12067
2023-05-08 10:36:56 +00:00
Maciej Smyk
718b194ad6 [DOCS] Legacy MO Extensibility update for 23.0
porting: https://github.com/openvinotoolkit/openvino/pull/15931

Divided MO Extensibility article into separate smaller articles,
Applied the suggestion from [DOCS] Better statement about MO extensions as internal API [Recreating #14062] #15679
Recreated images in svg format
Fixing directives
2023-05-08 12:25:37 +02:00
Maxim Vafin
8241540609 [PT FE] Improve exception when decoder cannot trace or script the model (#17338) (#17347)
* [PT FE] Improve exception when decoder cannot trace or script the model

* Add exception in convert_model

* Add test
2023-05-08 09:22:47 +04:00
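For reference, passing `example_input` lets the frontend trace the module, which sidesteps most of the decoder errors this change reports more clearly; the model choice is illustrative:

```python
import torch
import torchvision
from openvino.tools.mo import convert_model

model = torchvision.models.resnet18(weights=None).eval()
# Tracing with a concrete example input instead of relying on scripting.
ov_model = convert_model(model, example_input=torch.zeros(1, 3, 224, 224))
```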
Maxim Vafin
10d87b7332 [PT FE] Support default strides for avg and max pooling (#17337) (#17348)
* Support default strides for avg and max pooling

* Fix code style

* Remove changes from other ticket
2023-05-08 09:21:53 +04:00
Karol Blaszczak
386d773b33 [DOCS] fix typos in install guides (#17388) 2023-05-08 07:12:38 +02:00
Sun Xiaoxia
a5312f70db fix binding wrong core with latency mode in i9-13900 (#17364) 2023-05-06 17:17:11 +08:00
Roman Kazantsev
8f113ef24e [TF FE] Provide single tensor names for inputs and outputs in SavedModel (#17373)
* [TF FE] Provide single tensor names for inputs and outputs in SavedModel

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

* Xfail some cases due to internal problems in TF

* Xfail other layer test

* Extend documentation for function to adjust tensor names

* Use old path of tf2 layer testing for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-05 19:06:59 +02:00
Sebastian Golebiewski
c651bc5f87 [DOCS] Fix links - port (#17356) 2023-05-05 18:14:38 +02:00
Karol Blaszczak
12aab024d1 [GPU] Update dynamic shape document (#17274) (#17384)
porting: https://github.com/openvinotoolkit/openvino/pull/17384

* Update dynamic shape document for GPU
* Applied review comments

authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2023-05-05 17:58:19 +02:00
Ivan Tikhonov
3978511c5c Fix the names copying in TransposeSinking backward transformations (#17283) (#17344)
* Fix tensor names copying in TS transformations

* added a check that sinking is available for all consumers in TS backward transformations

* codestyle

* Apply review comments, add result sorting by tensor names in graph comparator

* delete debug code

* fix RemoveConsumers method implementation

* fix snippet tests

* use reference instead of raw pointer

* add new transformation tests

* fix transformation tests

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-05-05 17:09:44 +02:00
Chen Xu
0de0efd751 [CPU] Fix kernel precision mismatch in Reduce node (#17372)
* [CPU] Fix kernel precision mismatch in Reduce node

* Apply review comments
2023-05-05 14:39:30 +02:00
Sebastian Golebiewski
53e2997909 DOCS shift to rst (#17377) 2023-05-05 10:55:03 +02:00
Maciej Smyk
7779fea76f DOCS shift to rst - Opsets E for 23.0 (#17365) 2023-05-05 10:17:05 +02:00
Sebastian Golebiewski
c785551b57 DOCS shift to rst (#17346) 2023-05-04 13:29:16 +02:00
Roman Kazantsev
8c95c90e45 [TF FE] Use original input types for SavedModel (#17295) (#17335)
Also, refactor TF FE unit-tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 16:26:33 +04:00
Evgenya Stepyreva
bf829eead4 NMS-5 calculate upper-bound (#17332)
* NMS-5 calculate upper-bound

* Test
2023-05-03 15:22:08 +04:00
Roman Kazantsev
1141e90435 [MO][TF FE] Handle constant with undefined value (#17311) (#17327)
Since TF 2.10, native model freezing can produce constants with an undefined value,
i.e. the tensor shape can be arbitrary and the value is []. In this case the tensor is simply filled with
the default value (0 for numerics, "" for strings)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-05-03 12:09:57 +04:00
Roman Kazantsev
15b62d77cc [TF FE] Added additional pruned inputs for MetaGraph support (#17237) (#17326)
* Added handling of additional pruned inputs
Added possible topology of RestoreV2 -> AssignVariableOp
Added additional checks

* Extended tests coverage

Co-authored-by: Georgy Krivoruchko <georgy.krivoruchko@intel.com>
2023-05-03 12:09:20 +04:00
Maxim Vafin
e6347544e2 Fix issue with Pow when both inputs are scalars (#17305) (#17321)
* Fix issue with Pow when both inputs are scalars

* Fix code style
2023-05-03 11:32:13 +04:00
Anton Voronov
fcf261a048 [DOC] small fix for sparse weights decompression feature documentation (#17316) 2023-05-02 15:50:48 +02:00
Tatiana Savina
bba9f3094b [DOCS] Port docs: opsets, import keyword, deprecated options (#17289)
* Added missing import keyword (#17271)

* [DOCS] shift to rst - opsets N (#17267)

* opset to rst

* change list indentations

* fix formula

* add n operations

* add negative and nonzero

* fix link

* specs to rst

* fix matrixnms path

* change path to if

* fix list

* fix format

* DOCS remove deprecated options (#17167)

* DOCS remove deprecated options

* removed a couple more not actual questions

* remove the whole lines completely

* remove a couple of more deprecations

---------

Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
2023-05-02 14:05:03 +02:00
Sergey Shlyapnikov
aa13ab63f5 [GPU] Use BFS processing order for out_of_order queue (#17304) 2023-05-02 15:25:21 +04:00
Tatiana Savina
8f978d2c60 update OTE and Datumaro links (#17269) (#17310) 2023-05-02 13:14:21 +02:00
Sebastian Golebiewski
a349ba7295 DOCS shift to rst - Opsets H & I - for 23.0 (#17307)
* update

* update

* cpp
2023-05-02 11:16:21 +02:00
Vladimir Paramuzov
73442bbc82 [GPU] Don't throw exception if no devices are found (#17302)
* [GPU] Don't throw exception if no devices are found

* Fix CAPI test
2023-05-01 23:18:51 +04:00
Tatiana Savina
76c237da8b [DOCS] Document Model Optimizer Python API port (#17287)
* [DOCS] Document Model Optimizer Python API (#14380)

* Added MO convert_model() documentation.

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Updated Convert_Model pages for PyTorch and TF with PythonAPI info. United TF2 and TF formats lists.

* Added info on flag params, example_input formats list, small corrections.

* Moved MO python API to separate doc. Small text corrections.

* Added TF types conversion description.

* Removed duplicating info.

* Added description of InputCutInfo types and default onnx opset.

* Small correction.

* Changed type table to bullets, added blank lines.

* Added quote marks.

* Removed redunrant bullets.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review.

* Added new line.

* Apply comments from review.d

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added description of lists of parameters.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* Added details about input_shape, example_input.

* Updated PyTorch page.

* Corrected input_signature description.

* Format correction.

* Format correction.

* Format correction.

* Format correction.

* Small correction.

* Small correction.

* Removed input_signature param description.

* Updated text.

* Small correction.

* Small correction.

* Removed not needed examples.

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added new line.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Added titles of examples.

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/MO_Python_API.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix first paragraph

* Update MO_Python_API.md

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-05-01 15:19:11 +04:00
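A small sketch of the parameter style the ported article documents; the file name, tensor names, and shapes below are placeholders:

```python
from openvino.tools.mo import convert_model

ov_model = convert_model(
    "model.pb",
    input="input_1[1,224,224,3]",   # cut and reshape the input
    output="Softmax",               # prune everything past this node
    compress_to_fp16=True,
)
```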
Roman Lyamin
aebea2337e [GPU] Coverity fixes (#17241) (#17281) 2023-05-01 14:35:50 +04:00
Ilya Lavrenov
29c672d6d8 Fixed Python API build for Ubuntu 22.04 with python3.11 (#17297)
* Fixed Python API build for Ubuntu 22.04 with python3.11

* Update ONNX CI docker to test python 3.11 and system pybind11
2023-04-29 03:38:01 +04:00
Maksim Doronin
1f790df33c Fix enable_plugins_xml (#17293) 2023-04-29 00:02:43 +04:00
Ilya Lavrenov
5625424b91 Fixes for OpenCL via brew package (#17273) 2023-04-28 18:10:30 +04:00
Tatiana Savina
c7d0df39b5 remove pre-release note (#17265) 2023-04-28 13:04:31 +02:00
Alina Kladieva
85b57ea2bf Bump Azure refs to 2023/0 (#17264) 2023-04-27 22:09:27 +04:00
1128 changed files with 97023 additions and 20604 deletions


@@ -141,7 +141,6 @@ jobs:
 -DANDROID_STL=c++_shared
 -DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
 -DENABLE_TESTS=ON
--DENABLE_INTEL_GPU=ON
 -DCMAKE_CXX_LINKER_LAUNCHER=ccache
 -DCMAKE_C_LINKER_LAUNCHER=ccache
 -DCMAKE_CXX_COMPILER_LAUNCHER=ccache


@@ -32,13 +32,13 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/openvino_contrib
-ref: master
+ref: releases/2023/0
 - repository: testdata
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
-ref: master
+ref: releases/2023/0
 variables:
 - group: github
@@ -105,7 +105,7 @@ jobs:
 steps:
 - task: UsePythonVersion@0
 inputs:
-versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
+versionSpec: '$(OV_PYTHON_VERSION)' # Setting only major & minor version will download latest release from GH repo example 3.10 will be 3.10.10.
 addToPath: true
 disableDownloadFromRegistry: false
 architecture: 'x64'
@@ -245,6 +245,7 @@ jobs:
 -DCMAKE_CXX_COMPILER=clang++
 -DCMAKE_C_COMPILER=clang
 -DENABLE_SYSTEM_SNAPPY=ON
+-DENABLE_SYSTEM_TBB=ON
 -DCPACK_GENERATOR=$(CMAKE_CPACK_GENERATOR)
 -DBUILD_nvidia_plugin=OFF
 -S $(REPO_DIR)
@@ -366,7 +367,7 @@ jobs:
 displayName: 'Build cpp samples - gcc'
 - script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -b $(BUILD_DIR)/cpp_samples_clang
-env:
+env:
 CC: clang
 CXX: clang++
 displayName: 'Build cpp samples - clang'


@@ -108,17 +108,17 @@ jobs:
- checkout: self
clean: 'true'
submodules: 'true'
path: openvino
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
python3 -m pip install --upgrade pip
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons crossbuild-essential-arm64
# generic dependencies
sudo -E apt --assume-yes install cmake ccache
# Speed up build
sudo -E apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
@@ -126,25 +126,60 @@ jobs:
sudo cp -v ninja /usr/local/bin/
displayName: 'Install dependencies'
- task: CMake@1
inputs:
cmakeArgs: >
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-DENABLE_PYTHON=OFF
-DENABLE_TESTS=ON
-DENABLE_DATA=OFF
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC)
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
-S $(OPENVINO_REPO_DIR)
- script: |
git submodule update --init -- $(OPENVINO_REPO_DIR)/src/plugins
git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/gtest
displayName: 'Init submodules for non Conan dependencies'
- script: |
python3 -m pip install conan
# generate build profile
conan profile detect
# generate host profile for linux_arm64
echo "include(default)" > $(BUILD_OPENVINO)/linux_arm64
echo "[buildenv]" >> $(BUILD_OPENVINO)/linux_arm64
echo "CC=aarch64-linux-gnu-gcc" >> $(BUILD_OPENVINO)/linux_arm64
echo "CXX=aarch64-linux-gnu-g++" >> $(BUILD_OPENVINO)/linux_arm64
# install OpenVINO dependencies
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_C_COMPILER_LAUNCHER=ccache
conan install $(OPENVINO_REPO_DIR)/conanfile.txt \
-pr:h $(BUILD_OPENVINO)/linux_arm64 \
-s:h arch=armv8 \
-of $(BUILD_OPENVINO) \
-b missing
env:
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Install conan and dependencies'
- script: |
source $(BUILD_OPENVINO)/conanbuild.sh
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
-DENABLE_CPPLINT=OFF \
-DENABLE_PYTHON=OFF \
-DENABLE_TESTS=ON \
-DENABLE_DATA=OFF \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_PUGIXML=ON \
-DCMAKE_TOOLCHAIN_FILE=$(BUILD_OPENVINO)/conan_toolchain.cmake \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-S $(OPENVINO_REPO_DIR) \
-B $(BUILD_OPENVINO)
displayName: 'CMake OpenVINO ARM plugin'
source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
displayName: 'CMake configure'
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE)
env:
@@ -152,13 +187,13 @@ jobs:
 CCACHE_TEMPDIR: $(TMP_DIR)/ccache
 CCACHE_BASEDIR: $(Pipeline.Workspace)
 CCACHE_MAXSIZE: 50G
-displayName: 'Build OpenVINO ARM plugin'
+displayName: 'Build OpenVINO Runtime'
 - script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
-displayName: 'Install OpenVINO ARM plugin'
+displayName: 'Install OpenVINO Runtime'
 - task: PublishBuildArtifacts@1
 inputs:
 PathtoPublish: $(Build.ArtifactStagingDirectory)
 ArtifactName: 'openvino_aarch64_linux'
-displayName: 'Publish OpenVINO AArch64 linux package'
+displayName: 'Publish OpenVINO Runtime for ARM'


@@ -35,6 +35,7 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
+ref: releases/2023/0
 variables:
 - group: github


@@ -4,7 +4,7 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/openvino_contrib
-ref: master
+ref: releases/2023/0
 variables:
 - group: github


@@ -42,11 +42,13 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/openvino_contrib
+ref: releases/2023/0
 - repository: testdata
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
+ref: releases/2023/0
 jobs:
 - job: CUDAPlugin_Lin


@@ -34,7 +34,7 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
-ref: master
+ref: releases/2023/0
 jobs:
 - job: Lin_Debian


@@ -4,7 +4,7 @@
 # type: github
 # endpoint: openvinotoolkit
 # name: openvinotoolkit/testdata
-# ref: master
+# ref: releases/2023/0
 jobs:
 - job: Lin_lohika


@@ -35,13 +35,13 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/openvino_contrib
-ref: master
+ref: releases/2023/0
 - repository: testdata
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
-ref: master
+ref: releases/2023/0
 variables:
 - group: github


@@ -32,13 +32,13 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/openvino_contrib
-ref: master
+ref: releases/2023/0
 - repository: testdata
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
-ref: master
+ref: releases/2023/0
 jobs:
 - job: Win


@@ -35,6 +35,7 @@ resources:
 type: github
 endpoint: openvinotoolkit
 name: openvinotoolkit/testdata
+ref: releases/2023/0
 variables:
 - group: github
@@ -116,7 +117,7 @@ jobs:
 -G Ninja ^
 -DENABLE_CPPLINT=OFF ^
 -DENABLE_GAPI_PREPROCESSING=OFF ^
--DENABLE_FASTER_BUILD=ON ^
+-DENABLE_PLUGINS_XML=ON ^
 -DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
 -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
 -DENABLE_PROFILING_ITT=ON ^
@@ -153,7 +154,6 @@ jobs:
 -DVERBOSE_BUILD=ON ^
 -DENABLE_CPPLINT=OFF ^
 -DENABLE_GAPI_PREPROCESSING=OFF ^
--DENABLE_FASTER_BUILD=ON ^
 -DENABLE_PROFILING_ITT=OFF ^
 -DSELECTIVE_BUILD=ON ^
 -DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^


@@ -1,4 +1,4 @@
-FROM ubuntu:22.04
+FROM ubuntu:23.04
 LABEL version=2021.03.30.1
@@ -38,6 +38,7 @@ RUN apt-get update && apt-get -y --no-install-recommends install \
 python3 \
 python3-pip \
 python3-dev \
+pybind11-dev \
 python3-virtualenv \
 cython3 \
 tox && \
@@ -71,5 +72,5 @@ RUN ninja install
 WORKDIR /openvino/src/bindings/python
 ENV OpenVINO_DIR=/openvino/dist/runtime/cmake
 ENV LD_LIBRARY_PATH=/openvino/dist/runtime/lib/intel64:/openvino/dist/runtime/3rdparty/tbb/lib
-ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.10:${PYTHONPATH}
+ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.11:${PYTHONPATH}
 CMD tox

.gitignore

@@ -26,6 +26,7 @@ temp/
 .repo/
 CMakeLists.txt.user
 docs/IE_PLUGIN_DG/html/
+CMakeUserPresets.json
 *.project
 *.cproject


@@ -40,8 +40,6 @@ endif()
# resolving dependencies for the project
message (STATUS "CMAKE_VERSION ......................... " ${CMAKE_VERSION})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "CMAKE_SOURCE_DIR ...................... " ${CMAKE_SOURCE_DIR})
message (STATUS "OpenVINO_SOURCE_DIR ................... " ${OpenVINO_SOURCE_DIR})
message (STATUS "OpenVINO_BINARY_DIR ................... " ${OpenVINO_BINARY_DIR})
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
@@ -66,7 +64,7 @@ endif()
 if(CMAKE_TOOLCHAIN_FILE)
 message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
 endif()
-if(OV_GLIBC_VERSION)
+if(NOT OV_GLIBC_VERSION VERSION_EQUAL 0.0)
 message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
 endif()


@@ -2,14 +2,14 @@
 <img src="docs/img/openvino-logo-purple-black.png" width="400px">
 [![Stable release](https://img.shields.io/badge/version-2022.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.3.0)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
 ![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
 ![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
 [![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
-[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino/badges/version.svg)
+[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino)
 [![brew Status](https://img.shields.io/homebrew/v/openvino)](https://formulae.brew.sh/formula/openvino)
 [![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
 [![Anaconda Downloads](https://anaconda.org/conda-forge/openvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
 [![brew Downloads](https://img.shields.io/homebrew/installs/dy/openvino)](https://formulae.brew.sh/formula/openvino)
 </div>
 ## Contents:
@@ -70,24 +70,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
 <tbody>
 <tr>
 <td rowspan=2>CPU</td>
-<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
+<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
 <td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
 <td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
 </tr>
 <tr>
-<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
+<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
 <td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
 <td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
 </tr>
 <tr>
 <td>GPU</td>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
 <td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
 <td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
 </tr>
 <tr>
 <td>GNA</td>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
 <td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
 <td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
 </tr>
@@ -105,22 +105,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
 </thead>
 <tbody>
 <tr>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
 <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
 <td>Auto plugin enables selecting Intel device for inference automatically</td>
 </tr>
 <tr>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
 <td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
 <td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
 </tr>
 <tr>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
 <td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
 <td>Heterogeneous execution enables automatic inference splitting between several devices</td>
 </tr>
 <tr>
-<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
+<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
 <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
 <td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
 </tr>
@@ -157,10 +157,10 @@ The list of OpenVINO tutorials:
 ## System requirements
 The system requirements vary depending on platform and are available on dedicated pages:
-- [Linux](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_linux_header.html)
-- [Windows](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_windows_header.html)
-- [macOS](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_macos_header.html)
-- [Raspbian](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_raspbian.html)
+- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
+- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
+- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
+- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)
 ## How to build
@@ -189,7 +189,6 @@ Report questions, issues and suggestions, using:
 * [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
 * [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
 * [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
-* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
 * [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
 * [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
@@ -197,7 +196,7 @@ Report questions, issues and suggestions, using:
 \* Other names and brands may be claimed as the property of others.
 [Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
-[OpenVINO™ Runtime]:https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
-[Model Optimizer]:https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
-[Post-Training Optimization Tool]:https://docs.openvino.ai/nightly/pot_introduction.html
+[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
+[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
+[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
 [Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples


@@ -98,10 +98,10 @@ function(ov_download_tbb)
 # TODO: add target_path to be platform specific as well, to avoid following if
 # build oneTBB 2021.2.1 with Visual Studio 2019 (MSVC 14.21)
 RESOLVE_DEPENDENCY(TBB
-ARCHIVE_WIN "oneapi-tbb-2021.2.1-win.zip"
+ARCHIVE_WIN "oneapi-tbb-2021.2.2-win.zip"
 TARGET_PATH "${TEMP}/tbb"
 ENVIRONMENT "TBBROOT"
-SHA256 "d81591673bd7d3d9454054642f8ef799e1fdddc7b4cee810a95e6130eb7323d4"
+SHA256 "103b19a8af288c6a7d83ed3f0d2239c4afd0dd189fc12aad1d34b3c9e78df94b"
 USE_NEW_LOCATION TRUE)
 elseif(ANDROID AND X86_64)
 RESOLVE_DEPENDENCY(TBB


@@ -111,8 +111,8 @@ else()
 set(BIN_FOLDER "bin/${ARCH_FOLDER}")
 endif()
-if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
-# Ninja-Multi specific, see:
+if(CMAKE_GENERATOR STREQUAL "Ninja Multi-Config")
+# 'Ninja Multi-Config' specific, see:
 # https://cmake.org/cmake/help/latest/variable/CMAKE_DEFAULT_BUILD_TYPE.html
 set(CMAKE_DEFAULT_BUILD_TYPE "Release" CACHE STRING "CMake default build type")
 elseif(NOT OV_GENERATOR_MULTI_CONFIG)
@@ -240,7 +240,7 @@ if(ENABLE_LTO)
 LANGUAGES C CXX)
 if(NOT IPO_SUPPORTED)
-set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optmization" FORCE)
+set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optimization" FORCE)
 message(WARNING "IPO / LTO is not supported: ${OUTPUT_MESSAGE}")
 endif()
 endif()
@@ -250,8 +250,8 @@ endif()
 macro(ov_install_static_lib target comp)
 if(NOT BUILD_SHARED_LIBS)
 get_target_property(target_type ${target} TYPE)
-if(${target_type} STREQUAL "STATIC_LIBRARY")
-set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL FALSE)
+if(target_type STREQUAL "STATIC_LIBRARY")
+set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
 endif()
 install(TARGETS ${target} EXPORT OpenVINOTargets
 ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})


@@ -4,23 +4,28 @@
 if(WIN32)
 set(PROGRAMFILES_ENV "ProgramFiles(X86)")
-file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
-set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
-"${PROGRAMFILES}/Windows Kits/10/bin/x64")
+# check that PROGRAMFILES_ENV is defined, because in case of cross-compilation for Windows
+# we don't have such variable
+if(DEFINED ENV{PROGRAMFILES_ENV})
+file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
-message(STATUS "Trying to find apivalidator in: ")
-foreach(wdk_path IN LISTS WDK_PATHS)
-message(" * ${wdk_path}")
-endforeach()
+set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
+"${PROGRAMFILES}/Windows Kits/10/bin/x64")
-find_host_program(ONECORE_API_VALIDATOR
-NAMES apivalidator
-PATHS ${WDK_PATHS}
-DOC "ApiValidator for OneCore compliance")
+message(STATUS "Trying to find apivalidator in: ")
+foreach(wdk_path IN LISTS WDK_PATHS)
+message(" * ${wdk_path}")
+endforeach()
-if(ONECORE_API_VALIDATOR)
-message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
+find_host_program(ONECORE_API_VALIDATOR
+NAMES apivalidator
+PATHS ${WDK_PATHS}
+DOC "ApiValidator for OneCore compliance")
+if(ONECORE_API_VALIDATOR)
+message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
+endif()
+endif()
 endif()

View File

@@ -4,8 +4,13 @@
macro(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160#remarks
set(FUZZING_COMPILER_FLAGS "/fsanitize=fuzzer")
elseif(OV_COMPILER_IS_CLANG)
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}")
@@ -20,6 +25,10 @@ function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
if(ENABLE_FUZZING)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# no extra flags are required
elseif(OV_COMPILER_IS_CLANG)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
endif()
endif()
endfunction(add_fuzzer)
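Taken together, the two hunks make fuzzing builds compiler-aware: MSVC gets /fsanitize=fuzzer and needs no extra link flag, while Clang keeps the libFuzzer and coverage flags. A hypothetical call site for the helper defined above (target and source names are illustrative):

# build a fuzz target from one source file; instrumentation is applied
# only when ENABLE_FUZZING is on and the compiler supports it
add_fuzzer(example_fuzzer fuzz/example_fuzzer.cpp)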

View File

@@ -12,23 +12,17 @@ include(CheckCXXCompilerFlag)
# Defines ie_c_cxx_deprecated variable which contains C / C++ compiler flags
#
macro(ov_disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated "/Qdiag-disable:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated "-diag-disable=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -49,24 +43,18 @@ endmacro()
# Defines ie_c_cxx_deprecated_no_errors variable which contains C / C++ compiler flags
#
macro(ov_deprecated_no_errors)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated_no_errors "/Qdiag-warning:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated_no_errors "-diag-warning=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated_no_errors)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
@@ -101,23 +89,21 @@ endmacro()
# Provides SSE4.2 compilation flags depending on an OS and a compiler
#
macro(ie_sse42_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxSSE4.2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
set(${flags} -xSSE4.2)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xSSE4.2)
else()
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
endif()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -127,20 +113,18 @@ endmacro()
# Provides AVX2 compilation flags depending on an OS and a compiler
#
macro(ie_avx2_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCORE-AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCORE-AVX2)
else()
set(${flags} -mavx2 -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx2 -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -151,24 +135,18 @@ endmacro()
# depending on an OS and a compiler
#
macro(ie_avx512_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCOMMON-AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCOMMON-AVX512)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(${flags} -mavx512f -mfma)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Clang|AppleClang)$")
set(${flags} -mavx512f -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx512f -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()
@@ -265,8 +243,10 @@ endfunction()
function(ov_force_include target scope header_file)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
target_compile_options(${target} ${scope} /FI"${header_file}")
else()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${target} ${scope} -include "${header_file}")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endfunction()
@@ -318,11 +298,11 @@ set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(CMAKE_CL_64)
# Default char Type Is unsigned
# ie_add_compiler_flags(/J)
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
endif()
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# Common options / warnings enabled
#
@@ -335,16 +315,14 @@ if(WIN32)
# This option helps ensure the fewest possible hard-to-find code defects. Similar to -Wall on GNU / Clang
ie_add_compiler_flags(/W3)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
# Handle Large Addresses
@@ -361,42 +339,62 @@ if(WIN32)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /WX")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /WX")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /WX")
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
endif()
#
# Disable noisy warnings
#
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 161: unrecognized pragma
# 177: variable was declared but never referenced
# 556: not matched type of assigned function pointer
# 1744: field of class type without a DLL interface used in a class with a DLL interface
# 1879: unimplemented pragma ignored
# 2586: decorated name length exceeded, name was truncated
# 2651: attribute does not apply to any entity
# 3180: unrecognized OpenMP pragma
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
# 15335: was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,1879,2586,2651,3180,11075,15335)
endif()
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
#
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
# In order to avoid issues with ninja we are replacing default flag instead of having two of them
# and observing warning D9025 about flag override
#
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
# Warnings as errors
#
if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
#
# Disable noisy warnings
#
# 161: unrecognized pragma
ie_add_compiler_flags(/Qdiag-disable:161)
# 177: variable was declared but never referenced
ie_add_compiler_flags(/Qdiag-disable:177)
# 556: not matched type of assigned function pointer
ie_add_compiler_flags(/Qdiag-disable:556)
# 1744: field of class type without a DLL interface used in a class with a DLL interface
ie_add_compiler_flags(/Qdiag-disable:1744)
# 1879: unimplemented pragma ignored
ie_add_compiler_flags(/Qdiag-disable:1879)
# 2586: decorated name length exceeded, name was truncated
ie_add_compiler_flags(/Qdiag-disable:2586)
# 2651: attribute does not apply to any entity
ie_add_compiler_flags(/Qdiag-disable:2651)
# 3180: unrecognized OpenMP pragma
ie_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
else()
#
# Common enabled warnings
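The rewritten macros above all follow one dispatch shape: branch on CMAKE_CXX_COMPILER_ID first, then on the OS inside each compiler branch, and warn on anything unknown. A hypothetical consumer of the SIMD helpers (file and variable names are illustrative):

# request AVX2 flags for the current compiler and apply them to one file
ie_avx2_optimization_flags(avx2_flags)
set_source_files_properties(kernels_avx2.cpp PROPERTIES COMPILE_OPTIONS "${avx2_flags}")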

View File

@@ -5,7 +5,9 @@
include(CheckCXXCompilerFlag)
if (ENABLE_SANITIZER)
if (WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# the flag is available since MSVC 2019 16.9
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160
check_cxx_compiler_flag("/fsanitize=address" SANITIZE_ADDRESS_SUPPORTED)
if (SANITIZE_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /fsanitize=address")
@@ -14,21 +16,23 @@ if (ENABLE_SANITIZER)
"Please, check requirements:\n"
"https://github.com/openvinotoolkit/openvino/wiki/AddressSanitizer-and-LeakSanitizer")
endif()
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=address")
check_cxx_compiler_flag("-fsanitize-recover=address" SANITIZE_RECOVER_ADDRESS_SUPPORTED)
if (SANITIZE_RECOVER_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=address")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
if (ENABLE_UB_SANITIZER)
if (WIN32)
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows")
if(ENABLE_UB_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
endif()
# TODO: Remove -fno-sanitize=null as thirdparty/ocl/clhpp_headers UBSAN compatibility resolved:
# https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
# Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
@@ -48,43 +52,50 @@ if (ENABLE_UB_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-sanitize=function")
endif()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 fix
if(CMAKE_COMPILER_IS_GNUCXX)
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 is fixed
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -Wno-maybe-uninitialized")
endif()
check_cxx_compiler_flag("-fsanitize-recover=undefined" SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if (SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if(SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=undefined")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=undefined")
endif()
if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
if(ENABLE_THREAD_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "Thread sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
# common sanitizer options
if (DEFINED SANITIZER_COMPILER_FLAGS)
if(DEFINED SANITIZER_COMPILER_FLAGS)
# ensure symbols are present
if (NOT WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -g -fno-omit-frame-pointer")
if(NOT OV_COMPILER_IS_CLANG)
if(CMAKE_COMPILER_IS_GNUCXX)
# GPU plugin tests compilation is slow with -fvar-tracking-assignments on GCC.
# Clang has no var-tracking-assignments.
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-var-tracking-assignments")
endif()
# prevent unloading libraries at runtime, so sanitizer can resolve their symbols
if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(NOT OV_COMPILER_IS_APPLECLANG)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
if(OV_COMPILER_IS_CLANG AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
endif()
else()
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
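The feature probes above rely on CMake's CheckCXXCompilerFlag module; a standalone sketch of the probe-then-append pattern, assuming the same flag and variable names:

include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-fsanitize-recover=address" SANITIZE_RECOVER_ADDRESS_SUPPORTED)
if(SANITIZE_RECOVER_ADDRESS_SUPPORTED)
    # allow execution to continue after an ASan report
    string(APPEND SANITIZER_COMPILER_FLAGS " -fsanitize-recover=address")
endif()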

View File

@@ -2,61 +2,68 @@
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wformat -Wformat-security")
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG OR
(UNIX AND CMAKE_CXX_COMPILER_ID STREQUAL "Intel"))
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wformat -Wformat-security")
if (NOT ENABLE_SANITIZER)
if(EMSCRIPTEN)
# emcc does not support fortification, see:
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
# ASan does not support fortification https://github.com/google/sanitizers/issues/247
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
endif()
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_COMPILER_IS_GNUCXX)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
endif()
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
# Remove all symbol table and relocation information from the executable
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -s")
endif()
if(NOT MINGW)
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(OV_COMPILER_IS_CLANG)
if(EMSCRIPTEN)
# emcc does not support fortification
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /guard:cf")
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
if(ENABLE_QSPECTRE)
ie_add_compiler_flags(/Qspectre)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /sdl /guard:cf")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
if(ENABLE_QSPECTRE)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /Qspectre")
endif()
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
unset(OV_C_CXX_FLAGS)
unset(OV_LINKER_FLAGS)

View File

@@ -641,7 +641,7 @@ _repository = None
# Files to exclude from linting. This is set by the --exclude flag.
_excludes = None
# Whether to supress PrintInfo messages
# Whether to suppress PrintInfo messages
_quiet = False
# The allowed line length of files.
@@ -752,7 +752,7 @@ def ParseNolintSuppressions(filename, raw_line, linenum, error):
'Unknown NOLINT error category: %s' % category)
def ProcessGlobalSuppresions(lines):
def ProcessGlobalSuppressions(lines):
"""Updates the list of global error suppressions.
Parses any lint directives in the file that have global effect.
@@ -780,7 +780,7 @@ def IsErrorSuppressedByNolint(category, linenum):
"""Returns true if the specified error category is suppressed on this line.
Consults the global error_suppressions map populated by
ParseNolintSuppressions/ProcessGlobalSuppresions/ResetNolintSuppressions.
ParseNolintSuppressions/ProcessGlobalSuppressions/ResetNolintSuppressions.
Args:
category: str, the category of the error.
@@ -6203,7 +6203,7 @@ def ProcessFileData(filename, file_extension, lines, error,
ResetNolintSuppressions()
CheckForCopyright(filename, lines, error)
ProcessGlobalSuppresions(lines)
ProcessGlobalSuppressions(lines)
RemoveMultiLineComments(filename, lines, error)
clean_lines = CleansedLines(lines)

View File

@@ -74,7 +74,12 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG;NOT WIN32" OFF)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC" AND MSVC_VERSION GREATER_EQUAL 1930)
# Visual Studio 2022: 1930-1939 = VS 17.0 (v143 toolset)
set(_msvc_version_2022 ON)
endif()
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG OR _msvc_version_2022" OFF)
#
# Check features
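ie_dependent_option in the hunk above appears to behave like CMake's cmake_dependent_option; a stock-CMake sketch of the new gate, under that assumption:

include(CMakeDependentOption)
# ENABLE_FUZZING is user-visible (default OFF) only when Clang or the
# VS 2022 toolset can instrument for fuzzing; otherwise it is forced OFF
cmake_dependent_option(ENABLE_FUZZING "instrument build for fuzzing" OFF
                       "OV_COMPILER_IS_CLANG OR _msvc_version_2022" OFF)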

View File

@@ -171,7 +171,7 @@ macro(ov_add_frontend)
endforeach()
# Disable all warnings for generated code
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED TRUE)
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED ON)
# Create library
add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}
@@ -204,8 +204,7 @@ macro(ov_add_frontend)
ov_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})
target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES} PUBLIC openvino::runtime)
ov_add_library_version(${TARGET_NAME})
# WA for TF frontends which always require protobuf (not protobuf-lite)
@@ -216,23 +215,34 @@ macro(ov_add_frontend)
if(proto_files)
if(OV_FRONTEND_PROTOBUF_LITE)
if(NOT protobuf_lite_installed)
ov_install_static_lib(${Protobuf_LITE_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_lite_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LITE_LIBRARIES})
set(protobuf_target_name libprotobuf-lite)
set(protobuf_install_name "protobuf_lite_installed")
else()
if(NOT protobuf_installed)
ov_install_static_lib(${Protobuf_LIBRARIES} ${OV_CPACK_COMP_CORE})
set(protobuf_installed ON CACHE INTERNAL "" FORCE)
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LIBRARIES})
set(protobuf_target_name libprotobuf)
set(protobuf_install_name "protobuf_installed")
endif()
if(ENABLE_SYSTEM_PROTOBUF)
# use imported target name with namespace
set(protobuf_target_name "protobuf::${protobuf_target_name}")
endif()
# prptobuf generated code emits -Wsuggest-override error
link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
# protobuf generated code emits -Wsuggest-override error
if(SUGGEST_OVERRIDE_SUPPORTED)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-suggest-override)
endif()
# install protobuf if it is not installed yet
if(NOT ${protobuf_install_name})
if(ENABLE_SYSTEM_PROTOBUF)
# we have to add find_package(Protobuf) to the OpenVINOConfig.cmake for static build
# no needs to install protobuf
else()
ov_install_static_lib(${protobuf_target_name} ${OV_CPACK_COMP_CORE})
set("${protobuf_install_name}" ON CACHE INTERNAL "" FORCE)
endif()
endif()
endif()
if(flatbuffers_schema_files)
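A condensed sketch of the linking decision this hunk introduces: pick lite vs. full protobuf, and add the namespace only when the system (imported) package is used. The frontend target name is hypothetical:

if(OV_FRONTEND_PROTOBUF_LITE)
    set(protobuf_target_name libprotobuf-lite)
else()
    set(protobuf_target_name libprotobuf)
endif()
if(ENABLE_SYSTEM_PROTOBUF)
    set(protobuf_target_name "protobuf::${protobuf_target_name}")  # imported targets are namespaced
endif()
link_system_libraries(example_frontend PRIVATE ${protobuf_target_name})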

View File

@@ -2,41 +2,6 @@
# SPDX-License-Identifier: Apache-2.0
#
include(target_flags)
# TODO: remove this function: we must not have conditions for particular OS names or versions
# cmake needs to look at /etc files only when we build for Linux on Linux
if(CMAKE_HOST_LINUX AND LINUX)
function(get_linux_name res_var)
if(EXISTS "/etc/lsb-release")
# linux version detection using cat /etc/lsb-release
file(READ "/etc/lsb-release" release_data)
set(name_regex "DISTRIB_ID=([^ \n]*)\n")
set(version_regex "DISTRIB_RELEASE=([0-9]+(\\.[0-9]+)?)")
else()
execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;
OUTPUT_VARIABLE release_data
RESULT_VARIABLE result)
string(REPLACE "Red Hat" "CentOS" release_data "${release_data}")
set(name_regex "NAME=\"([^ \"\n]*).*\"\n")
set(version_regex "VERSION=\"([0-9]+(\\.[0-9]+)?)[^\n]*\"")
endif()
string(REGEX MATCH ${name_regex} name ${release_data})
set(os_name ${CMAKE_MATCH_1})
string(REGEX MATCH ${version_regex} version ${release_data})
set(os_name "${os_name} ${CMAKE_MATCH_1}")
if(os_name)
set(${res_var} ${os_name} PARENT_SCOPE)
else ()
set(${res_var} NOTFOUND PARENT_SCOPE)
endif ()
endfunction()
else()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()
endif ()
function(get_linux_name res_var)
set(${res_var} NOTFOUND PARENT_SCOPE)
endfunction()

View File

@@ -99,6 +99,10 @@ function(ov_native_compile_external_project)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}")
endif()
if(DEFINED CMAKE_MAKE_PROGRAM)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}")
endif()
ExternalProject_Add(${ARG_TARGET_NAME}
# Directory Options
SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"
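A self-contained sketch of why forwarding CMAKE_MAKE_PROGRAM matters: without it, the nested native configure step may not find the outer build's tool (for example, a ninja binary that is not on PATH). Target and directory names are hypothetical:

include(ExternalProject)
set(extra_cmake_args "")
if(DEFINED CMAKE_MAKE_PROGRAM)
    list(APPEND extra_cmake_args "-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}")
endif()
ExternalProject_Add(example_native_tool
    SOURCE_DIR  "${CMAKE_CURRENT_SOURCE_DIR}/tool"
    CMAKE_ARGS  ${extra_cmake_args}
    INSTALL_COMMAND "")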

View File

@@ -25,7 +25,7 @@ macro(ov_common_libraries_cpack_set_dirs)
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR licenses)
ov_get_pyversion(pyversion)
if(pyversion)

View File

@@ -31,6 +31,7 @@ macro(ov_debian_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
# non-native stuff

View File

@@ -29,6 +29,7 @@ macro(ov_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR runtime/cmake)
set(OV_CPACK_OPENVINO_CMAKEDIR runtime/cmake)
set(OV_CPACK_DOCDIR docs)
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
set(OV_CPACK_SAMPLESDIR samples)
set(OV_CPACK_WHEELSDIR tools)
set(OV_CPACK_TOOLSDIR tools)
@@ -99,10 +100,10 @@ endif()
# if <FILE> is a symlink, we resolve it, but install file with a name of symlink
#
function(ov_install_with_name file component)
if((APPLE AND file MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(file MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
get_filename_component(actual_name "${file}" NAME)
if((APPLE AND actual_name MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
(actual_name MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
if(IS_SYMLINK "${file}")
get_filename_component(actual_name "${file}" NAME)
get_filename_component(file "${file}" REALPATH)
set(install_rename RENAME "${actual_name}")
endif()
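The fix above matches against the name of the file as given (the symlink itself), not its resolved path; a self-contained sketch of the resolve-and-rename install pattern, with hypothetical function name and destination:

function(example_install_versioned_lib file component)
    get_filename_component(actual_name "${file}" NAME)
    if(IS_SYMLINK "${file}")
        # install the real file, but keep the symlink's name
        get_filename_component(file "${file}" REALPATH)
        set(install_rename RENAME "${actual_name}")
    endif()
    install(FILES "${file}" DESTINATION lib COMPONENT ${component} ${install_rename})
endfunction()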
@@ -162,7 +163,7 @@ elseif(CPACK_GENERATOR STREQUAL "RPM")
include(packaging/rpm/rpm)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(packaging/nsis)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(packaging/common-libraries)
endif()

View File

@@ -22,6 +22,11 @@ macro(ov_rpm_cpack_set_dirs)
set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
# TODO:
# 1. define python installation directories for RPM packages
# 2. make sure only a single version of python API can be installed at the same time (define conflicts section)
# set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)
ov_get_pyversion(pyversion)

View File

@@ -17,20 +17,44 @@ if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(arch_flag X86_64)
set(host_arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(arch_flag X86)
set(host_arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(arch_flag AARCH64)
set(host_arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(arch_flag ARM)
set(host_arch_flag ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(arch_flag RISCV64)
set(host_arch_flag RISCV64)
endif()
set(HOST_${arch_flag} ON)
set(HOST_${host_arch_flag} ON)
macro(_ie_process_msvc_generator_platform arch_flag)
macro(_ov_detect_arch_by_processor_type)
if(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
endif()
endmacro()
macro(_ov_process_msvc_generator_platform)
# if cmake -A <ARM|ARM64|x64|Win32> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(AARCH64 ON)
@@ -41,45 +65,30 @@ macro(_ie_process_msvc_generator_platform arch_flag)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(X86 ON)
else()
set(${arch_flag} ON)
_ov_detect_arch_by_processor_type()
endif()
endmacro()
# TODO: why OpenCV is found by cmake
if(MSVC64 OR MINGW64)
_ie_process_msvc_generator_platform(${arch_flag})
_ov_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
_ie_process_msvc_generator_platform(${arch_flag})
elseif(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(AARCH64 ON)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(X86_64 ON)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(UNIVERSAL2 ON)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(AARCH64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(RISCV64 ON)
_ov_process_msvc_generator_platform()
else()
_ov_detect_arch_by_processor_type()
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
set(EMSCRIPTEN ON)
endif()
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN))
if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN OR CYGWIN))
set(LINUX ON)
endif()
if(NOT DEFINED CMAKE_HOST_LINUX AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
if(CMAKE_VERSION VERSION_LESS 3.25 AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
# the variable is available since 3.25
# https://cmake.org/cmake/help/latest/variable/CMAKE_HOST_LINUX.html
set(CMAKE_HOST_LINUX ON)
endif()

View File

@@ -40,6 +40,7 @@ function(ieTargetLinkWholeArchive targetName)
"-Wl,-noall_load"
)
else()
# non-Apple Clang and GCC / MinGW
list(APPEND libs
"-Wl,--whole-archive"
${staticLib}

View File

@@ -22,7 +22,7 @@ else()
set(ENABLE_INTEL_GPU_DEFAULT OFF)
endif()
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
# oneDNN doesn't support old compilers and android builds for now, so we'll
@@ -34,6 +34,10 @@ endif()
ie_dependent_option (ENABLE_ONEDNN_FOR_GPU "Enable oneDNN with GPU support" ${ENABLE_ONEDNN_FOR_GPU_DEFAULT} "ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library though INTEL_VTUNE_DIR variable." OFF)
ie_option_enum(ENABLE_PROFILING_FILTER "Enable or disable ITT counter groups.\
@@ -81,41 +85,45 @@ ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in Open
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
ie_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF)
ie_dependent_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_IR_V7_READER "Enables IR v7 reader" ${BUILD_SHARED_LIBS} "ENABLE_TESTS;ENABLE_INTEL_GNA" OFF)
ie_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON)
ie_dependent_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON "NOT MINGW64" OFF)
ie_option (ENABLE_MULTI "Enables MULTI Device Plugin" ON)
ie_option (ENABLE_AUTO "Enables AUTO Device Plugin" ON)
ie_option (ENABLE_AUTO_BATCH "Enables Auto-Batching Plugin" ON)
ie_option (ENABLE_HETERO "Enables Hetero Device Plugin" ON)
ie_option (ENABLE_TEMPLATE "Enable template plugin" ON)
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "NOT BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_dependent_option (ENABLE_BEH_TESTS "tests oriented to check OpenVINO Runtime API correctness" ON "ENABLE_TESTS" OFF)
ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)
ie_option (ENABLE_OPENCV "enables custom OpenCV download" OFF)
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
set(OPENVINO_EXTRA_MODULES "" CACHE STRING "Extra paths for extra modules to include into OpenVINO build")
ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" ON)
if(CMAKE_HOST_LINUX AND LINUX)
# Debian packages are enabled on Ubuntu systems
# so, system TBB / pugixml / OpenCL can be tried for usage
@@ -131,40 +139,37 @@ else()
set(ENABLE_SYSTEM_TBB_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
endif()
if(BUILD_SHARED_LIBS)
set(ENABLE_SYSTEM_PUGIXML_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
else()
# for static libraries case libpugixml.a must be compiled with -fPIC
# but we still need an ability to compile with system PugiXML and BUILD_SHARED_LIBS
# for Conan case where everything is compiled statically
set(ENABLE_SYSTEM_PUGIXML_DEFAULT OFF)
endif()
# user wants to use their own TBB version, specified either via env vars or cmake options
if(DEFINED ENV{TBBROOT} OR DEFINED ENV{TBB_DIR} OR DEFINED TBB_DIR OR DEFINED TBBROOT)
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()
# for static libraries case libpugixml.a must be compiled with -fPIC
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Use the system version of OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS;ENABLE_INTEL_GPU" OFF)
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
"ENABLE_OV_TF_FRONTEND" ON)
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Enables use of system protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_TBB "Enables use of system TBB" ${ENABLE_SYSTEM_TBB_DEFAULT}
"THREADING MATCHES TBB" OFF)
# TODO: turn it off by default during the work on cross-os distribution, because pugixml is not
# available out of box on all systems (like RHEL, UBI)
ie_option (ENABLE_SYSTEM_PUGIXML "Enables use of system PugiXML" ${ENABLE_SYSTEM_PUGIXML_DEFAULT})
# the option is on by default, because we use only flatc compiler and don't use any libraries
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
ie_dependent_option(ENABLE_SYSTEM_SNAPPY "Enables use of system version of snappy" OFF "ENABLE_SNAPPY_COMPRESSION;BUILD_SHARED_LIBS" OFF)
ie_dependent_option (ENABLE_SYSTEM_OPENCL "Enables use of system OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT}
"ENABLE_INTEL_GPU" OFF)
# the option is turned off by default, because we compile our own static version of protobuf
# with LTO and -fPIC options, while system one does not have such flags
ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND" OFF)
# the option is turned off by default, because we don't want to have a dependency on libsnappy.so
ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
"ENABLE_SNAPPY_COMPRESSION" OFF)
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)

View File

@@ -10,8 +10,8 @@ macro(ov_cpack_settings)
set(cpack_components_all ${CPACK_COMPONENTS_ALL})
unset(CPACK_COMPONENTS_ALL)
foreach(item IN LISTS cpack_components_all)
# filter out some components, which are not needed to be wrapped to conda-forge | brew
if(# python is not a part of conda | brew
# filter out some components, which are not needed to be wrapped to conda-forge | brew | conan
if(# python is not a part of conda | brew | conan
NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
# python wheels are not needed to be wrapped by conda | brew packages
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND

View File

@@ -93,7 +93,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with debian packages from Intel install team
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0
)
#

View File

@@ -6,7 +6,7 @@ if(CPACK_GENERATOR STREQUAL "DEB")
include(cmake/packaging/debian.cmake)
elseif(CPACK_GENERATOR STREQUAL "RPM")
include(cmake/packaging/rpm.cmake)
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
include(cmake/packaging/common-libraries.cmake)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
include(cmake/packaging/nsis.cmake)

View File

@@ -79,7 +79,7 @@ macro(ov_cpack_settings)
# - 2022.1.0 is the last public release with rpm packages from Intel install team
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")

View File

@@ -142,6 +142,14 @@ if(ENABLE_SYSTEM_PUGIXML)
endif()
endif()
set(_IE_nlohmann_json_FOUND "@nlohmann_json_FOUND@")
if(_IE_nlohmann_json_FOUND)
find_dependency(nlohmann_json)
set_target_properties(nlohmann_json::nlohmann_json PROPERTIES IMPORTED_GLOBAL ON)
add_library(IE::nlohmann_json ALIAS nlohmann_json::nlohmann_json)
endif()
unset(_IE_nlohmann_json_FOUND)
# inherit OpenCV from main IE project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)

View File

@@ -85,9 +85,9 @@
#
# `OpenVINO_VERSION_MAJOR`
# Major version component
#
#
# `OpenVINO_VERSION_MINOR`
# minor version component
# Minor version component
#
# `OpenVINO_VERSION_PATCH`
# Patch version component
@@ -138,7 +138,7 @@ endmacro()
macro(_ov_find_tbb)
set(THREADING "@THREADING@")
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
set(enable_pkgconfig_tbb "@tbb_FOUND@")
# try tbb.pc
@@ -153,10 +153,10 @@ macro(_ov_find_tbb)
endif()
pkg_search_module(tbb
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
tbb)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
@@ -223,28 +223,185 @@ macro(_ov_find_tbb)
PATHS ${_tbb_bind_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
unset(_tbb_bind_dir)
endif()
unset(install_tbbbind)
endif()
endmacro()
macro(_ov_find_pugixml)
set(_OV_ENABLE_SYSTEM_PUGIXML "@ENABLE_SYSTEM_PUGIXML@")
if(_OV_ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface AND NOT ANDROID)
_ov_find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
_ov_find_dependency(PugiXML REQUIRED)
endif()
if(PugiXML_FOUND)
if(TARGET pugixml)
set(_ov_pugixml_target pugixml)
elseif(TARGET pugixml::pugixml)
set(_ov_pugixml_target pugixml::pugixml)
endif()
if(OpenVINODeveloperPackage_DIR)
set_property(TARGET ${_ov_pugixml_target} PROPERTY IMPORTED_GLOBAL ON)
# align with build tree
add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
endif()
unset(_ov_pugixml_target)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET
GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
if(OpenVINODeveloperPackage_DIR)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
endif()
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
endmacro()
macro(_ov_find_itt)
set(_ENABLE_PROFILING_ITT "@ENABLE_PROFILING_ITT@")
# whether 'ittapi' is found via find_package
set(_ENABLE_SYSTEM_ITTAPI "@ittapi_FOUND@")
if(_ENABLE_PROFILING_ITT AND _ENABLE_SYSTEM_ITTAPI)
_ov_find_dependency(ittapi)
endif()
unset(_ENABLE_PROFILING_ITT)
unset(_ENABLE_SYSTEM_ITTAPI)
endmacro()
macro(_ov_find_ade)
set(_OV_ENABLE_GAPI_PREPROCESSING "@ENABLE_GAPI_PREPROCESSING@")
# whether 'ade' is found via find_package
set(_ENABLE_SYSTEM_ADE "@ade_FOUND@")
if(_OV_ENABLE_GAPI_PREPROCESSING AND _ENABLE_SYSTEM_ADE)
_ov_find_dependency(ade 0.1.2)
endif()
unset(_OV_ENABLE_GAPI_PREPROCESSING)
unset(_ENABLE_SYSTEM_ADE)
endmacro()
macro(_ov_find_intel_cpu_dependencies)
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
if(_OV_ENABLE_CPU_ACL)
if(_ov_as_external_package)
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set(_ov_find_acl_options NO_DEFAULT_PATH)
set(_ov_find_acl_path "${CMAKE_CURRENT_LIST_DIR}")
else()
set_and_check(_ov_find_acl_path "@PACKAGE_FIND_ACL_PATH@")
endif()
_ov_find_dependency(ACL
NO_MODULE
PATHS "${_ov_find_acl_path}"
${_ov_find_acl_options})
unset(ARM_COMPUTE_LIB_DIR)
unset(_ov_find_acl_path)
unset(_ov_find_acl_options)
endif()
unset(_OV_ENABLE_CPU_ACL)
endmacro()
macro(_ov_find_intel_gpu_dependencies)
set(_OV_ENABLE_INTEL_GPU "@ENABLE_INTEL_GPU@")
set(_OV_ENABLE_SYSTEM_OPENCL "@ENABLE_SYSTEM_OPENCL@")
if(_OV_ENABLE_INTEL_GPU AND _OV_ENABLE_SYSTEM_OPENCL)
set(_OV_OpenCLICDLoader_FOUND "@OpenCLICDLoader_FOUND@")
if(_OV_OpenCLICDLoader_FOUND)
_ov_find_dependency(OpenCLICDLoader)
else()
_ov_find_dependency(OpenCL)
endif()
unset(_OV_OpenCLICDLoader_FOUND)
endif()
unset(_OV_ENABLE_INTEL_GPU)
unset(_OV_ENABLE_SYSTEM_OPENCL)
endmacro()
macro(_ov_find_intel_gna_dependencies)
set(_OV_ENABLE_INTEL_GNA "@ENABLE_INTEL_GNA@")
if(_OV_ENABLE_INTEL_GNA AND NOT libGNA_FOUND)
if(_OV_ENABLE_INTEL_GNA)
set_and_check(GNA_PATH "@PACKAGE_GNA_PATH@")
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
unset(GNA_PATH)
endif()
unset(_OV_ENABLE_INTEL_GNA)
endmacro()
macro(_ov_find_protobuf_frontend_dependency)
set(_OV_ENABLE_SYSTEM_PROTOBUF "@ENABLE_SYSTEM_PROTOBUF@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_PROTOBUF AND NOT TARGET protobuf::libprotobuf)
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_PROTOBUF)
endmacro()
macro(_ov_find_tensorflow_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_SNAPPY "@ENABLE_SYSTEM_SNAPPY@")
set(_ov_snappy_lib "@ov_snappy_lib@")
# TODO: remove check for target existence
if(_OV_ENABLE_SYSTEM_SNAPPY AND NOT TARGET ${_ov_snappy_lib})
_ov_find_dependency(Snappy @Snappy_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_SNAPPY)
unset(_ov_snappy_lib)
set(PACKAGE_PREFIX_DIR ${_ov_package_prefix_dir})
endmacro()
macro(_ov_find_onnx_frontend_dependencies)
set(_OV_ENABLE_SYSTEM_ONNX "@ENABLE_SYSTEM_ONNX@")
if(_OV_ENABLE_SYSTEM_ONNX)
_ov_find_dependency(ONNX @ONNX_VERSION@ EXACT)
endif()
unset(_OV_ENABLE_SYSTEM_ONNX)
endmacro()
function(_ov_target_no_deprecation_error)
if(NOT MSVC)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
@@ -265,13 +422,41 @@ endfunction()
# OpenVINO config
#
cmake_policy(PUSH)
# we need CMP0057 to allow IN_LIST in if() command
if(POLICY CMP0057)
cmake_policy(SET CMP0057 NEW)
else()
message(FATAL_ERROR "OpenVINO requires CMake 3.3 or newer")
endif()
# need to store current PACKAGE_PREFIX_DIR, because it's overwritten by sub-package one
set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
set(_OV_ENABLE_OPENVINO_BUILD_SHARED "@BUILD_SHARED_LIBS@")
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
endif()
if(NOT _OV_ENABLE_OPENVINO_BUILD_SHARED)
# common openvino dependencies
_ov_find_tbb()
_ov_find_itt()
_ov_find_pugixml()
# preprocessing dependencies
_ov_find_ade()
# frontend dependencies
_ov_find_protobuf_frontend_dependency()
_ov_find_tensorflow_frontend_dependencies()
_ov_find_onnx_frontend_dependencies()
# plugin dependencies
_ov_find_intel_cpu_dependencies()
_ov_find_intel_gpu_dependencies()
_ov_find_intel_gna_dependencies()
endif()
@@ -279,13 +464,26 @@ _ov_find_dependency(Threads)
unset(_OV_ENABLE_OPENVINO_BUILD_SHARED)
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
set(_ov_imported_libs openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow
openvino::frontend::pytorch openvino::frontend::tensorflow_lite)
if(_ov_as_external_package)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target})
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
set_property(TARGET ${target} PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO RELEASE)
endif()
unset(imported_configs)
endif()
endforeach()
# WA for cmake version < 3.16 which does not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
foreach(type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
foreach(tbb_target TBB::tbb TBB::tbbmalloc PkgConfig::tbb)
if(TARGET ${tbb_target})
@@ -326,12 +524,12 @@ endif()
# Apply common functions
#
foreach(target openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow)
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target} AND _ov_as_external_package)
_ov_target_no_deprecation_error(${target})
endif()
endforeach()
unset(_ov_imported_libs)
unset(_ov_as_external_package)
# restore PACKAGE_PREFIX_DIR
@@ -349,3 +547,7 @@ unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND)
cmake_policy(POP)
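Two pieces of this config file are worth isolating: the CMP0057 policy enables the IN_LIST operator, and MAP_IMPORTED_CONFIG_RELWITHDEBINFO lets RelWithDebInfo consumers fall back to the exported Release binaries. A minimal sketch using one of the exported targets:

cmake_policy(SET CMP0057 NEW)  # allow IN_LIST in if()
get_target_property(imported_configs openvino::runtime IMPORTED_CONFIGURATIONS)
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
    set_property(TARGET openvino::runtime
                 PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO RELEASE)
endif()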

View File

@@ -56,6 +56,7 @@ find_dependency(OpenVINO
NO_DEFAULT_PATH)
_ov_find_tbb()
_ov_find_pugixml()
foreach(component @openvino_export_components@)
# TODO: remove legacy targets from some tests
@@ -65,58 +66,6 @@ foreach(component @openvino_export_components@)
# endif()
endforeach()
if(ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface)
find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
find_dependency(PugiXML)
endif()
if(PugiXML_FOUND)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(openvino::pugixml ALIAS pugixml)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()
pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET GLOBAL
pugixml)
unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)
if(pugixml_FOUND)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()
# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED GLOBAL)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()
# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)

View File

@@ -0,0 +1,95 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Prerequisites:
#
# Build platform: Ubuntu
# apt-get install mingw-w64 mingw-w64-tools g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64
#
# Build platform: macOS
# brew install mingw-w64
#
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
set(PKG_CONFIG_EXECUTABLE x86_64-w64-mingw32-pkg-config CACHE PATH "Path to Windows x86_64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
SET(WIN32)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_program(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(WIN32)
SET(UNIX)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_package(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()


@@ -24,7 +24,7 @@ set(CMAKE_LINKER ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-ld)
set(CMAKE_OBJCOPY ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objcopy)
set(CMAKE_OBJDUMP ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objdump)
set(CMAKE_READELF ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-readelf)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to ARM64 pkg-config")
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to RISC-V pkg-config")
# Don't run the linker on compiler check
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)


@@ -0,0 +1,75 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR amd64)
set(CMAKE_C_COMPILER x86_64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER x86_64-linux-gnu-g++)
set(CMAKE_STRIP x86_64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to amd64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()

conanfile.txt

@@ -0,0 +1,33 @@
[requires]
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/[>=3.20.3]
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/[>=2022.09.30]
# opencl-clhpp-headers/[>=2022.09.30]
opencl-headers/[>=2022.09.30]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
onnx/1.13.1
nlohmann_json/[>=3.1.1]
pybind11/[>=2.10.1]
flatbuffers/[>=22.9.24]
[tool_requires]
cmake/[>=3.15]
patchelf/[>=0.12]
protobuf/[>=3.20.3]
flatbuffers/[>=22.9.24]
[options]
protobuf/*:lite=True
onetbb/*:tbbmalloc=True
onetbb/*:tbbproxy=True
flatbuffers/*:header_only=True
[generators]
CMakeDeps
CMakeToolchain


@@ -77,7 +77,7 @@ function(build_docs)
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()


@@ -0,0 +1,71 @@
# Datumaro {#datumaro_documentation}
@sphinxdirective
Datumaro provides basic data import/export (IE) for more than 35 public vision data formats, together with manipulation functionalities such as validation, correction, filtration, and transformation. To enable web-scale training, it further aims to merge multiple heterogeneous datasets through its comparator and merger components. Datumaro is integrated into Geti™, OpenVINO™ Training Extensions, and CVAT for ease of data preparation. Datumaro is open-source and available on `GitHub <https://github.com/openvinotoolkit/datumaro>`__.
Refer to the official `documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__ to learn more, and explore the `Jupyter notebooks <https://github.com/openvinotoolkit/datumaro/tree/develop/notebooks>`__ for hands-on Datumaro practice.
Detailed Workflow
#################
.. image:: ./_static/images/datumaro.png
1. To start working with Datumaro, download public datasets or prepare your own annotated dataset.
.. note::
Datumaro provides the `datum download` CLI command for downloading `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.
2. Import data into Datumaro and manipulate the dataset to improve data quality, using `Validator`, `Corrector`, and `Filter`.
3. Compare two datasets and transform the label schemas (category information) before merging them.
4. Merge the two datasets into a single large-scale dataset.
.. note::
There are some choices of merger, i.e., `ExactMerger`, `IntersectMerger`, and `UnionMerger`.
5. Split the unified dataset into subsets, e.g., `train`, `valid`, and `test`, using `Splitter`.
.. note::
You can split data into subsets with a given ratio, based on either the number of samples or the number of annotations. See `SplitTask` for the task-specific split.
6. Export the cleaned and unified dataset for follow-up workflows such as model training.
Go to :doc:`OpenVINO™ Training Extensions <ote_documentation>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
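For illustration, the workflow above can be sketched with the Datumaro Python API. The paths, the ``coco`` format, and the split ratios below are placeholder assumptions; the calls (``Dataset.import_from``, ``transform``, ``export``) are a sketch to verify against the Datumaro documentation.
.. code-block:: python

   import datumaro as dm

   # Steps 1-2: import an annotated dataset (path and format are placeholders).
   dataset = dm.Dataset.import_from("path/to/coco_dataset", "coco")

   # Step 5: split into train/val/test subsets (example ratios).
   dataset = dataset.transform(
       "random_split", splits=[("train", 0.7), ("val", 0.2), ("test", 0.1)]
   )

   # Step 6: export the cleaned dataset for follow-up training workflows.
   dataset.export("path/to/output", "datumaro", save_media=True)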
Datumaro Components
###################
* `Datumaro CLIs <https://openvinotoolkit.github.io/datumaro/stable/docs/command-reference/overview.html>`__
* `Datumaro APIs <https://openvinotoolkit.github.io/datumaro/stable/docs/reference/datumaro_module.html>`__
* `Datumaro data format <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/datumaro_format.html>`__
* `Supported data formats <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/formats/index.html>`__
Tutorials
#########
* `Basic skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/basic_skills/index.html>`__
* `Intermediate skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/intermediate_skills/index.html>`__
* `Advanced skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/advanced_skills/index.html>`__
Python Hands-on Examples
########################
* `Data IE <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/dataset_IO.html>`__
* `Data manipulation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/manipulate.html>`__
* `Data exploration <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/explore.html>`__
* `Data refinement <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/refine.html>`__
* `Data transformation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/transform.html>`__
* `Deep learning end-to-end use-cases <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/e2e_example.html>`__
@endsphinxdirective


@@ -1,33 +0,0 @@
# Running and Deploying Inference {#openvino_docs_deployment_guide_introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
Run and Deploy Locally <openvino_deployment_guide>
Deploy via Model Serving <ovms_what_is_openvino_model_server>
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
.. panels::
:doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
^^^^^^^^^^^^^^
Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
It utilizes resources available to the system and provides the quickest way of launching inference.
---
:doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
^^^^^^^^^^^^^^
Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
This way inference can use external resources instead of those available to the application itself.
Apart from the default deployment options, you may also :doc:`deploy your application for the TensorFlow framework with OpenVINO Integration <ovtf_integration>`
@endsphinxdirective


@@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utili
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
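As a rough illustration, the automated modes listed above are selected with device strings when compiling a model; a minimal sketch, assuming a placeholder IR file and that the listed devices are present on the system:
.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")  # placeholder model path

   compiled_auto = core.compile_model(model, "AUTO")              # automatic device selection
   compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")    # several devices in parallel
   compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # split one graph across devices
   compiled_batch = core.compile_model(model, "BATCH:GPU")        # automatic batching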


@@ -10,22 +10,48 @@
omz_tools_downloader
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's :doc:`Open Model Zoo <model_zoo>`.
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows to convert them to it's own, OpenVINO IR, providing a tool dedicated to this task.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows converting them to its own `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ (`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__), providing a tool dedicated to this task.
:doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by :doc:`alternating input shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`embedding preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` and :doc:`cutting training parts off <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
There are several options to convert a model from the original framework to the OpenVINO model format (``ov.Model``).
The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
The ``read_model()`` method reads a model from a file and produces ``ov.Model``. If the file is in one of the supported original framework file formats, it is converted automatically to OpenVINO Intermediate Representation. If the file is already in the OpenVINO IR format, it is read "as-is", without any conversion involved. ``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` that applies post-training quantization methods.
Conversion is not required for ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
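A minimal sketch of this direct import, assuming an ONNX file named ``model.onnx`` is available:
.. code-block:: python

   from openvino.runtime import Core, serialize

   core = Core()
   # The ONNX file is converted to ov.Model on the fly;
   # an IR (.xml/.bin) file would be read "as-is".
   model = core.read_model("model.onnx")

   # Optionally serialize the in-memory model to OpenVINO IR for later reuse.
   serialize(model, "model.xml")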
Convert a model in Python
######################################
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application. In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
.. image:: _static/images/model_conversion_diagram.svg
:alt: model conversion diagram
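For reference, a minimal use of ``mo.convert_model()`` might look as follows; the input file name and the ``input_shape`` override are illustrative assumptions:
.. code-block:: python

   from openvino.runtime import Core
   from openvino.tools.mo import convert_model

   # Convert directly from a framework file; input_shape is optional and
   # shown only as an example of overriding the original input shape.
   ov_model = convert_model("model.onnx", input_shape=[1, 3, 224, 224])

   # The resulting ov.Model can be compiled and inferred right away.
   compiled = Core().compile_model(ov_model, "CPU")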
Convert a model with ``mo`` command-line tool
#############################################
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices, in the same measure as the ``mo.convert_model`` method.
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
The figure below illustrates the typical workflow for deploying a trained deep learning model:
.. image:: _static/images/BASIC_FLOW_MO_simplified.svg
where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
Model files (not Python objects) from ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate model conversion step, that is, ``mo.convert_model``. OpenVINO provides C++ and Python APIs for importing such models to OpenVINO Runtime directly, by just calling the ``read_model`` method.
The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the OpenVINO IR format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
* `Automate model-related tasks with Model Downloader and additional OMZ Tools <https://docs.openvino.ai/latest/omz_tools_downloader.html>`__.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
To begin with, you may want to :doc:`browse a database of models for use in your projects <model_zoo>`.
@endsphinxdirective


@@ -7,16 +7,16 @@
:hidden:
ote_documentation
ovtf_integration
datumaro_documentation
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
Neural Network Compression Framework (NNCF)
###########################################
**Neural Network Compression Framework (NNCF)**
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
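As a sketch of a typical post-training use, assuming NNCF's ``nncf.quantize`` API and synthetic calibration data in place of a real dataset:
.. code-block:: python

   import numpy as np
   import nncf
   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")  # placeholder model path

   # Synthetic calibration samples standing in for real data.
   calibration_data = [
       np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)
   ]
   calibration_dataset = nncf.Dataset(calibration_data)

   quantized_model = nncf.quantize(model, calibration_dataset)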
@@ -27,8 +27,7 @@ More resources:
* `PyPI <https://pypi.org/project/nncf/>`__
OpenVINO™ Training Extensions
#############################
**OpenVINO™ Training Extensions**
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
@@ -38,71 +37,60 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__
OpenVINO™ Security Add-on
#########################
**OpenVINO™ Security Add-on**
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
More resources:
* `Documentation <https://docs.openvino.ai/latest/ovsa_get_started.html>`__
* :doc:`Documentation <ovsa_get_started>`
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__
OpenVINO™ integration with TensorFlow (OVTF)
############################################
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
More resources:
* `Documentation <https://github.com/openvinotoolkit/openvino_tensorflow>`__
* `PyPI <https://pypi.org/project/openvino-tensorflow/>`__
* `GitHub <https://github.com/openvinotoolkit/openvino_tensorflow>`__
DL Streamer
###########
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* `Documentation on GitHub <https://dlstreamer.github.io/index.html>`__
* `Installation Guide on GitHub <https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide>`__
DL Workbench
############
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphics user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
More resources:
* `Documentation <https://docs.openvino.ai/2022.3/workbench_docs_Workbench_DG_Introduction.html>`__
* `Docker Hub <https://hub.docker.com/r/openvino/workbench>`__
* `PyPI <https://pypi.org/project/openvino-workbench/>`__
Computer Vision Annotation Tool (CVAT)
######################################
An online, interactive video and image annotation tool for computer vision purposes.
More resources:
* `Documentation on GitHub <https://opencv.github.io/cvat/docs/>`__
* `Web application <https://www.cvat.ai/>`__
* `Docker Hub <https://hub.docker.com/r/openvino/cvat_server>`__
* `GitHub <https://github.com/openvinotoolkit/cvat>`__
Dataset Management Framework (Datumaro)
#######################################
**Dataset Management Framework (Datumaro)**
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* `Documentation on GitHub <https://openvinotoolkit.github.io/datumaro/docs/>`__
* :doc:`Overview <datumaro_documentation>`
* `PyPI <https://pypi.org/project/datumaro/>`__
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__
**Compile Tool**
Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
To learn which devices support the import/export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.
For more details on preprocessing steps, refer to the :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to the :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
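In Python, the export/import round trip referenced by these snippets looks roughly like this; the device and file names are assumptions:
.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")  # placeholder model path
   compiled = core.compile_model(model, "CPU")

   # Export the compiled blob so later runs can skip compilation.
   with open("model.blob", "wb") as f:
       f.write(compiled.export_model())

   # Import it back on a device that supports import/export.
   with open("model.blob", "rb") as f:
       imported = core.import_model(f.read(), "CPU")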
**DL Workbench**
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, and to import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
**OpenVINO™ integration with TensorFlow (OVTF)**
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.
@endsphinxdirective


@@ -1,55 +0,0 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
@sphinxdirective
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
This is all you need:
.. code-block:: bash
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
* Intel® CPUs
* Intel® integrated GPUs
.. note::
For maximum performance, efficiency, tooling customization, and hardware control, we recommend developers to adopt native OpenVINO™ solutions.
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated `GitHub repository <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs>`__.
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the `examples folder <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples>`__ in our GitHub repository.
Sample tutorials are also hosted on `Intel® DevCloud <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html>`__. The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes, compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
License
#######
**OpenVINO™ integration with TensorFlow** is licensed under `Apache License Version 2.0 <https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE>`__.
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
Support
#######
Submit your questions, feature requests and bug reports via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
How to Contribute
#################
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
* Share your proposal via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
* Submit a `pull request <https://github.com/openvinotoolkit/openvino_tensorflow/pulls>`__.
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it to the repository provided that the pull request has met the above mentioned requirements and proved acceptable.
\* Other names and brands may be claimed as the property of others.
@endsphinxdirective


@@ -19,21 +19,22 @@ Detailed Workflow
.. note::
Prepare a separate dataset or split the dataset you have for more accurate quality evaluation.
3. Having successful evaluation results received, you have an opportunity to deploy your model or continue optimizing it, using NNCF and POT. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
3. Once you have received successful evaluation results, you can deploy your model or continue optimizing it using NNCF. For more information about this framework, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
OpenVINO Training Extensions Components
#######################################
- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
* `OpenVINO Training Extensions API <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/api>`__
* `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/cli>`__
* `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/algorithms>`__
Tutorials
#########
`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
* `Base tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/base/index.html>`__
* `Advanced tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/advanced/index.html>`__
@endsphinxdirective


@@ -9,16 +9,34 @@
Model Preparation <openvino_docs_model_processing_introduction>
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Running and Deploying Inference <openvino_docs_deployment_guide_introduction>
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
| With Model Downloader and Model Optimizer guides, you will learn to download pre-trained models and convert them for use with OpenVINO™. You can use your own models or choose some from a broad selection provided in the Open Model Zoo.
| With the model conversion API guide, you will learn to convert pre-trained models for use with OpenVINO™. You can use your own models or choose some from a broad selection in online databases, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, and `Torchvision models <https://pytorch.org/hub/>`__.
| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.
| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server. It describes how to run inference which is the most basic form of deployment and the quickest way of launching inference.
| :doc:`Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
| This section describes how to run inference, which is the most basic form of deployment, and the quickest way of launching inference.
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
| :doc:`Option 1. Deployment via OpenVINO Runtime <openvino_deployment_guide>`
| Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
| It utilizes resources available to the system and provides the quickest way of launching inference.
| Deployment on a local system requires performing the steps from the running inference section.
| :doc:`Option 2. Deployment via Model Server <ovms_what_is_openvino_model_server>`
| Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
| This way inference can use external resources instead of those available to the application itself.
| Deployment on a model server can be done quickly and without performing any additional steps described in the running inference section.
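For Option 1, the local path boils down to a few OpenVINO Runtime calls; a minimal sketch, assuming a placeholder IR file and an example input shape:
.. code-block:: python

   import numpy as np
   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")         # placeholder IR path
   compiled = core.compile_model(model, "CPU")

   input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # example input
   results = compiled([input_data])             # run a single inference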
@endsphinxdirective


@@ -9,7 +9,6 @@
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
.. toctree::
:maxdepth: 1
@@ -18,14 +17,20 @@
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>`.
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
being deprecated and will be removed entirely in the future). The list of supported operations is different for each of the supported frameworks.
To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_resources_supported_operations_frontend>`.
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for custom operations may arise in two cases:
1. A new or rarely used regular framework operation is not supported in OpenVINO yet.
2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. The OpenVINO Extensibility API enables adding support for those custom operations, with one implementation used by both Model Optimizer and OpenVINO Runtime.
@@ -54,9 +59,9 @@ Mapping of custom operation is implemented differently, depending on model forma
1. If a model is represented in the ONNX (including models exported from PyTorch to ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats, then :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with ``read_model`` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
The coexistence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite, and TensorFlow) and legacy frontends (Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
@@ -121,6 +126,8 @@ The ``Identity`` is a custom operation class defined in :doc:`Custom Operation G
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
.. _create_a_library_with_extensions:
Create a Library with Extensions
++++++++++++++++++++++++++++++++
@@ -187,4 +194,4 @@ See Also
* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective


@@ -19,6 +19,11 @@ guide.
operation that is a placeholder for your real custom operation. You can review the complete code,
which is fully compilable, to see how it works.
.. note::
You can find more examples of extensions in `openvino_contrib repository <https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/custom_operations>`_.
Single Operation Mapping with OpExtension
#########################################
@@ -91,7 +96,7 @@ In this case, you can directly say that 'MyRelu' -> ``Relu`` mapping should be u
:fragment: [frontend_extension_MyRelu]
.. tab-item:: Python
:sync: python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
@@ -108,10 +113,18 @@ as it was demonstrated with ``TemplateExtension::Identity``.
Attribute Mapping
++++++++++++++++++
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant.
Attributes in OpenVINO operators are identified by their names, so for frameworks that also have named attributes (like TensorFlow, PaddlePaddle, and ONNX), you can specify a name-to-name mapping. For frameworks where an OpenVINO operator's attributes can be mapped to one of the framework operator's inputs (like PyTorch), there is a name-to-input-index mapping.
Named attributes mapping
^^^^^^^^^^^^^^^^^^^^^^^^
If the set of attributes in framework representation and OpenVINO representation completely match by their names and types,
nothing should be specified in OpExtension constructor parameters. The attributes are discovered and mapped
automatically based on ``visit_attributes`` method that should be defined for any OpenVINO operation.
no attribute mapping has to be specified in the ``OpExtension`` constructor parameters. The attributes are discovered and mapped automatically, based on the ``visit_attributes`` method that should be defined for any OpenVINO operation.
Imagine you have a ``CustomOperation`` class implementation with two attributes named ``attr1`` and ``attr2``.
@@ -119,14 +132,15 @@ Imagine you have CustomOperation class implementation that has two attributes wi
:language: cpp
:fragment: [frontend_extension_CustomOperation]
And the original model in the framework representation also has operation named “CustomOperation with the same
And the original model in the framework representation also has an operation named ``CustomOperation`` with the same
``attr1`` and ``attr2`` attributes. Then with the following code:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is]
Both ``attr1`` and ``attr2`` are copied from framework representation to OpenVINO representation automatically.
If for some reason the names of the attributes differ but their values can still be copied “as-is”, you can pass an attribute-name mapping in the ``OpExtension`` constructor:
@@ -144,48 +158,123 @@ to achieve that do the following:
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set]
So each attribute of the target OpenVINO operation should be initialized in one of three ways:
1. Set automatically thanks to name matching
2. Mapped by attribute name
3. Set to a constant value
This is achieved by specifying maps as arguments for `OpExtension` constructor.
This is achieved by specifying maps as arguments for ``OpExtension`` constructor.
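In Python, a hedged equivalent could look like the sketch below. The keyword argument names for the attribute maps are an assumption mirroring the C++ constructor; check the Python API reference for the exact signature.
.. code-block:: python

   from openvino.frontend import OpExtension
   from openvino.runtime import Core

   core = Core()

   # Names and attributes match: attributes are mapped automatically.
   core.add_extension(OpExtension("CustomOperation"))

   # Hypothetical rename case: framework attribute "fw_attr1" feeds "attr1",
   # and "attr2" is fixed to a constant (keyword names are assumptions).
   core.add_extension(OpExtension(
       "CustomOperation",
       attr_names_map={"attr1": "fw_attr1"},
       attr_values_map={"attr2": 4},
   ))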
Attribute mapping with named inputs and outputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mappings in previous examples assume that inputs and outputs of an operator in framework model representation come
with a particular order so you can directly map framework operation input ``0`` to OpenVINO operation input ``0`` and so on.
That's not always the case: for frameworks like PaddlePaddle, operation inputs and outputs are identified by their names
and may be defined in any order. So to map it to OpenVINO operation inputs and outputs, you have to specify that order yourself.
This can be done by creating two vectors of strings, one for inputs and one for outputs, where the framework operation input name at position ``i`` maps to the OpenVINO operation input at position ``i`` (and similarly for outputs).
Let's look at the following example. As previously, we'd like to map ``CustomOperation`` in the original model to OpenVINO ``CustomOperation`` as-is (so their names and attribute names match). This time, the framework operation's inputs and outputs are not strictly ordered and are identified by their names ``A``, ``B``, ``C`` for inputs and ``X``, ``Y`` for outputs. Those inputs and outputs can be mapped to the OpenVINO operation such that inputs ``A``, ``B``, ``C`` map to the OpenVINO ``CustomOperation``'s first, second, and third inputs, and the ``X`` and ``Y`` outputs map to its first and second outputs, respectively.
Given that, such a custom operation can be registered as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is_paddle]
The second example shows how to map the operation with named inputs and outputs, but when the names of the attributes are different:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_paddle]
The last one shows how to map the operation with named inputs and outputs when (in order to correctly map the framework operation to the OpenVINO operation) one of the attributes has to be set to a predefined value:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set_paddle]
Mapping attributes from operation inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For models (like PyTorch models) where operations have attributes on the input list, you can specify a name-to-input-index mapping.
For example, imagine you have created a custom OpenVINO operation that implements a variant of ELU activation function
with two attributes ``alpha`` and ``beta``:
.. math::
CustomElu=\left\lbrace
\begin{array}{ll}
beta * x & \textrm{if x > 0} \newline
alpha * (exp(x) - 1) & \textrm{otherwise}
\end{array}
\right.
Below is a snippet of ``CustomElu`` class showing how to define its attributes:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu]
Let's see an example of how you can map ``CustomElu`` to PyTorch `aten::elu <https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html>`_
(note that if ``beta`` is equal to ``1``, ``CustomElu`` works the same as ``aten::elu``).
``aten::elu`` has the ``alpha`` attribute second on its input list, but it doesn't have ``beta``. So, in order to map it to ``CustomElu``, you can use the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_CustomElu_mapping]
This will map ``alpha`` to the second input and map ``beta`` attribute to constant value ``1.0f``.
An extension created this way can be used, for example, in a dynamic library; refer to :ref:`Create a library with extensions <create_a_library_with_extensions>`.
Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
---------------------------------------------------------------------------
########################################################################
.. note::
Below solution works only for ONNX and Tensorflow frontends.
``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside OpenVINO operation's class definition and that lets you specify
the mapping between this operation to a frontend operation.
``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition and that lets you specify the mapping between this operation and a frontend operation.
Let's consider the following example. Imagine you have an ONNX model with ``CustomOp`` operation (and this operation has ``mode`` attribute),
a TensorFlow model with ``CustomOpV3`` operation (this operation has ``axis`` attribute) and a PaddlePaddle model with ``CustomOp`` (with ``mode`` attribute)
that has input named "X" and output named "Out" and all of them can be implemented with a single OpenVINO operation ``CustomOp`` like follows:
Let's consider the following example. Imagine you have an ONNX model with ``CustomOp`` operation
(and this operation has ``mode`` attribute) and a Tensorflow model with ``CustomOpV3`` operation
(this operation has ``axis`` attribute) and both of them can be implemented with a single OpenVINO
operation ``CustomOp`` like follows:
.. doxygensnippet:: ov_extensions.cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_headers]
.. doxygensnippet:: ov_extensions.cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_CustomOp]
Let's take a closer look at the parameters this macro takes (note that there are two flavors - the second one is to map
for PaddlePaddle operations where input and output names have to be specified).
Let's take a closer look at the parameters this macro takes:
.. code-block::cpp
.. code-block:: cpp
OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
OPENVINO_FRAMEWORK_MAP(framework, input_names, output_names, name, attributes_map, attributes_values)
- ``framework`` - framework name.
- ``name`` - the framework operation name. It's optional if the OpenVINO custom operation name
(that is the name that is passed as the first parameter to `OPENVINO_OP` macro) is the
same as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
(that is the name that is passed as the first parameter to ``OPENVINO_OP`` macro) is the same
as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
- ``input_names`` - vector of strings that specify the names of inputs (needed to map PaddlePaddle to OpenVINO operations),
- ``output_names`` - vector of strings that specify the names of outputs (needed to map PaddlePaddle to OpenVINO operations),
- ``attributes_map`` - used to provide a mapping between OpenVINO operation attribute and
framework operation attribute. Contains key-value pairs, where key is an OpenVINO operation
attribute name and value is its corresponding framework operation attribute name.
@@ -196,14 +285,21 @@ Let's take a closer look at the parameters this macro takes:
operation attribute name and the value is this attribute value. This parameter cannot be provided
if ``attributes_map`` contains all of OpenVINO operation attributes or if ``attributes_map`` is not provided.
In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used twice.
In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used three times.
First, OpenVINO ``CustomOp`` is mapped to ONNX ``CustomOp`` operation, ``m_mode`` attribute is mapped to ``mode``
attribute, while ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO `CustomOp` is mapped
to Tensorflow ``CustomOpV3`` operation, ``m_axis`` attribute is mapped to ``axis`` attribute, while ``m_mode``
attribute gets the default value ``linear``.
attribute, while ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO ``CustomOp`` is mapped
to TensorFlow ``CustomOpV3`` operation, ``m_axis`` attribute is mapped to ``axis`` attribute, while ``m_mode``
attribute gets the default value ``"linear"``. Thirdly, OpenVINO ``CustomOp`` is mapped to PaddlePaddle ``CustomOp`` operation,
``m_mode`` attribute is mapped to ``mode`` attribute, while ``m_axis`` attribute gets the default value ``-1``.
This mapping also specifies the input name "X" and output name "Out".
The last step is to register this custom operation as follows:
@snippet ov_extensions.cpp frontend_extension_framework_map_macro_add_extension
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
Mapping to Multiple Operations with ConversionExtension
#######################################################
@@ -222,7 +318,7 @@ operations constructing dependency graph of any complexity.
operation classes. Follow chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to
learn how to use OpenVINO operation classes to build a fragment of model for replacement.
The next example illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu”
The example below illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu”
from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
.. note::
@@ -241,7 +337,7 @@ from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, C
:fragment: [frontend_extension_ThresholdedReLU_header]
.. tab-item:: Python
:sync: python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
@@ -257,23 +353,53 @@ from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, C
:fragment: [frontend_extension_ThresholdedReLU]
.. tab-item:: Python
:sync: python
:sync: py
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU]
To access original framework operation attribute value and connect to inputs,
``node`` object of type ``NodeContext`` is used. It has two main methods:
The next example shows how to use ``ConversionExtension`` to convert the PyTorch
`aten::hardtanh <https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html>`_
operation, demonstrating how the ``get_values_from_const_input`` function fetches an attribute value from an input:
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_aten_hardtanh]
To access the original framework operation's attribute values and connect to its inputs, a ``node`` object of type ``NodeContext`` is used. It has three main methods:
* ``NodeContext::get_input`` to get input with a given index,
* ``NodeContext::get_attribute`` to get attribute value with a given name.
* ``NodeContext::get_attribute`` to get attribute value with a given name,
* ``NodeContext::get_values_from_const_input`` to get an attribute value from a constant input with a given index.
The conversion function should return a vector of node outputs that are mapped to
corresponding outputs of the original framework operation in the same order.
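Putting these pieces together, a Python conversion for the earlier “ThresholdedRelu” formula could be sketched as follows; the opset module and the two-argument ``ConversionExtension`` registration are assumptions based on the snippets referenced above:
.. code-block:: python

   import openvino.runtime.opset8 as ops
   from openvino.frontend import ConversionExtension
   from openvino.runtime import Core

   def convert_thresholded_relu(node):
       x = node.get_input(0)
       alpha = node.get_attribute("alpha")
       # ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), float))
       greater = ops.greater(x, ops.constant([alpha], x.get_element_type()))
       casted = ops.convert(greater, x.get_element_type().get_type_name())
       return ops.multiply(x, casted).outputs()

   core = Core()
   core.add_extension(ConversionExtension("ThresholdedRelu", convert_thresholded_relu))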
Some frameworks require output names of the operation to be provided during conversion.
For PaddlePaddle operations, it is generally necessary to provide names for all outputs, using the ``NamedOutputs`` container.
Usually, those names can be found in the source code of the individual operation in the PaddlePaddle codebase.
The next example shows such conversion for the ``top_k_v2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_paddle_TopK]
For the TensorFlow framework, if an operation has more than one output, it is recommended to assign names to
those outputs using the ``NamedOutputVector`` structure, which allows both indexed and named output access.
For a description of TensorFlow operations, including the names of their outputs, refer to the
`tf.raw_ops <https://www.tensorflow.org/api_docs/python/tf/raw_ops/>`__ documentation page.
The next example shows such conversion for the ``TopKV2`` operation.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_tf_TopK]
@endsphinxdirective


@@ -87,7 +87,7 @@ Detailed Guides
API References
##############
* `OpenVINO Plugin API <https://docs.openvino.ai/nightly/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2022.3/groupie_transformation_api.html>`__
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -53,8 +53,8 @@ Thus we can define:
Quantization specifics and restrictions
#######################################
In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
is considered the default way to get optimized models. Since the POT supports HW-aware quantization it means that specific rules can be implemented in it for
In general, OpenVINO can represent and execute quantized models from different sources. However, the Neural Network Compression Framework (NNCF)
is considered the default way to get optimized models. Since NNCF supports HW-aware quantization, specific rules can be implemented in it for
the particular HW. However, it is reasonable to have compatibility with general-purpose HW such as CPU and GPU and support their quantization schemes.
Below we define these rules as follows:


@@ -9,9 +9,9 @@
../groupov_dev_api
../groupie_transformation_api
@endsphinxdirective
The guides below provide extra API references needed for OpenVINO plugin development:
* [OpenVINO Plugin API](@ref ov_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -308,13 +308,13 @@ This step is optional. It modifies the nGraph function to a device-specific oper
Result model overview
#####################
Let's explore quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model. Use `Model Downloader <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool to download the ``fp16`` model from `OpenVINO™ Toolkit - Open Model Zoo repository <https://github.com/openvinotoolkit/open_model_zoo>`__:
Let's explore quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model. Use :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``fp16`` model from `OpenVINO™ Toolkit - Open Model Zoo repository <https://github.com/openvinotoolkit/open_model_zoo>`__:
.. code-block:: sh
omz_downloader --name resnet-50-tf --precisions FP16-INT8
After that you should quantize model by the `Model Quantizer <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool.
After that, quantize the model with the :doc:`Model Quantizer <omz_tools_downloader>` tool.
.. code-block:: sh
@@ -337,7 +337,7 @@ Results analysis
The resulting model depends on different factors:
* The original model quantization possibility and quantization quality. For some models, some operations are not possible to be quantized by POT and NNCF tools. In this case ``FakeQuantize`` operations are absent before these operations and they will be inferred in original precision.
* The original model quantization possibility and quantization quality. For some models, certain operations cannot be quantized by the NNCF tool. In this case, ``FakeQuantize`` operations are absent before these operations and they will be inferred in the original precision.
* LPT customization and plugin-supported operations. If a plugin doesn't support INT8 inference for some operation, the corresponding LPT transformation should be disabled and the operation will be inferred in the original precision.


@@ -3,7 +3,7 @@
@sphinxdirective
Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.


@@ -1,4 +1,4 @@
# Model Optimizer Usage {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
# Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -8,90 +8,134 @@
:maxdepth: 1
:hidden:
openvino_docs_model_inputs_outputs
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, TensorFlow Lite, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Note that Model Optimizer does not infer models.
The figure below illustrates the typical workflow for deploying a trained deep learning model:
.. image:: _static/images/BASIC_FLOW_MO_simplified.svg
where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
The OpenVINO IR can be additionally optimized for inference by :doc:`Post-training optimization <pot_introduction>` that applies post-training quantization methods.
How to Run Model Optimizer
##########################
To convert a model to IR, you can run Model Optimizer by using the following command:
.. code-block:: sh

   mo --input_model INPUT_MODEL
To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:
.. tab-set::

   .. tab-item:: Python
      :sync: mo-python-api

      .. code-block:: python

         from openvino.tools.mo import convert_model
         ov_model = convert_model(INPUT_MODEL)

   .. tab-item:: CLI
      :sync: cli-tool

      .. code-block:: sh

         mo --input_model INPUT_MODEL
If the out-of-the-box conversion (only the ``--input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:
If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:
- Model Optimizer provides two parameters to override original input shapes for model conversion: ``--input`` and ``--input_shape``.
- Model conversion API provides two parameters to override the original input shapes for model conversion: ``input`` and ``input_shape``.
For more information about these parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
- To cut off unwanted parts of a model (such as unsupported operations and training sub-graphs),
use the ``--input`` and ``--output`` parameters to define new inputs and outputs of the converted model.
use the ``input`` and ``output`` parameters to define new inputs and outputs of the converted model.
For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` guide.
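As an illustrative sketch combining these parameters (the tensor names ``image`` and ``prob`` are hypothetical), the conversion might look as follows:
.. code-block:: python
from openvino.tools.mo import convert_model
# "image" and "prob" are assumed tensor names, used only for illustration
ov_model = convert_model("model.onnx", input="image", input_shape=[1,3,224,224], output="prob")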
You can also insert additional input pre-processing sub-graphs into the converted model by using
the ``--mean_values``, ``scales_values``, ``--layout``, and other parameters described
the ``mean_values``, ``scale_values``, ``layout``, and other parameters described
in the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
The ``--compress_to_fp16`` compression parameter in Model Optimizer allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to ``FP16`` data type. For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
The ``compress_to_fp16`` compression parameter in the ``mo`` command-line tool allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to the ``FP16`` data type. For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
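For instance, a minimal sketch of FP16 compression with the Python API (the model path is assumed for illustration):
.. code-block:: python
from openvino.tools.mo import convert_model
# the model path is assumed for illustration
ov_model = convert_model("model.onnx", compress_to_fp16=True)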
To get the full list of conversion parameters available in Model Optimizer, run the following command:
To get the full list of conversion parameters, run the following command:
.. code-block:: sh
.. tab-set::
mo --help
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model(help=True)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --help
Examples of CLI Commands
########################
Examples of model conversion parameters
#######################################
Below is a list of separate examples for different frameworks and Model Optimizer parameters:
Below is a list of separate examples for different frameworks and model conversion parameters:
1. Launch Model Optimizer for a TensorFlow MobileNet model in the binary protobuf format:
1. Launch model conversion for a TensorFlow MobileNet model in the binary protobuf format:
.. code-block:: sh
.. tab-set::
mo --input_model MobileNet.pb
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("MobileNet.pb")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model MobileNet.pb
Launch Model Optimizer for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly where the batch size and the sequence length equal 2 and 30 respectively:
Launch model conversion for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly where the batch size and the sequence length equal 2 and 30 respectively:
.. code-block:: sh
.. tab-set::
mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
.. tab-item:: Python
:sync: mo-python-api
For more information, refer to the :doc:`Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.
.. code-block:: python
2. Launch Model Optimizer for an ONNX OCR model and specify new output explicitly:
from openvino.tools.mo import convert_model
ov_model = convert_model("BERT", input_shape=[[2,30],[2,30],[2,30]])
.. code-block:: sh
.. tab-item:: CLI
:sync: cli-tool
mo --input_model ocr.onnx --output probabilities
.. code-block:: sh
mo --saved_model_dir BERT --input_shape [2,30],[2,30],[2,30]
For more information, refer to the :doc:`Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.
2. Launch model conversion for an ONNX OCR model and specify new output explicitly:
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("ocr.onnx", output="probabilities")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model ocr.onnx --output probabilities
For more information, refer to the :doc:`Converting an ONNX Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` guide.
@@ -100,43 +144,30 @@ Below is a list of separate examples for different frameworks and Model Optimize
PyTorch models must be exported to the ONNX format before conversion into IR. More information can be found in :doc:`Converting a PyTorch Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`.
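A minimal sketch of this two-step flow (the model and input shape are chosen arbitrarily for illustration):
.. code-block:: python
import torch
import torchvision
from openvino.tools.mo import convert_model
# export an arbitrary PyTorch model to ONNX first, then convert the ONNX file
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
ov_model = convert_model("resnet18.onnx")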
3. Launch Model Optimizer for a PaddlePaddle UNet model and apply mean-scale normalization to the input:
3. Launch model conversion for a PaddlePaddle UNet model and apply mean-scale normalization to the input:
.. code-block:: sh
.. tab-set::
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("unet.pdmodel", mean_values=[123,117,104], scale=255)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
For more information, refer to the :doc:`Converting a PaddlePaddle Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>` guide.
4. Launch Model Optimizer for an Apache MXNet SSD Inception V3 model and specify first-channel layout for the input:
.. code-block:: sh
mo --input_model ssd_inception_v3-0000.params --layout NCHW
For more information, refer to the :doc:`Converting an Apache MXNet Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>` guide.
5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format which needs to be reversed:
.. code-block:: sh
mo --input_model alexnet.caffemodel --reverse_input_channels
For more information, refer to the :doc:`Converting a Caffe Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>` guide.
6. Launch Model Optimizer for a Kaldi LibriSpeech nnet2 model:
.. code-block:: sh
mo --input_model librispeech_nnet2.mdl --input_shape [1,140]
For more information, refer to the :doc:`Converting a Kaldi Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>` guide.
- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, Apache MXNet, and Kaldi models, refer to the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`.
- To get conversion recipes for specific TensorFlow, ONNX, and PyTorch models, refer to the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`.
- For more information about IR, see :doc:`Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ <openvino_docs_MO_DG_IR_and_opsets>`.
- For more information about support of neural network models trained with various frameworks, see :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>`.
@endsphinxdirective

View File

@@ -2,149 +2,237 @@
@sphinxdirective
Input data for inference can be different from the training dataset and requires
additional preprocessing before inference. To accelerate the whole pipeline including
preprocessing and inference, Model Optimizer provides special parameters such as ``--mean_values``,
Input data for inference can be different from the training dataset and requires
additional preprocessing before inference. To accelerate the whole pipeline including
preprocessing and inference, model conversion API provides special parameters such as ``mean_values``,
``--scale_values``, ``--reverse_input_channels``, and ``--layout``. Based on these
parameters, Model Optimizer generates OpenVINO IR with additionally inserted sub-graphs
to perform the defined preprocessing. This preprocessing block can perform mean-scale
normalization of input data, reverting data along channel dimension, and changing
the data layout. See the following sections for details on the parameters, or the
:doc:`Overview of Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`
``scale_values``, ``reverse_input_channels``, and ``layout``. Based on these
parameters, model conversion API generates OpenVINO IR with additionally inserted sub-graphs
to perform the defined preprocessing. This preprocessing block can perform mean-scale
normalization of input data, reverting data along channel dimension, and changing
the data layout. See the following sections for details on the parameters, or the
:doc:`Overview of Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`
for the same functionality in OpenVINO Runtime.
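As a combined sketch of the parameters described below (the model file and normalization values are assumed for illustration):
.. code-block:: python
from openvino.tools.mo import convert_model
# the model file and all values are assumed for illustration
ov_model = convert_model("model.onnx", layout="nhwc", mean_values=[127.5,127.5,127.5], scale_values=[127.5,127.5,127.5], reverse_input_channels=True)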
Specifying Layout
#################
You may need to set input layouts, as it is required by some preprocessing, for
You may need to set input layouts, as it is required by some preprocessing, for
example, setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB).
Layout defines the meaning of dimensions in shape and can be specified for both
inputs and outputs. Some preprocessing requires to set input layouts, for example,
Layout defines the meaning of dimensions in shape and can be specified for both
inputs and outputs. Some preprocessing requires setting input layouts, for example,
setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB).
For the layout syntax, check the :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`.
To specify the layout, you can use the ``--layout`` option followed by the layout value.
To specify the layout, you can use the ``layout`` option followed by the layout value.
For example, the following command specifies the ``NHWC`` layout for a Tensorflow
For example, the following command specifies the ``NHWC`` layout for a TensorFlow
``nasnet_large`` model that was exported to the ONNX format:
.. code-block:: sh
mo --input_model tf_nasnet_large.onnx --layout nhwc
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("tf_nasnet_large.onnx", layout="nhwc")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model tf_nasnet_large.onnx --layout nhwc
Additionally, if a model has more than one input or needs both input and output
Additionally, if a model has more than one input or needs both input and output
layouts specified, you need to provide the name of each input or output to apply the layout.
For example, the following command specifies the layout for an ONNX ``Yolo v3 Tiny``
model with its first input ``input_1`` in ``NCHW`` layout and second input ``image_shape``
For example, the following command specifies the layout for an ONNX ``Yolo v3 Tiny``
model with its first input ``input_1`` in ``NCHW`` layout and second input ``image_shape``
having two dimensions: batch and size of the image expressed as the ``N?`` layout:
.. code-block:: sh
.. tab-set::
mo --input_model yolov3-tiny.onnx --layout input_1(nchw),image_shape(n?)
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("yolov3-tiny.onnx", layout={"input_1": "nchw", "image_shape": "n?"})
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model yolov3-tiny.onnx --layout input_1(nchw),image_shape(n?)
Changing Model Layout
#####################
Changing the model layout may be necessary if it differs from the one presented by input data.
Use either ``--layout`` or ``--source_layout`` with ``--target_layout`` to change the layout.
Changing the model layout may be necessary if it differs from the one presented by input data.
Use either ``layout`` or ``source_layout`` with ``target_layout`` to change the layout.
For example, for the same ``nasnet_large`` model mentioned previously, you can use
For example, for the same ``nasnet_large`` model mentioned previously, you can use
the following commands to provide data in the ``NCHW`` layout:
.. code-block:: sh
mo --input_model tf_nasnet_large.onnx --source_layout nhwc --target_layout nchw
mo --input_model tf_nasnet_large.onnx --layout "nhwc->nchw"
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("tf_nasnet_large.onnx", source_layout="nhwc", target_layout="nchw")
ov_model = convert_model("tf_nasnet_large.onnx", layout="nhwc->nchw")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model tf_nasnet_large.onnx --source_layout nhwc --target_layout nchw
mo --input_model tf_nasnet_large.onnx --layout "nhwc->nchw"
Again, if a model has more than one input or needs both input and output layouts
Again, if a model has more than one input or needs both input and output layouts
specified, you need to provide the name of each input or output to apply the layout.
For example, to provide data in the ``NHWC`` layout for the `Yolo v3 Tiny` model
For example, to provide data in the ``NHWC`` layout for the `Yolo v3 Tiny` model
mentioned earlier, use the following commands:
.. code-block:: sh
.. tab-set::
mo --input_model yolov3-tiny.onnx --source_layout "input_1(nchw),image_shape(n?)" --target_layout "input_1(nhwc)"
mo --input_model yolov3-tiny.onnx --layout "input_1(nchw->nhwc),image_shape(n?)"
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("yolov3-tiny.onnx", source_layout={"input_1": "nchw", "image_shape": "n?"}, target_layout={"input_1": "nhwc"})
ov_model = convert_model("yolov3-tiny.onnx", layout={"input_1": "nchw->nhwc", "image_shape": "n?"})
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model yolov3-tiny.onnx --source_layout "input_1(nchw),image_shape(n?)" --target_layout "input_1(nhwc)"
mo --input_model yolov3-tiny.onnx --layout "input_1(nchw->nhwc),image_shape(n?)"
Specifying Mean and Scale Values
################################
Neural network models are usually trained with the normalized input data. This
means that the input data values are converted to be in a specific range, for example,
``[0, 1]`` or ``[-1, 1]``. Sometimes, the mean values (mean images) are subtracted
Neural network models are usually trained with the normalized input data. This
means that the input data values are converted to be in a specific range, for example,
``[0, 1]`` or ``[-1, 1]``. Sometimes, the mean values (mean images) are subtracted
from the input data values as part of the preprocessing.
There are two cases of how the input data preprocessing is implemented.
* The input preprocessing operations are a part of a model.
In this case, the application does not perform a separate preprocessing step:
everything is embedded into the model itself. Model Optimizer will generate the
OpenVINO IR format with required preprocessing operations, and no ``mean`` and
In this case, the application does not perform a separate preprocessing step:
everything is embedded into the model itself. ``convert_model()`` will generate the
``ov.Model`` with the required preprocessing operations, and no ``mean`` and
``scale`` parameters are required.
* The input preprocessing operations are not a part of a model and the preprocessing
* The input preprocessing operations are not a part of a model and the preprocessing
is performed within the application which feeds the model with input data.
In this case, information about mean/scale values should be provided to Model
Optimizer to embed it to the generated OpenVINO IR format.
In this case, information about mean/scale values should be provided to ``convert_model()``
to embed it into the generated ``ov.Model``.
Model Optimizer provides command-line parameters to specify the values: ``--mean_values``,
``--scale_values``, ``--scale``. Using these parameters, Model Optimizer embeds the
corresponding preprocessing block for mean-value normalization of the input data
and optimizes this block so that the preprocessing takes negligible time for inference.
Model conversion API, represented by ``convert_model()``, provides the parameters
to specify the values: ``mean_values``, ``scale_values``, and ``scale``. Using these parameters,
model conversion API embeds the corresponding preprocessing block for mean-value
normalization of the input data and optimizes this block so that the preprocessing
takes negligible time for inference.
For example, the following command runs Model Optimizer for the PaddlePaddle UNet
For example, the following command runs model conversion for the PaddlePaddle UNet
model and applies mean-scale normalization to the input data:
.. code-block:: sh
.. tab-set::
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("unet.pdmodel", mean_values=[123,117,104], scale=255)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
Reversing Input Channels
########################
Sometimes, input images for your application can be of the RGB (or BGR) format
and the model is trained on images of the BGR (or RGB) format, which is in the
opposite order of color channels. In this case, it is important to preprocess the
Sometimes, input images for your application can be of the RGB (or BGR) format
and the model is trained on images of the BGR (or RGB) format, which is in the
opposite order of color channels. In this case, it is important to preprocess the
input images by reverting the color channels before inference.
To embed this preprocessing step into OpenVINO IR, Model Optimizer provides the
``--reverse_input_channels`` command-line parameter to shuffle the color channels.
To embed this preprocessing step into ``ov.Model``, model conversion API provides the
``reverse_input_channels`` parameter to shuffle the color channels.
The ``--reverse_input_channels`` parameter can be used to preprocess the model
The ``reverse_input_channels`` parameter can be used to preprocess the model
input in the following cases:
* Only one dimension in the input shape has a size equal to ``3``.
* One dimension has an undefined size and is marked as ``C`` channel using ``layout`` parameters.
Using the ``--reverse_input_channels`` parameter, Model Optimizer embeds the corresponding
preprocessing block for reverting the input data along channel dimension and optimizes
Using the ``reverse_input_channels`` parameter, model conversion API embeds the corresponding
preprocessing block for reverting the input data along channel dimension and optimizes
this block so that the preprocessing takes only negligible time for inference.
For example, the following command launches Model Optimizer for the TensorFlow AlexNet
For example, the following command launches model conversion for the TensorFlow AlexNet
model and embeds the ``reverse_input_channels`` preprocessing block into the OpenVINO IR:
.. code-block:: sh
mo --input_model alexnet.pb --reverse_input_channels
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("alexnet.pb", reverse_input_channels=True)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model alexnet.pb --reverse_input_channels
.. note::
If both mean and scale values are specified, the mean is subtracted first and
then the scale is applied regardless of the order of options in the command-line.
Input values are *divided* by the scale value(s). If the ``--reverse_input_channels``
option is also used, ``reverse_input_channels`` will be applied first, then ``mean``
and after that ``scale``. The data flow in the model looks as follows:
If both mean and scale values are specified, the mean is subtracted first and
then the scale is applied, regardless of the order of options on the command line.
Input values are *divided* by the scale value(s). If the ``reverse_input_channels``
option is also used, ``reverse_input_channels`` will be applied first, then ``mean``
and after that ``scale``. The data flow in the model looks as follows:
``Parameter -> ReverseInputChannels -> Mean apply -> Scale apply -> the original body of the model``.
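To make the order concrete, here is a small numeric sketch of what the embedded preprocessing computes for a single pixel (all values are assumed):
.. code-block:: python
import numpy as np
bgr_pixel = np.array([10.0, 20.0, 30.0])  # input arrives in BGR order
rgb_pixel = bgr_pixel[::-1]               # reverse_input_channels: [30, 20, 10]
mean, scale = 127.0, 255.0
result = (rgb_pixel - mean) / scale       # mean is subtracted first, then scale divides
print(result)                             # approximately [-0.38, -0.42, -0.46]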
Additional Resources
@@ -153,4 +241,3 @@ Additional Resources
* :doc:`Overview of Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`
@endsphinxdirective

View File

@@ -2,16 +2,29 @@
@sphinxdirective
Model Optimizer can convert all floating-point weights to the ``FP16`` data type.
Optionally, all relevant floating-point weights can be compressed to the ``FP16`` data type during model conversion.
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a drop in accuracy,
but it is negligible for most models.
To compress the model, use the `--compress_to_fp16` or `--compress_to_fp16=True` option:
To compress the model, use the ``compress_to_fp16=True`` option:
.. code-block:: sh
.. tab-set::
mo --input_model INPUT_MODEL --compress_to_fp16
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model(INPUT_MODEL, compress_to_fp16=True)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model INPUT_MODEL --compress_to_fp16=True
For details on how plugins handle compressed ``FP16`` models, see
@@ -28,7 +41,7 @@ For details on how plugins handle compressed ``FP16`` models, see
Some large models (larger than a few GB), when compressed to ``FP16``, may consume an enormous amount of RAM during the loading
phase of inference. If you are facing such problems, try converting them without compression:
`mo --input_model INPUT_MODEL --compress_to_fp16=False`
``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)``
@endsphinxdirective

View File

@@ -0,0 +1,115 @@
# Convert Models Represented as Python Objects {#openvino_docs_MO_DG_Python_API}
@sphinxdirective
Model conversion API is represented by the ``convert_model()`` method in the ``openvino.tools.mo`` namespace. ``convert_model()`` is compatible with types from ``openvino.runtime``, such as ``PartialShape``, ``Layout``, and ``Type``.
``convert_model()`` provides the functionality available in the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or a TensorFlow Keras model, directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.
.. note::
Model conversion can be performed by either the ``convert_model()`` method or the ``mo`` command-line tool. The functionality described in this article is available in ``convert_model()`` only and is not present in the command-line tool.
``convert_model()`` returns an ``openvino.runtime.Model`` object, which can be compiled and inferred, or serialized to IR.
Example of converting a PyTorch model directly from memory:
.. code-block:: python
import torchvision
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model)
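As a minimal follow-up sketch (the device name and output file name are assumptions), the resulting ``ov.Model`` can be compiled for inference or serialized to IR:
.. code-block:: python
from openvino.runtime import Core, serialize
core = Core()
compiled_model = core.compile_model(ov_model, "CPU")  # compile for an assumed CPU device
serialize(ov_model, "resnet50.xml")                   # writes resnet50.xml and resnet50.bin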
The following types are supported as an input model for ``convert_model()``:
* PyTorch - ``torch.nn.Module``, ``torch.jit.ScriptModule``, ``torch.jit.ScriptFunction``. Refer to the :doc:`Converting a PyTorch Model<openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>` article for more details.
* TensorFlow / TensorFlow 2 / Keras - ``tf.keras.Model``, ``tf.keras.layers.Layer``, ``tf.compat.v1.Graph``, ``tf.compat.v1.GraphDef``, ``tf.Module``, ``tf.function``, ``tf.compat.v1.session``, ``tf.train.checkpoint``. Refer to the :doc:`Converting a TensorFlow Model<openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` article for more details.
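For example, a minimal sketch of converting a Keras model object directly (the architecture is chosen arbitrarily; TensorFlow is assumed to be installed):
.. code-block:: python
import tensorflow as tf
from openvino.tools.mo import convert_model
# an arbitrary Keras model used only for illustration; weights are not loaded
model = tf.keras.applications.MobileNetV2(weights=None)
ov_model = convert_model(model)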
``convert_model()`` accepts all parameters available in the MO command-line tool. Parameters can be specified by Python classes or string analogs, similar to the command-line tool.
Example of using native Python classes to set ``input_shape``, ``mean_values`` and ``layout``:
.. code-block:: python
from openvino.runtime import PartialShape, Layout
ov_model = convert_model(model, input_shape=PartialShape([1,3,100,100]), mean_values=[127, 127, 127], layout=Layout("NCHW"))
Example of using strings for setting ``input_shape``, ``mean_values`` and ``layout``:
.. code-block:: python
ov_model = convert_model(model, input_shape="[1,3,100,100]", mean_values="[127,127,127]", layout="NCHW")
The ``input`` parameter can be set by a ``tuple`` with a name, shape, and type. The input name, of type ``string``, is required in the tuple; the shape and type are optional.
The shape can be a ``list`` or ``tuple`` of dimensions (``int`` or ``openvino.runtime.Dimension``), or ``openvino.runtime.PartialShape``, or ``openvino.runtime.Shape``. The type can be of numpy type or ``openvino.runtime.Type``.
Example of using a tuple in the ``input`` parameter to cut a model:
.. code-block:: python
import numpy as np
ov_model = convert_model(model, input=("input_name", [3], np.float32))
For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters: ``name``, ``shape``, ``type``, and ``value``.
``InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])`` is equivalent to ``InputCutInfo(name="input_name", shape=[3], type=np.float32, value=[0.5, 2.1, 3.4])``.
Supported types for ``InputCutInfo``:
* name: ``string``.
* shape: ``list`` or ``tuple`` of dimensions (``int`` or ``openvino.runtime.Dimension``), ``openvino.runtime.PartialShape``, ``openvino.runtime.Shape``.
* type: ``numpy type``, ``openvino.runtime.Type``.
* value: ``numpy.ndarray``, ``list`` of numeric values, ``bool``.
Example of using ``InputCutInfo`` to freeze an input with value:
.. code-block:: python
import numpy as np
from openvino.tools.mo import convert_model, InputCutInfo
ov_model = convert_model(model, input=InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4]))
To set parameters for models with multiple inputs, use a ``list`` of values for each parameter.
Parameters supporting ``list``:
* input
* input_shape
* layout
* source_layout
* target_layout
* mean_values
* scale_values
Example of using lists to set shapes, types and layout for multiple inputs:
.. code-block:: python
ov_model = convert_model(model, input=[("input1", [1,3,100,100], np.float32), ("input2", [1,3,100,100], np.float32)], layout=[Layout("NCHW"), LayoutMap("NCHW", "NHWC")])
``layout``, ``source_layout``, and ``target_layout`` accept an ``openvino.runtime.Layout`` object or ``string``.
Example of using the ``Layout`` class to set the layout of a model input:
.. code-block:: python
from openvino.runtime import Layout
from openvino.tools.mo import convert_model
ov_model = convert_model(model, source_layout=Layout("NCHW"))
To set both source and destination layouts in the ``layout`` parameter, use the ``LayoutMap`` class. ``LayoutMap`` accepts two parameters: ``source_layout`` and ``target_layout``.
``LayoutMap("NCHW", "NHWC")`` is equivalent to ``LayoutMap(source_layout="NCHW", target_layout="NHWC")``.
Example of using the ``LayoutMap`` class to change the layout of a model input:
.. code-block:: python
from openvino.tools.mo import convert_model, LayoutMap
ov_model = convert_model(model, layout=LayoutMap("NCHW", "NHWC"))
@endsphinxdirective

View File

@@ -2,8 +2,22 @@
@sphinxdirective
.. important::
All of the issues below refer to :doc:`legacy functionalities <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`.
If your question is not covered by the topics below, use the `OpenVINO Support page <https://software.intel.com/en-us/openvino-toolkit/documentation/get-started>`__, where you can participate on a free forum.
If your question is not covered by the topics below, use the
`OpenVINO Support page <https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit>`__,
where you can participate in a free forum discussion.
.. warning::
Note that OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently being deprecated.
As legacy formats, they will not be supported as actively as the main frontends and will be removed entirely in the future.
.. _question-1:
Q1. What does the message "[ ERROR ]: Current caffe.proto does not contain field" mean?
@@ -45,7 +59,7 @@ For example, to add the description of the ``CustomReshape`` layer, which is an
3. Now, Model Optimizer is able to load the model into memory and start working with your extensions if there are any.
However, since your model has custom layers, you must register them as custom. To learn more about it, refer to :doc:`Custom Layers in Model Optimizer <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`.
However, since your model has custom layers, you must register them as custom. To learn more about it, refer to the :doc:`[Legacy] Custom Layers in Model Optimizer <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`.
.. _question-2:
@@ -69,28 +83,8 @@ Q3. What does the message "[ ERROR ]: Unable to create ports for node with id" m
**A:** Most likely, Model Optimizer does not know how to infer output shapes of some layers in the given topology.
To lessen the scope, compile the list of layers that are custom for Model Optimizer: present in the topology,
absent in the :doc:`list of supported layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` for the target framework. Then, refer to available options in the corresponding section in the :doc:`Custom Layers in Model Optimizer <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` page.
.. _question-4:
Q4. What does the message "Input image of shape is larger than mean image from file" mean?
#####################################################################################################################################################
**A:** Your model input shapes must be smaller than or equal to the shapes of the mean image file you provide. The idea behind the mean file is to subtract its values from the input image in an element-wise manner. When the mean file is smaller than the input image, there are not enough values to perform element-wise subtraction. Also, make sure you use the mean file that was used during the network training phase. Note that the mean file is dependent on dataset.
.. _question-5:
Q5. What does the message "Mean file is empty" mean?
#####################################################################################################################################################
**A:** Most likely, the mean file specified with the ``--mean_file`` flag is empty while Model Optimizer is launched. Make sure that this is exactly the required mean file and try to regenerate it from the given dataset if possible.
.. _question-6:
Q6. What does the message "Probably mean file has incorrect format" mean?
#####################################################################################################################################################
**A:** The mean file that you provide for Model Optimizer must be in the ``.binaryproto`` format. You can try to check the content, using recommendations from the BVLC Caffe (`#290 <https://github.com/BVLC/caffe/issues/290>`__).
absent in the :doc:`list of supported operations <openvino_resources_supported_operations_frontend>` for the target framework.
Then, refer to available options in the corresponding section in the :doc:`[Legacy] Custom Layers in Model Optimizer <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` page.
.. _question-7:
@@ -211,14 +205,6 @@ Q9. What does the message "Mean file for topologies with multiple inputs is not
**A:** Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to preprocess the inputs of the generated Intermediate Representation in OpenVINO Runtime, performing the subtraction for every input of your multi-input model. See the :doc:`Overview of Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>` for details.
.. _question-10:
Q10. What does the message "Cannot load or process mean file: value error" mean?
#####################################################################################################################################################
**A:** There are multiple reasons why Model Optimizer does not accept the mean file.
See FAQs :ref:`#4 <question-4>`, :ref:`#5 <question-5>`, and :ref:`#6 <question-6>`.
.. _question-11:
Q11. What does the message "Invalid prototxt file: value error" mean?
@@ -238,7 +224,7 @@ Q12. What does the message "Error happened while constructing caffe.Net in the C
Q13. What does the message "Cannot infer shapes due to exception in Caffe" mean?
#####################################################################################################################################################
**A:** Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the :doc:`Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` page.
**A:** Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` page.
.. _question-14:
@@ -263,18 +249,8 @@ Q16. What does the message "Input shape is required to convert MXNet model. Plea
.. _question-17:
Q17. What does the message "Both --mean_file and mean_values are specified. Specify either mean file or mean values" mean?
#####################################################################################################################################################
**A:** The ``--mean_file`` and ``--mean_values`` options are two ways of specifying preprocessing for the input. However, they cannot be used together, as it would mean double subtraction and lead to ambiguity. Choose one of these options and pass it with the corresponding CLI option.
.. _question-18:
Q18. What does the message "Negative value specified for --mean_file_offsets option. Please specify positive integer values in format '(x,y)'" mean?
#####################################################################################################################################################
**A:** You might have specified negative values with ``--mean_file_offsets``. Only positive integer values in format '(x,y)' must be used.
.. _question-19:
Q19. What does the message "Both --scale and --scale_values are defined. Specify either scale factor or scale values per input channels" mean?
@@ -291,255 +267,257 @@ Q20. What does the message "Cannot find prototxt file: for Caffe please specify
.. _question-21:
Q21. What does the message "Failed to create directory .. . Permission denied!" mean?
.. _question-22:
Q22. What does the message "Failed to create directory .. . Permission denied!" mean?
#####################################################################################################################################################
**A:** Model Optimizer cannot create a directory specified via ``--output_dir``. Make sure that you have enough permissions to create the specified directory.
.. _question-22:
.. _question-23:
Q22. What does the message "Discovered data node without inputs and value" mean?
Q23. What does the message "Discovered data node without inputs and value" mean?
#####################################################################################################################################################
**A:** One of the layers in the specified topology might not have inputs or values. Make sure that the provided ``caffemodel`` and ``protobuf`` files are correct.
.. _question-23:
.. _question-24:
Q23. What does the message "Part of the nodes was not translated to IE. Stopped" mean?
Q24. What does the message "Part of the nodes was not translated to IE. Stopped" mean?
#####################################################################################################################################################
**A:** Some of the operations are not supported by OpenVINO Runtime and cannot be translated to OpenVINO Intermediate Representation. You can extend Model Optimizer by allowing generation of new types of operations and implementing these operations in the dedicated OpenVINO plugins. For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-24:
.. _question-25:
Q24. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean?
Q25. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean?
#####################################################################################################################################################
**A:** Model Optimizer cannot build a graph based on a specified model. Most likely, it is incorrect.
.. _question-25:
.. _question-26:
Q25. What does the message "Node does not exist in the graph" mean?
Q26. What does the message "Node does not exist in the graph" mean?
#####################################################################################################################################################
**A:** You might have specified an output node via the ``--output`` flag that does not exist in a provided model. Make sure that the specified output is correct and this node exists in the current model.
.. _question-26:
.. _question-27:
Q26. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean?
Q27. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean?
##############################################################################################################################################################################
**A:** Most likely, Model Optimizer tried to cut the model by a specified input. However, other inputs are needed.
.. _question-27:
.. _question-28:
Q27. What does the message "Placeholder node does not have an input port, but input port was provided" mean?
Q28. What does the message "Placeholder node does not have an input port, but input port was provided" mean?
#####################################################################################################################################################
**A:** You might have specified a placeholder node with an input node, while the placeholder node does not have it in the model.
.. _question-28:
.. _question-29:
Q28. What does the message "Port index is out of number of available input ports for node" mean?
Q29. What does the message "Port index is out of number of available input ports for node" mean?
#####################################################################################################################################################
**A:** This error occurs when an incorrect input port is specified with the ``--input`` command line argument. When using ``--input``, you may optionally specify an input port in the form: ``X:node_name``, where ``X`` is an integer index of the input port starting from 0 and ``node_name`` is the name of a node in the model. This error occurs when the specified input port ``X`` is not in the range 0..(n-1), where n is the number of input ports for the node. Specify a correct port index, or do not use it if it is not needed.
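For example, assuming a hypothetical node named ``data_node``, its first input port is selected as follows:
.. code-block:: sh
mo --input_model model.onnx --input 0:data_node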
.. _question-29:
.. _question-30:
Q29. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean?
Q30. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean?
######################################################################################################################################################################################################
**A:** This error occurs when an incorrect combination of the ``--input`` and ``--input_shape`` command line options is used. Using both ``--input`` and ``--input_shape`` is valid only if ``--input`` points to the ``Placeholder`` node, a node with one input port or ``--input`` has the form ``PORT:NODE``, where ``PORT`` is an integer port index of input for node ``NODE``. Otherwise, the combination of ``--input`` and ``--input_shape`` is incorrect.
.. _question-30:
.. _question-31:
Q30. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean?
Q31. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean?
#######################################################################################################################################################################################################################################
**A:** When using the ``PORT:NODE`` notation for the ``--input`` command line argument and ``PORT`` > 0, you should specify ``--input_shape`` for this input. This is a limitation of the current Model Optimizer implementation.
> **NOTE**: It is no longer relevant message since the limitation on input port index for model truncation has been resolved.
.. note:: This message is no longer relevant, since the limitation on the input port index for model truncation has been resolved.
.. _question-31:
.. _question-32:
Q31. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean?
Q32. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean?
#####################################################################################################################################################
**A:** You might have provided only one shape for the placeholder, while there are none or multiple inputs in the model. Make sure that you have provided the correct data for placeholder nodes.
.. _question-32:
Q32. What does the message "The amount of input nodes for port is not equal to 1" mean?
#####################################################################################################################################################
**A:** This error occurs when the ``SubgraphMatch.single_input_node`` function is used for an input port that supplies more than one node in a sub-graph. The ``single_input_node`` function can be used only for ports that has a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the ``input_nodes`` function or ``node_by_pattern`` function instead of ``single_input_node``. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
.. _question-33:
Q33. What does the message "Output node for port has already been specified" mean?
Q33. What does the message "The amount of input nodes for port is not equal to 1" mean?
#####################################################################################################################################################
**A:** This error occurs when the ``SubgraphMatch.single_input_node`` function is used for an input port that supplies more than one node in a sub-graph. The ``single_input_node`` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the ``input_nodes`` function or ``node_by_pattern`` function instead of ``single_input_node``. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Model_Optimizer_Extensions_Model_Optimizer_Transformation_Extensions>` guide.
.. _question-34:
Q34. What does the message "Output node for port has already been specified" mean?
#####################################################################################################################################################
**A:** This error occurs when the ``SubgraphMatch._add_output_node`` function is called manually from user's extension code. This is an internal function, and you should not call it directly.
.. _question-34:
Q34. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean?
#####################################################################################################################################################
**A:** While using configuration file to implement a TensorFlow front replacement extension, an incorrect match kind was used. Only ``points`` or ``scope`` match kinds are supported. For more details, refer to the :doc:`Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
.. _question-35:
Q35. What does the message "Cannot write an event file for the TensorBoard to directory" mean?
Q35. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean?
#####################################################################################################################################################
**A:** While using a configuration file to implement a TensorFlow front replacement extension, an incorrect match kind was used. Only the ``points`` or ``scope`` match kinds are supported. For more details, refer to the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
.. _question-36:
Q36. What does the message "Cannot write an event file for the TensorBoard to directory" mean?
#####################################################################################################################################################
**A:** Model Optimizer tried to write an event file in the specified directory but failed to do that. That could happen when the specified directory does not exist or you do not have permissions to write in it.
.. _question-36:
.. _question-37:
Q36. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean?
Q37. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean?
#####################################################################################################################################################
**A:** Most likely, you tried to extend Model Optimizer with a new primitive, but you did not specify an infer function. For more information on extensions, see the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-37:
.. _question-38:
Q37. What does the message "Stopped shape/value propagation at node" mean?
Q38. What does the message "Stopped shape/value propagation at node" mean?
#####################################################################################################################################################
**A:** Model Optimizer cannot infer shapes or values for the specified node. It can happen because of the following reasons: a bug exists in the custom shape infer function, the node inputs have incorrect values/shapes, or the input shapes are incorrect.
.. _question-38:
.. _question-39:
Q38. What does the message "The input with shape .. does not have the batch dimension" mean?
Q39. What does the message "The input with shape .. does not have the batch dimension" mean?
#####################################################################################################################################################
**A:** The batch dimension is the first dimension in the shape and it should be equal to 1 or undefined. In your case, it is neither equal to 1 nor undefined, which is why the ``-b`` shortcut produces undefined and unspecified behavior. To resolve the issue, specify full shapes for each input with the ``--input_shape`` option. Run Model Optimizer with the ``--help`` option to learn more about the notation for input shapes.
.. _question-39:
.. _question-40:
Q39. What does the message "Not all output shapes were inferred or fully defined for node" mean?
Q40. What does the message "Not all output shapes were inferred or fully defined for node" mean?
#####################################################################################################################################################
**A:** Most likely, the shape is not defined (partially or fully) for the specified node. You can use ``--input_shape`` with positive integers to override model input shapes.
.. _question-40:
.. _question-41:
Q40. What does the message "Shape for tensor is not defined. Can not proceed" mean?
Q41. What does the message "Shape for tensor is not defined. Can not proceed" mean?
#####################################################################################################################################################
**A:** This error occurs when the ``--input`` command-line option is used to cut a model and ``--input_shape`` is not used to override shapes for a node, so a shape for the node cannot be inferred by Model Optimizer. You need to help Model Optimizer by specifying shapes with ``--input_shape`` for each node specified with the ``--input`` command-line option.
.. _question-41:
.. _question-42:
Q41. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean?
Q42. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean?
#####################################################################################################################################################
**A:** To convert TensorFlow models with Model Optimizer, TensorFlow 1.2 or newer must be installed. For more information on prerequisites, see the :doc:`Configuring Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` guide.
.. _question-42:
.. _question-43:
Q42. What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean?
Q43. What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean?
#####################################################################################################################################################
**A:** The model file should contain a frozen TensorFlow graph in the text or binary format. Make sure that ``--input_model_is_text`` is provided for a model in the text format. By default, a model is interpreted as a binary file.
.. _question-43:
.. _question-44:
Q43. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean?
Q44. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean?
#####################################################################################################################################################
**A:** Most likely, there is a problem with the specified file for the model. The file exists, but it has an invalid format or is corrupted.
.. _question-44:
.. _question-45:
Q44. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean?
Q45. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean?
##########################################################################################################################################################################
**A:** This means that the layer ``{layer_name}`` is not supported in Model Optimizer. You will find a list of all unsupported layers in the corresponding section. You should implement the extensions for this layer. See :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` for more information.
.. _question-45:
.. _question-46:
Q45. What does the message "Custom replacement configuration file does not exist" mean?
Q46. What does the message "Custom replacement configuration file does not exist" mean?
#####################################################################################################################################################
**A:** A path to the custom replacement configuration file was provided with the ``--transformations_config`` flag, but the file could not be found. Make sure the specified path is correct and the file exists.
.. _question-47:
Q46. What does the message "Extractors collection have case insensitive duplicates" mean?
Q47. What does the message "Extractors collection have case insensitive duplicates" mean?
#####################################################################################################################################################
**A:** When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-48:
Q47. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean?
Q48. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean?
#####################################################################################################################################################
**A:** Model Optimizer cannot load an MXNet model in the specified file format. Make sure you use the ``.json`` or ``.param`` format.
.. _question-49:
Q48. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean?
Q49. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean?
#####################################################################################################################################################
**A:** There are models where ``Placeholder`` has the UINT8 type and the first operation after it is 'Cast', which casts the input to FP32. Model Optimizer detected that the ``Placeholder`` has the UINT8 type, but the next operation is not 'Cast' to float. Model Optimizer does not support such a case. Make sure you change the model to have ``Placeholder`` for FP32.
.. _question-50:
Q49. What does the message "Data type is unsupported" mean?
Q50. What does the message "Data type is unsupported" mean?
#####################################################################################################################################################
**A:** Model Optimizer cannot read the value with the specified data type. Currently, the following types are supported: bool, float16, float32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, str.
.. _question-51:
Q50. What does the message "No node with name ..." mean?
Q51. What does the message "No node with name ..." mean?
#####################################################################################################################################################
**A:** Model Optimizer tried to access a node that does not exist. This could happen if you have specified an incorrect placeholder, input, or output node name.
.. _question-52:
Q51. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean?
Q52. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean?
#####################################################################################################################################################
**A:** To convert MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the :doc:`Configuring Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` guide.
.. _question-53:
Q52. What does the message "The following error happened while loading MXNet model .." mean?
Q53. What does the message "The following error happened while loading MXNet model .." mean?
#####################################################################################################################################################
**A:** Most likely, there is a problem with loading the MXNet model. Make sure the specified path is correct, the model exists and is not corrupted, and you have sufficient permissions to work with it.
.. _question-54:
Q53. What does the message "The following error happened while processing input shapes: .." mean?
Q54. What does the message "The following error happened while processing input shapes: .." mean?
#####################################################################################################################################################
**A:** Make sure inputs are defined and have correct shapes. You can use ``--input_shape`` with positive integers to override model input shapes.
.. _question-55:
Q54. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean?
Q55. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean?
#####################################################################################################################################################
**A:** When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-56:
Q55. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean?
Q56. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean?
#####################################################################################################################################################
**A:** Specifying the batch and the input shapes at the same time is not supported. You must specify a desired batch as the first value of the input shape.
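For example, instead of combining ``--batch`` with ``--input_shape``, encode the batch directly (all values here are placeholders):

.. code-block:: sh

   # Set batch size 4 as the first dimension of the shape
   # rather than passing --batch 4 together with --input_shape
   mo --input_model model.onnx --input_shape [4,3,224,224]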
.. _question-57:
Q56. What does the message "Input shape .. cannot be parsed" mean?
Q57. What does the message "Input shape .. cannot be parsed" mean?
#####################################################################################################################################################
**A:** The specified input shape cannot be parsed. Define it in one of the following ways:
Keep in mind that there must be no spaces inside or between the brackets for input shapes.
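As an illustration, both bracket styles below are typically accepted, as long as no spaces appear inside or between the brackets (the model name and values are placeholders; the parentheses form is quoted so the shell does not interpret it):

.. code-block:: sh

   mo --input_model model.onnx --input_shape [1,3,224,224]
   mo --input_model model.onnx --input_shape "(1,3,224,224)"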
.. _question-58:
Q57. What does the message "Please provide input layer names for input layer shapes" mean?
Q58. What does the message "Please provide input layer names for input layer shapes" mean?
#####################################################################################################################################################
**A:** When specifying input shapes for several layers, you must provide names for the inputs whose shapes will be overwritten. For usage examples, see the :doc:`Converting a Caffe Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`. Additional information for ``--input_shape`` is in FAQ :ref:`#57 <question-57>`.
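A sketch for a hypothetical model with two inputs named ``data`` and ``seq_len`` (all names and values are placeholders):

.. code-block:: sh

   # Shapes are matched to inputs by position:
   # data gets [1,3,224,224], seq_len gets [1]
   mo --input_model model.onnx --input data,seq_len --input_shape [1,3,224,224],[1]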
.. _question-59:
Q58. What does the message "Values cannot be parsed" mean?
Q59. What does the message "Values cannot be parsed" mean?
#####################################################################################################################################################
**A:** Mean values for the given parameter cannot be parsed. It should be a string with a list of mean values. For example, in '(1,2,3)', 1 stands for the RED channel, 2 for the GREEN channel, 3 for the BLUE channel.
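For instance, for a hypothetical input named ``data`` (the values are placeholders):

.. code-block:: sh

   # Mean values are listed in R,G,B channel order; quotes keep the shell
   # from interpreting the parentheses
   mo --input_model model.onnx --mean_values "data(123.68,116.78,103.94)"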
.. _question-60:
Q59. What does the message ".. channels are expected for given values" mean?
Q60. What does the message ".. channels are expected for given values" mean?
#####################################################################################################################################################
**A:** The number of channels and the number of given values for mean values do not match. The shape should be defined as '(R,G,B)' or '[R,G,B]'. The shape should not contain undefined dimensions (? or -1). The order of values is as follows: (value for a RED channel, value for a GREEN channel, value for a BLUE channel).
.. _question-61:
Q60. What does the message "You should specify input for each mean value" mean?
Q61. What does the message "You should specify input for each mean value" mean?
#####################################################################################################################################################
**A:** Most likely, you didn't specify inputs using ``--mean_values``. Specify inputs with the ``--input`` flag. For usage examples, refer to the FAQ :ref:`#63 <question-63>`.
.. _question-62:
Q61. What does the message "You should specify input for each scale value" mean?
Q62. What does the message "You should specify input for each scale value" mean?
#####################################################################################################################################################
**A:** Most likely, you didn't specify inputs using ``--scale_values``. Specify inputs with the ``--input`` flag. For usage examples, refer to the FAQ :ref:`#64 <question-64>`.
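A combined sketch for a single hypothetical input named ``data``, passing ``--input`` together with both value flags (all numbers are placeholders):

.. code-block:: sh

   mo --input_model model.onnx --input data \
      --mean_values "data(123.68,116.78,103.94)" \
      --scale_values "data(58.8,57.1,57.4)"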
.. _question-63:
Q62. What does the message "Number of inputs and mean values does not match" mean?
Q63. What does the message "Number of inputs and mean values does not match" mean?
#####################################################################################################################################################
**A:** The number of specified mean values and the number of inputs must be equal. For a usage example, refer to the :doc:`Converting a Caffe Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>` guide.
.. _question-64:
Q63. What does the message "Number of inputs and scale values does not match" mean?
Q64. What does the message "Number of inputs and scale values does not match" mean?
#####################################################################################################################################################
**A:** The number of specified scale values and the number of inputs must be equal. For a usage example, refer to the :doc:`Converting a Caffe Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>` guide.
.. _question-65:
Q64. What does the message "No class registered for match kind ... Supported match kinds are .. " mean?
Q65. What does the message "No class registered for match kind ... Supported match kinds are .. " mean?
#####################################################################################################################################################
**A:** A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the ``match_kind`` attribute. The attribute may have only one of the values: ``scope`` or ``points``. If a different value is provided, this error is displayed.
.. _question-66:
Q66. What does the message "The instance must be a single dictionary for the custom replacement with id .." mean?
Q66. What does the message "No instance(s) is(are) defined for the custom replacement" mean?
#####################################################################################################################################################
**A:** A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the ``instances`` attribute. This attribute is mandatory. This error will occur if the attribute is missing. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
.. _question-67:
Q67. What does the message "No instances are defined for replacement with id .. " mean?
Q67. What does the message "The instance must be a single dictionary for the custom replacement with id .." mean?
#####################################################################################################################################################
**A:** A replacement defined in the configuration file for sub-graph replacement, using start/end nodes, has the ``instances`` attribute. For this type of replacement, the instance must be defined with a dictionary with two keys ``start_points`` and ``end_points``. Values for these keys are lists with the start and end node names, respectively. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Model_Optimizer_Extensions_Model_Optimizer_Transformation_Extensions>` guide.
.. _question-68:
Q68. What does the message "No instances are defined for replacement with id .. " mean?
#####################################################################################################################################################
**A:** A replacement for the specified id is not defined in the configuration file. For more information, refer to the FAQ :ref:`#66 <question-66>`.
.. _question-69:
Q68. What does the message "Custom replacements configuration file .. does not exist" mean?
Q69. What does the message "Custom replacements configuration file .. does not exist" mean?
#####################################################################################################################################################
**A:** The path to a custom replacement configuration file was provided with the ``--transformations_config`` flag, but it cannot be found. Make sure the specified path is correct and the file exists.
.. _question-70:
Q69. What does the message "Failed to parse custom replacements configuration file .." mean?
Q70. What does the message "Failed to parse custom replacements configuration file .." mean?
#####################################################################################################################################################
**A:** The file for custom replacement configuration provided with the ``--transformations_config`` flag cannot be parsed. In particular, it should have a valid JSON structure. For more details, refer to the `JSON Schema Reference <https://spacetelescope.github.io/understanding-json-schema/reference/index.html>`__ page.
.. _question-71:
Q70. What does the message "One of the custom replacements in the configuration file .. does not contain attribute 'id'" mean?
Q71. What does the message "One of the custom replacements in the configuration file .. does not contain attribute 'id'" mean?
#####################################################################################################################################################
**A:** Every custom replacement should declare a set of mandatory attributes and their values. For more details, refer to FAQ :ref:`#72 <question-72>`.
.. _question-72:
Q71. What does the message "File .. validation failed" mean?
Q72. What does the message "File .. validation failed" mean?
#####################################################################################################################################################
**A:** The file for custom replacement configuration provided with the ``--transformations_config`` flag cannot pass validation. Make sure you have specified ``id``, ``instances``, and ``match_kind`` for all the patterns.
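A minimal configuration sketch that carries all three mandatory attributes might look as follows (the id, node names, and file names are hypothetical):

.. code-block:: sh

   # Write a minimal replacement description with id, match_kind, and instances
   cat > my_replacement.json << 'EOF'
   [
     {
       "id": "MyCustomReplacement",
       "match_kind": "points",
       "instances": {
         "start_points": ["node_a"],
         "end_points": ["node_b"]
       }
     }
   ]
   EOF
   mo --input_model model.onnx --transformations_config my_replacement.json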
.. _question-73:
Q72. What does the message "Cannot update the file .. because it is broken" mean?
Q73. What does the message "Cannot update the file .. because it is broken" mean?
#####################################################################################################################################################
**A:** The custom replacement configuration file provided with the ``--tensorflow_custom_operations_config_update`` option cannot be parsed. Make sure that the file is correct and refer to FAQ :ref:`#69 <question-69>`, :ref:`#70 <question-70>`, :ref:`#71 <question-71>`, and :ref:`#72 <question-72>`.
.. _question-74:
Q73. What does the message "End node .. is not reachable from start nodes: .." mean?
Q74. What does the message "End node .. is not reachable from start nodes: .." mean?
#####################################################################################################################################################
**A:** This error occurs when you try to make a sub-graph match. It is detected that between the start and end nodes that were specified as inputs/outputs for the subgraph to find, there are nodes marked as outputs but there is no path from them to the input nodes. Make sure the subgraph you want to match actually contains all the specified output nodes.
.. _question-75:
Q75. What does the message "... elements of ... were clipped to infinity while converting a blob for node [...] to ..." mean?
Q75. What does the message "Sub-graph contains network input node .." mean?
#####################################################################################################################################################
**A:** The start or end node for the sub-graph replacement using start/end nodes is specified incorrectly. Model Optimizer finds internal nodes of the sub-graph strictly "between" the start and end nodes, and then adds all input nodes to the sub-graph (and the inputs of their inputs, etc.) for these "internal" nodes. This error reports that Model Optimizer reached an input node during this phase. This means that the start/end points are specified incorrectly in the configuration file. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Model_Optimizer_Extensions_Model_Optimizer_Transformation_Extensions>` guide.
.. _question-76:
Q76. What does the message "... elements of ... were clipped to zero while converting a blob for node [...] to ..." mean?
Q76. What does the message "... elements of ... were clipped to infinity while converting a blob for node [...] to ..." mean?
#####################################################################################################################################################
**A:** This message may appear when the ``--compress_to_fp16`` command-line option is used. This option implies compression of all the model weights, biases, and other constant values to FP16. If a value of a constant is out of the range of valid FP16 values, the value is converted to positive or negative infinity. It may lead to incorrect results of inference or may not be a problem, depending on the model. The number of such elements and the total number of elements in the constant value is printed out together with the name of the node, where this value is used.
.. _question-77:
Q77. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean?
Q77. What does the message "... elements of ... were clipped to zero while converting a blob for node [...] to ..." mean?
#####################################################################################################################################################
**A:** This message may appear when the ``--compress_to_fp16`` command-line option is used. This option implies conversion of all blobs in the model to FP16. If a value in the blob is so close to zero that it cannot be represented as a valid FP16 value, it is converted to a true zero FP16 value. Depending on the model, it may lead to incorrect results of inference or may not be a problem. The number of such elements and the total number of elements in the blob are printed out together with a name of the node, where this blob is used.
.. _question-78:
Q78. What does the message "The topology contains no "input" layers" mean?
Q78. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean?
#####################################################################################################################################################
**A:** This error occurs when the ``SubgraphMatch.node_by_pattern`` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make an unambiguous match to a single sub-graph node. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Model_Optimizer_Extensions_Model_Optimizer_Transformation_Extensions>` guide.
.. _question-79:
Q79. What does the message "The topology contains no "input" layers" mean?
#####################################################################################################################################################
**A:** Your Caffe topology ``.prototxt`` file is intended for training. Model Optimizer expects a deployment-ready ``.prototxt`` file. To fix the problem, prepare a deployment-ready ``.prototxt`` file. Preparation of a deploy-ready topology usually results in removing ``data`` layer(s), adding ``input`` layer(s), and removing loss layer(s).
.. _question-80:
Q79. What does the message "Warning: please expect that Model Optimizer conversion might be slow" mean?
Q80. What does the message "Warning: please expect that Model Optimizer conversion might be slow" mean?
#####################################################################################################################################################
**A:** You are using an unsupported Python version. Use only versions 3.4 - 3.6 for the C++ ``protobuf`` implementation that is supplied with the OpenVINO toolkit. You can still boost the conversion speed by building the protobuf library from sources. For complete instructions about building ``protobuf`` from sources, see the appropriate section in the :doc:`Converting a Model to Intermediate Representation <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` guide.
.. _question-81:
Q80. What does the message "Arguments --nd_prefix_name, --pretrained_model_name and --input_symbol should be provided. Please provide all or do not use any." mean?
Q81. What does the message "Arguments --nd_prefix_name, --pretrained_model_name and --input_symbol should be provided. Please provide all or do not use any." mean?
####################################################################################################################################################################
**A:** This error occurs if you did not provide the ``--nd_prefix_name``, ``--pretrained_model_name``, and ``--input_symbol`` parameters.
Model Optimizer requires these parameters when converting an MXNet model that is loaded from one ``.params`` file and two additional ``.nd`` files (``*_args.nd``, ``*_auxs.nd``).
To do that, provide all three CLI options or do not pass any of them if you want to convert an MXNet model without additional weights.
For more information, refer to the :doc:`Converting an MXNet Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>` guide.
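A hedged sketch of passing all three options together (all file names, the prefix, and the shape are placeholders; the exact pairing of options depends on how the model was saved):

.. code-block:: sh

   # Convert an MXNet model split across a symbol file
   # and *_args.nd / *_auxs.nd weight files
   mo --input_symbol model-symbol.json \
      --nd_prefix_name model \
      --pretrained_model_name pretrained \
      --input_shape [1,3,224,224]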
.. _question-82:
Q81. What does the message "You should specify input for mean/scale values" mean?
Q82. What does the message "You should specify input for mean/scale values" mean?
#####################################################################################################################################################
**A:** When the model has multiple inputs and you want to provide mean/scale values, you need to pass those values for each input. More specifically, the number of passed values should be the same as the number of inputs of the model.
For more information, refer to the :doc:`Converting a Model to Intermediate Representation <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
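For example, a hypothetical model with inputs ``data`` and ``aux`` (all names and numbers are placeholders):

.. code-block:: sh

   # One mean/scale group per input; the count must match the number of inputs
   mo --input_model model.onnx --input data,aux \
      --mean_values "data(127.5,127.5,127.5),aux(127.5)" \
      --scale_values "data(255,255,255),aux(255)"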
.. _question-83:
Q82. What does the message "Input with name ... not found!" mean?
Q83. What does the message "Input with name ... not found!" mean?
#####################################################################################################################################################
**A:** If you pass mean/scale values and specify names of input layers of the model, you might have used a name that does not correspond to any input layer. Make sure that you list only names of the input layers of your model when passing values with the ``--input`` option.
For more information, refer to the :doc:`Converting a Model to Intermediate Representation <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
.. _question-84:
Q83. What does the message "Specified input json ... does not exist" mean?
Q84. What does the message "Specified input json ... does not exist" mean?
#####################################################################################################################################################
**A:** Most likely, the ``.json`` file does not exist or has a name that does not match the notation of Apache MXNet. Make sure the file exists and has a correct name.
For more information, refer to the :doc:`Converting an MXNet Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>` guide.
.. _question-85:
Q84. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean?
Q85. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean?
#####################################################################################################################################################
**A:** Model Optimizer for Apache MXNet supports only the ``.params`` and ``.nd`` file formats. Most likely, you specified an unsupported file format in ``--input_model``.
For more information, refer to :doc:`Converting an MXNet Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`.
.. _question-86:
Q85. What does the message "Operation ... not supported. Please register it as custom op" mean?
Q86. What does the message "Operation ... not supported. Please register it as custom op" mean?
#####################################################################################################################################################
**A:** Model Optimizer tried to load a model that contains some unsupported operations.
If you want to convert a model that contains unsupported operations, you need to prepare an extension for each such operation.
For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-87:
Q86. What does the message "Can not register Op ... Please, call function 'register_caffe_python_extractor' with parameter 'name'" mean?
Q87. What does the message "Can not register Op ... Please, call function 'register_caffe_python_extractor' with parameter 'name'" mean?
#####################################################################################################################################################
**A:** This error appears if the class of implementation of ``Op`` for a Python Caffe layer could not be used by Model Optimizer. Python layers should be handled differently compared to ordinary Caffe layers.
The second call prevents Model Optimizer from using this extension as if it is an extension for a layer with type ``Proposal``. Otherwise, this layer can be chosen as an implementation of the extension, which can lead to potential issues.
For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-88:
Q87. What does the message "Model Optimizer is unable to calculate output shape of Memory node .." mean?
Q88. What does the message "Model Optimizer is unable to calculate output shape of Memory node .." mean?
#####################################################################################################################################################
**A:** Model Optimizer supports only ``Memory`` layers, in which ``input_memory`` goes before ``ScaleShift`` or the ``FullyConnected`` layer.
This error message means that in your model the layer after input memory is not of the ``ScaleShift`` or ``FullyConnected`` type.
This is a known limitation.
.. _question-89:
Q88. What do the messages "File ... does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with <Nnet> tag" mean?
Q89. What do the messages "File ... does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with <Nnet> tag" mean?
#########################################################################################################################################################
**A:** These error messages mean that Model Optimizer does not support your Kaldi model, because the ``checksum`` of the model is not
16896 (the model should start with this number), or the model file does not contain the ``<Nnet>`` tag as a starting one.
Make sure that you provide a path to a true Kaldi model and try again.
.. _question-90:
Q89. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean?
Q90. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean?
#####################################################################################################################################################
**A:** These messages mean that the counts file you passed does not consist of a single line. The counts file should start with
``[`` and end with ``]``, and integer values should be separated by spaces between those brackets.
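For instance, a valid counts file could be created like this (the values are placeholders):

.. code-block:: sh

   # A single line: integers separated by spaces, enclosed in brackets
   echo "[ 15 378 23 107 ]" > counts.txt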
.. _question-91:
Q90. What does the message "Model Optimizer is not able to read Kaldi model .." mean?
Q91. What does the message "Model Optimizer is not able to read Kaldi model .." mean?
#####################################################################################################################################################
**A:** There are multiple reasons why Model Optimizer does not accept a Kaldi topology, including that
the file is not available or does not exist. Refer to FAQ :ref:`#89 <question-89>`.
.. _question-92:
Q91. What does the message "Model Optimizer is not able to read counts file .." mean?
Q92. What does the message "Model Optimizer is not able to read counts file .." mean?
#####################################################################################################################################################
**A:** There are multiple reasons why Model Optimizer does not accept a counts file, including that
the file is not available or does not exist. Refer to FAQ :ref:`#90 <question-90>`.
.. _question-93:
Q92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean?
Q93. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean?
###############################################################################################################################################################################################
**A:** This message means that if you have a model with custom layers and its JSON file has been generated with Apache MXNet version 1.0.0 or lower, Model Optimizer does not support such topologies. To convert the model, either rebuild
MXNet with unsupported layers or generate a new JSON file with a newer version of Apache MXNet, and implement an
OpenVINO extension to use custom layers.
For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>` guide.
.. _question-94:
Q93. What does the message "Graph contains a cycle. Can not proceed .." mean?
Q94. What does the message "Expected token ``</ParallelComponent>``, has ``...``" mean?
#####################################################################################################################################################
**A:** This error message means that Model Optimizer does not support your Kaldi model, because the net contains a ``ParallelComponent`` that does not end with the ``</ParallelComponent>`` tag.
Make sure that you provide a path to a true Kaldi model and try again.
.. _question-95:
.. _question-96:
.. _question-97:
Q97. What does the message "Graph contains a cycle. Can not proceed .." mean?
#####################################################################################################################################################
**A:** Model Optimizer supports only straightforward models without cycles.
For TensorFlow:
For all frameworks:
1. :doc:`Replace cycle containing Sub-graph in Model Optimizer [Legacy Solution] <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
2. See :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>`
or
* Edit the model in its original framework to exclude the cycle.
.. _question-98:
Q94. What does the message "Can not transpose attribute '..' with value .. for node '..' .." mean?
#####################################################################################################################################################
.. _question-99:
**A:** This message means that the model is not supported. It may be caused by using shapes larger than 4-D.
There are two ways to avoid such message:
.. _question-100:
* :doc:`Cut off parts of the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
* Edit the network in its original framework to exclude such layers.
Q100. What does the message "Interp layer shape inference function may be wrong, please, try to update layer shape inference function in the file (extensions/ops/interp.op at the line ...)." mean?
####################################################################################################################################################################################################
**A:** There are many flavors of the Caffe framework, and most layers in them are implemented identically.
However, there are exceptions. For example, the output value of the ``Interp`` layer is calculated differently in Deeplab-Caffe and classic Caffe. Therefore, if your model contains the ``Interp`` layer and the conversion of your model has failed, modify the ``interp_infer`` function in the ``extensions/ops/interp.op`` file according to the comments in the file.
.. _question-101:
Q97. What does the message "Mean/scale values should ..." mean?
Q101. What does the message "Mean/scale values should ..." mean?
#####################################################################################################################################################
**A:** It means that your mean/scale values are in the wrong format. Specify mean/scale values in the form of ``layer_name(val1,val2,val3)``.
You need to specify values for each input of the model. For more information, refer to the :doc:`Converting a Model to Intermediate Representation <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
.. _question-102:
Q98. What does the message "Operation _contrib_box_nms is not supported ..." mean?
Q102. What does the message "Operation _contrib_box_nms is not supported ..." mean?
#####################################################################################################################################################
**A:** It means that you are trying to convert a topology that contains the ``_contrib_box_nms`` operation, which is not supported directly. However, the sub-graph of operations including ``_contrib_box_nms`` could be replaced with the DetectionOutput layer if your topology is one of the ``gluoncv`` topologies. Specify the ``--enable_ssd_gluoncv`` command-line parameter for Model Optimizer to enable this transformation.
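A sketch of enabling this transformation (the model file name is a placeholder):

.. code-block:: sh

   # Replace the _contrib_box_nms sub-graph with DetectionOutput
   # for a gluoncv topology
   mo --input_model ssd_512_mobilenet1.0-0000.params --enable_ssd_gluoncv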
.. _question-103:
Q99. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean?
Q103. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean?
#####################################################################################################################################################
**A:** If a ``*.caffemodel`` file exists and is correct, the error possibly occurred because of the use of the Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "``utf-8`` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use a newer Python version (3.7 - 3.11) or build the ``cpp`` implementation of ``protobuf`` yourself for your version of Python. For the complete instructions about building ``protobuf`` from sources, see the appropriate section in the :doc:`Converting Models with Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` guide.
.. _question-104:
.. _question-105:
Q100. What does the message "SyntaxError: 'yield' inside list comprehension" during MxNet model conversion mean?
#####################################################################################################################################################
**A:** The issue "SyntaxError: ``yield`` inside list comprehension" might occur during converting MXNet models (``mobilefacedet-v1-mxnet``, ``brain-tumor-segmentation-0001``) on Windows platform with Python 3.8 environment. This issue is caused by the API changes for ``yield expression`` in Python 3.8.
The following workarounds are suggested to resolve this issue:
1. Use Python 3.7 to convert MXNet models on Windows
2. Update Apache MXNet by using ``pip install mxnet==1.7.0.post2``
Note that it might have conflicts with previously installed PyPI dependencies.
Q105. What does the message "The IR preparation was executed by the legacy MO path. ..." mean?
#####################################################################################################################################################
**A:** For models in ONNX format, there are two available paths of IR conversion.
# Supported Framework Layers {#openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers}
@sphinxdirective
In this article, you can find lists of supported framework layers, grouped by frameworks.
Caffe Supported Layers
##########################################
========================================== ==========================================================================================
Layer Name in Caffe Limitations
========================================== ==========================================================================================
Axpy
BN
BatchNorm
Bias
Binarization (Intel experimental)
Concat
Convolution
ConvolutionBinary
Crop
Deconvolution
DetectionOutput
Dropout Not needed for inference.
Eltwise
Flatten
GlobalInput
InnerProduct
Input
LRN
Normalize
Python Supported only for the Python Proposal operation.
Permute
Pooling
Power
PReLU
PriorBox
PriorBoxClustered
Proposal
PSROIPooling
ROIPooling
RegionYolo
ReorgYolo
ReLU
Resample
Reshape
Scale
ShuffleChannel
Sigmoid
Slice
Softmax
Tile
========================================== ==========================================================================================
Apache MXNet Supported Symbols
##########################################
========================================== ==========================================================================================
Symbol Name in Apache MXNet Limitations
========================================== ==========================================================================================
_Plus
_contrib_arange_like
_contrib_box_nms
_contrib_DeformableConvolution
_contrib_DeformablePSROIPooling
_contrib_div_sqrt_dim
_contrib_MultiBoxDetection ``force_suppress`` = 1 is not supported, non-default variances are not supported.
_contrib_MultiBoxPrior
_contrib_Proposal
_copy Not needed for inference
_div_scalar
_greater_scalar
_minus_scalar
_mul_scalar
_plus_scalar
_random_uniform Operation provides sequence from uniform distribution, but exact values won't match.
_rnn_param_concat
_arange
_contrib_AdaptiveAvgPooling2D Converted to the Average Pooling with fixed paddings.
_maximum
_minimum
_np_roll
_zeros
add_n
arccosh
arcsinh
arctanh
batch_dot
broadcast_add
broadcast_div
broadcast_mul
broadcast_sub
BlockGrad
cumsum
div_scalar
elementwise_sub
elemwise_add
elemwise_mul
elemwise_sub
exp
expand_dims
greater_scalar
max
minus_scalar
null Not needed for inference.
LayerNorm ``output_mean_var`` = True is not supported.
repeat
rnn
rnn_param_concat
round
sigmoid
slice
SliceChannel
slice_axis
slice_channel
slice_like
softmax
stack
swapaxis
tile
transpose
zeros
Activation Supported ``act_type`` = ``relu``, ``sigmoid``, ``softrelu`` or ``tanh``
BatchNorm
Concat
Convolution
Crop ``center_crop`` = 1 is not supported.
Custom See :doc:`Custom Layers in Model Optimizer <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
Deconvolution
DeformableConvolution
DeformablePSROIPooling
Dropout Not needed for inference.
ElementWiseSum
Embedding
Flatten
FullyConnected
InstanceNorm
L2Normalization Only 4D input is supported.
LRN
LeakyReLU Supported ``act_type`` = ``prelu``, ``elu``, ``leaky``, ``gelu``
ones_like
Pad
Pooling
ROIPooling
ReLU
Reshape
ScaleShift
SoftmaxActivation
SoftmaxOutput
SoftSign
Take The attribute ``mode`` is not supported.
Tile
UpSampling
Where
zeros_like
========================================== ==========================================================================================
TensorFlow Supported Operations
#########################################
Some TensorFlow operations do not match any OpenVINO operations. Yet, they are still supported by
Model Optimizer and can be used on the constant propagation path. These layers are labeled
with ``Constant propagation`` in the table below:
========================================== ==========================================================================================
Operation Name in TensorFlow Limitations
========================================== ==========================================================================================
Abs
Acosh
Add
AddV2
AddN
All
Any
ArgMax
ArgMin
Asinh
Assert Not needed for inference.
Assign Not needed for inference.
AssignSub Not needed for inference.
Atanh
AvgPool
AvgPoolV2 Supported only for constant-foldable ``kernel_size`` and strides inputs.
AvgPool3D
BatchMatMul
BatchMatMulV2
BatchToSpaceND
BiasAdd
BlockLSTM
Bucketize CPU only.
BroadcastTo
Cast
Ceil
ClipByValue
Concat
ConcatV2
Const
Conv2D
Conv2DBackpropInput
Conv3D
Conv3DBackpropInputV2
Cos
Cosh
CropAndResize ``method`` = ``bilinear`` only.
CTCGreedyDecoder Supported only with decoded indices output in a dense format.
CTCLoss Supported only with decoded indices input in a dense format.
CumSum
DepthToSpace
DepthwiseConv2dNative
Einsum Supported only with equation that does not contain repeated labels within a subscript.
Elu
EmptyTensorList Supported only when it is part of a sub-graph of the special form.
Enter Supported only when it is fused to the TensorIterator layer.
Equal
Erf
Exit Supported only when it is fused to the TensorIterator layer.
Exp
ExpandDims
ExperimentalSparseWeightedSum CPU only.
ExtractImagePatches
EuclideanNorm
FakeQuantWithMinMaxVars
FakeQuantWithMinMaxVarsPerChannel
FFT Supported only when it is part of a sub-graph of the special form.
FFT2D Supported only when it is part of a sub-graph of the special form.
FFT3D Supported only when it is part of a sub-graph of the special form.
FIFOQueueV2 Supported only when it is part of a sub-graph of the special form.
Fill
Floor
FloorDiv
FloorMod
FusedBatchNorm
FusedBatchNormV2
FusedBatchNormV3
Gather
GatherNd
GatherTree
GatherV2
Greater
GreaterEqual
Identity Not needed for shape inference.
IdentityN
IFFT Supported only when it is part of a sub-graph of the special form.
IFFT2D Supported only when it is part of a sub-graph of the special form.
IFFT3D Supported only when it is part of a sub-graph of the special form.
IteratorGetNext Supported only when it is part of a sub-graph of the special form.
LRN
LeakyRelu
Less
LessEqual
Log
Log1p
LogicalAnd
LogicalOr
LogicalNot
LogSoftmax
LookupTableInsertV2 Supported only when it is part of a sub-graph of the special form.
LoopCond Supported only when it is fused to the TensorIterator layer.
MatMul
Max
MaxPool
MaxPoolV2 Supported only for constant-foldable ``kernel_size`` and strides inputs.
MaxPool3D
Maximum
Mean
Merge Supported only when it is fused to the TensorIterator layer.
Min
Minimum
MirrorPad
Mod
Mul
Neg
NextIteration Supported only when it is fused to the TensorIterator layer.
NonMaxSuppressionV2
NonMaxSuppressionV3
NonMaxSuppressionV4
NonMaxSuppressionV5
NotEqual
NoOp
OneHot
Pack
Pad
PadV2
Placeholder
PlaceholderWithDefault
Prod
QueueDequeue Supported only when it is part of a sub-graph of the special form.
QueueDequeueUpToV2 Supported only when it is part of a sub-graph of the special form.
QueueDequeueV2 Supported only when it is part of a sub-graph of the special form.
RandomUniform
RandomUniformInt
Range
Rank
RealDiv
Reciprocal
Relu
Relu6
Reshape
ResizeBilinear
ResizeNearestNeighbor
ResourceGather
ReverseSequence
ReverseV2 Supported only when it can be converted to the ReverseSequence operation.
Roll
Round
Pow
Rsqrt
ScatterNd
Select
SelectV2
Shape
Sigmoid
Sin
Sinh
Size
Slice
Softmax
Softplus
Softsign
SpaceToBatchND
SpaceToDepth
SparseFillEmptyRows Supported only when it is part of a sub-graph of the special form.
SparseReshape Supported only when it is part of a sub-graph of the special form.
SparseSegmentSum Supported only when it is part of a sub-graph of the special form.
SparseSegmentMean Supported only when it is part of a sub-graph of the special form.
SparseToDense CPU only
Split
SplitV
Sqrt
Square
SquaredDifference
Squeeze Cases in which squeeze axis is not specified are not supported.
StatelessWhile
StopGradient Not needed for shape inference.
StridedSlice Supported only for constant-foldable ``begin``, ``end``, and ``strides`` inputs.
Sub
Sum
Swish
swish_f32
Switch Control flow propagation.
Tan
Tanh
TensorArrayGatherV3 Supported only when it is fused to the TensorIterator layer.
TensorArrayReadV3 Supported only when it is fused to the TensorIterator layer.
TensorArrayScatterV3 Supported only when it is fused to the TensorIterator layer.
TensorArraySizeV3 Supported only when it is fused to the TensorIterator layer.
TensorArrayV3 Supported only when it is fused to the TensorIterator layer.
TensorArrayWriteV3 Supported only when it is fused to the TensorIterator layer.
TensorListPushBack Supported only when it is part of a sub-graph of the special form.
Tile
TopKV2
Transpose
Unpack
Variable
VariableV2
Where Supported only when it is part of a sub-graph of the special form.
ZerosLike
========================================== ==========================================================================================
TensorFlow 2 Keras Supported Operations
##########################################
========================================== ==========================================================================================
Operation Name in TensorFlow 2 Keras Limitations
========================================== ==========================================================================================
ActivityRegularization
Add
AdditiveAttention
AlphaDropout
Attention
Average
AveragePooling1D
AveragePooling2D
AveragePooling3D
BatchNormalization
Bidirectional
Concatenate
Conv1D
Conv1DTranspose Not supported if ``dilation`` is not equal to 1.
Conv2D
Conv2DTranspose
Conv3D
Conv3DTranspose
Cropping1D
Cropping2D
Cropping3D
Dense
DenseFeatures Not supported for categorical and crossed features.
DepthwiseConv2D
Dot
Dropout
ELU
Embedding
Flatten
GRU
GRUCell
GaussianDropout
GaussianNoise
GlobalAveragePooling1D
GlobalAveragePooling2D
GlobalAveragePooling3D
GlobalMaxPool1D
GlobalMaxPool2D
GlobalMaxPool3D
LSTM
LSTMCell
Lambda
LayerNormalization
LeakyReLU
LocallyConnected1D
LocallyConnected2D
MaxPool1D
MaxPool2D
MaxPool3D
Maximum
Minimum
Multiply
PReLU
Permute
RNN Not supported for some custom cells.
ReLU
RepeatVector
Reshape
Roll
SeparableConv1D
SeparableConv2D
SimpleRNN
SimpleRNNCell
Softmax
SpatialDropout1D
SpatialDropout2D
SpatialDropout3D
StackedRNNCells
Subtract
ThresholdedReLU
TimeDistributed
UpSampling1D
UpSampling2D
UpSampling3D
ZeroPadding1D
ZeroPadding2D
ZeroPadding3D
========================================== ==========================================================================================
Kaldi Supported Layers
##########################################
========================================== ==========================================================================================
Symbol Name in Kaldi Limitations
========================================== ==========================================================================================
addshift
affinecomponent
affinecomponentpreconditionedonline
affinetransform
backproptruncationcomponent
batchnormcomponent
clipgradientcomponent Not needed for inference.
concat
convolutional1dcomponent
convolutionalcomponent
copy
dropoutmaskcomponent
elementwiseproductcomponent
fixedaffinecomponent
fixedbiascomponent
fixedscalecomponent
generaldropoutcomponent Not needed for inference.
linearcomponent
logsoftmaxcomponent
lstmnonlinearitycomponent
lstmprojected
lstmprojectedstreams
maxpoolingcomponent
naturalgradientaffinecomponent
naturalgradientperelementscalecomponent
noopcomponent Not needed for inference.
normalizecomponent
parallelcomponent
pnormcomponent
rectifiedlinearcomponent
rescale
sigmoid
sigmoidcomponent
softmax
softmaxComponent
specaugmenttimemaskcomponent Not needed for inference.
splicecomponent
tanhcomponent
tdnncomponent
timeheightconvolutioncomponent
========================================== ==========================================================================================
ONNX Supported Operators
##########################################
Standard ONNX Operators
++++++++++++++++++++++++++++++++++++++++++
========================================== ==========================================================================================
ONNX Operator Name
========================================== ==========================================================================================
Abs
Acos
Acosh
And
ArgMin
ArgMax
Asin
Asinh
Atan
ATen
Atanh
AveragePool
BatchNormalization
BitShift
Cast
CastLike
Ceil
Clip
Concat
Constant
ConstantOfShape
Conv
ConvInteger
ConvTranspose
Compress
Cos
Cosh
ConstantFill
CumSum
DepthToSpace
DequantizeLinear
Div
Dropout
Einsum
Elu
Equal
Erf
Exp
Expand
EyeLike
Flatten
Floor
Gather
GatherElements
GatherND
Gemm
GlobalAveragePool
GlobalLpPool
GlobalMaxPool
Greater
GRU
Hardmax
HardSigmoid
HardSwish
Identity
If
ImageScaler
InstanceNormalization
LeakyRelu
Less
Log
LogSoftmax
Loop
LpNormalization
LRN
LSTM
MatMulInteger
MatMul
MaxPool
Max
Mean
MeanVarianceNormalization
Min
Mod
Mul
Neg
NonMaxSuppression
NonZero
Not
Or
OneHot
Pad
Pow
PRelu
QLinearConv
QLinearMatMul
QuantizeLinear
Range
RandomNormal
RandomNormalLike
RandomUniform
RandomUniformLike
Reciprocal
ReduceLogSum
ReduceLogSumExp
ReduceL1
ReduceL2
ReduceMax
ReduceMean
ReduceMin
ReduceProd
ReduceSum
ReduceSumSquare
Relu
Reshape
Resize
ReverseSequence
RNN
RoiAlign
Round
ScatterElements
ScatterND
Selu
Shape
Shrink
Sigmoid
Sign
Sin
Sinh
Size
Slice
Softmax
Softplus
Softsign
SpaceToDepth
Split
Sqrt
Squeeze
Sub
Sum
Tan
Tanh
ThresholdedRelu
Tile
TopK
Transpose
Unsqueeze
Where
Xor
========================================== ==========================================================================================
Deprecated ONNX Operators (Supported)
++++++++++++++++++++++++++++++++++++++++++
========================================== ==========================================================================================
ONNX Operator Name
========================================== ==========================================================================================
Affine
Crop
Scatter
Upsample
========================================== ==========================================================================================
Operators From the org.openvinotoolkit Domain
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
===================================================== ===============================================================================
Custom ONNX Operator Name
===================================================== ===============================================================================
DeformableConv2D
DetectionOutput
ExperimentalDetectronDetectionOutput
ExperimentalDetectronGenerateProposalsSingleImage
ExperimentalDetectronGroupNorm
ExperimentalDetectronPriorGridGenerator
ExperimentalDetectronROIFeatureExtractor
ExperimentalDetectronTopKROIs
FakeQuantize
GroupNorm
Normalize
PriorBox
PriorBoxClustered
Swish
===================================================== ===============================================================================
Operators From the com.microsoft Domain
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
===================================================== ===============================================================================
Custom ONNX Operator Name
===================================================== ===============================================================================
Attention
BiasGelu
EmbedLayerNormalization
SkipLayerNormalization
===================================================== ===============================================================================
PaddlePaddle Supported Operators
###########################################################
paddlepaddle >= 2.1
========================================== ===============================================================================
Operator Name in PaddlePaddle Limitations
========================================== ===============================================================================
arg_max The ``int32`` output data_type is not supported.
adaptive_pool2d The ``NHWC`` data_layout is not supported.
assign
assign_value
batch_norm
bicubic_interp
bilinear_interp ``NCW``, ``NWC``, ``NHWC``, ``NCDHW``, ``NDHWC`` data_layout are not supported.
bmm
box_coder
cast
ceil
clip
concat
conditional_block
conv2d ``NHWC`` data_layout is not supported.
conv2d_transpose
cumsum
deformable_conv
depthwise_conv2d ``NHWC`` data_layout is not supported.
depthwise_conv2d_transpose
dropout
elementwise_add
elementwise_div
elementwise_floordiv
elementwise_max
elementwise_min
elementwise_mod
elementwise_mul
elementwise_pow
elementwise_sub
equal
exp
expand
fill_any_like
fill_constant
fill_constant_batch_size_like
flatten_contiguous_range
floor
gather
gather_nd
gelu
generate_proposals
greater_equal
greater_than
group_norm
hard_sigmoid
hard_swish
layer_norm
leaky_relu
less_than
linear_interp
log
logical_and
logical_not
logical_or
logical_xor
lookup_table
matmul
matrix_nms Only supports IE CPU plugin with "number of selected boxes" static shape (e.g.: ``min(min(num_boxes, nms_top_k) * num_classes_output, keep_top_k)``).
max_pool2d_with_index
meshgrid
multiclass_nms Only supports IE CPU plugin with "number of selected boxes" static shape (e.g.: ``min(min(num_boxes, nms_top_k) * num_classes_output, keep_top_k)``).
nearest_interp ``NCW``, ``NWC``, ``NHWC``, ``NCDHW``, ``NDHWC`` data_layout are not supported.
not_equal
p_norm
pad3d ``Circular`` mode is not supported.
pool2d ``NHWC`` data_layout is not supported.
pow
prior_box
range
reduce_max
reduce_mean
reduce_min
reduce_prod
reduce_sum
relu
reshape
reverse
rnn ``SimpleRNN`` and ``GRU`` modes are not supported.
roi_align
scale
select_input
shape
sigmoid
slice
softmax
softplus
split
sqrt
squeeze
stack
strided_slice
sum
swish
sync_batch_norm
tanh
tile
top_k
transpose
trilinear_interp
unsqueeze
where
where_index
while
yolo_box
========================================== ===============================================================================
TensorFlow Lite Supported Operators
###########################################################
========================================== ===============================================================================
Operator Name in TensorFlow Lite Limitations
========================================== ===============================================================================
ABS
ADD
ADD_N
ARG_MAX
ARG_MIN
AVERAGE_POOL_2D
BATCH_MATMUL
BATCH_TO_SPACE_ND
BROADCAST_ARGS
BROADCAST_TO
CAST
CEIL
COMPLEX_ABS Supported in a specific pattern with RFFT2D.
CONCATENATION
CONV_2D
COS
DEPTH_TO_SPACE
DEPTHWISE_CONV_2D
DEQUANTIZE
DIV
ELU
EQUAL
EXP
EXPAND_DIMS
FILL
FLOOR
FLOOR_DIV
FLOOR_MOD
FULLY_CONNECTED
GATHER
GATHER_ND
GREATER
GREATER_EQUAL
HARD_SWISH
L2_NORMALIZATION
LEAKY_RELU
LESS
LESS_EQUAL
LOG
LOG_SOFTMAX
LOGICAL_AND
LOGICAL_NOT
LOGICAL_OR
LOGISTIC
MATRIX_DIAG
MAX_POOL_2D
MAXIMUM
MEAN
MINIMUM
MIRROR_PAD
MUL
NEG
NOT_EQUAL
ONE_HOT
PACK
PAD
PADV2
POW
PRELU
QUANTIZE
RANGE
RANK
REDUCE_ALL
REDUCE_ANY
REDUCE_MAX
REDUCE_MIN
REDUCE_PROD
RELU
RELU6
RESHAPE
RESIZE_BILINEAR
RESIZE_NEAREST_NEIGHBOR
REVERSE_V2
RFFT2D Supported in a specific pattern with COMPLEX_ABS.
ROUND
RSQRT
SCATTER_ND
SEGMENT_SUM
SELECT
SELECT_V2
SHAPE
SIGN
SIN
SLICE
SOFTMAX
SPACE_TO_BATCH_ND
SPACE_TO_DEPTH
SPLIT
SPLIT_V
SQRT
SQUARE
SQUARED_DIFFERENCE
SQUEEZE
STRIDED_SLICE
SUB
SUM
TANH
TILE
TOPK_V2
TRANSPOSE
TRANSPOSE_CONV
UNIQUE
UNPACK
WHERE
ZEROS_LIKE
========================================== ===============================================================================
@endsphinxdirective

View File

@@ -2,7 +2,11 @@
@sphinxdirective
To convert a Caffe model, run Model Optimizer with the path to the input model ``.caffemodel`` file:
.. warning::
Note that OpenVINO support for Caffe is currently being deprecated and will be removed entirely in the future.
To convert a Caffe model, run ``mo`` with the path to the input model ``.caffemodel`` file:
.. code-block:: cpp
@@ -38,18 +42,18 @@ The following list provides the Caffe-specific parameters.
CLI Examples Using Caffe-Specific Parameters
++++++++++++++++++++++++++++++++++++++++++++
* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified `prototxt` file. This is needed when the name of the Caffe model and the `.prototxt` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input `model.caffemodel` file.
* Launching model conversion for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified `prototxt` file. This is needed when the name of the Caffe model and the `.prototxt` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input `model.caffemodel` file.
.. code-block:: cpp
mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt
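For reference, a minimal Python sketch of the same conversion, assuming the ``mo`` Python API mirrors the ``--input_proto`` CLI parameter:
.. code-block:: python

from openvino.tools.mo import convert_model

# input_proto is assumed to mirror the --input_proto CLI parameter
ov_model = convert_model("bvlc_alexnet.caffemodel", input_proto="bvlc_alexnet.prototxt")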
* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified `CustomLayersMapping` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer. Example of ``CustomLayersMapping.xml`` can be found in ``<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example``. The optional parameters without default values and not specified by the user in the ``.prototxt`` file are removed from the Intermediate Representation, and nested parameters are flattened:
* Launching model conversion for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified `CustomLayersMapping` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer. Example of ``CustomLayersMapping.xml`` can be found in ``<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example``. The optional parameters without default values and not specified by the user in the ``.prototxt`` file are removed from the Intermediate Representation, and nested parameters are flattened:
.. code-block:: cpp
mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params
This example shows a multi-input model with input layers: ``data``, ``rois``
.. code-block:: cpp
@@ -71,7 +75,7 @@ CLI Examples Using Caffe-Specific Parameters
}
}
* Launching the Model Optimizer for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the Model Optimizer. In particular, for data, set the shape to ``1,3,227,227``. For rois, set the shape to ``1,6,1,1``:
* Launching model conversion for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the model conversion API. In particular, for data, set the shape to ``1,3,227,227``. For rois, set the shape to ``1,6,1,1``:
.. code-block:: cpp
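# illustrative model path; the shapes follow the order of the inputs listed in --input
mo --input_model <INPUT_MODEL>.caffemodel --input data,rois --input_shape [1,3,227,227],[1,6,1,1]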
@@ -80,26 +84,26 @@ CLI Examples Using Caffe-Specific Parameters
Custom Layer Definition
########################
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
For the definition of custom layers, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` page.
Supported Caffe Layers
#######################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Frequently Asked Questions (FAQ)
################################
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
Model conversion API provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Summary
#######
In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe models.
* Basic information about how model conversion works with Caffe models.
* Which Caffe models are supported.
* How to convert a trained Caffe model by using Model Optimizer with both framework-agnostic and Caffe-specific command-line options.
* How to convert a trained Caffe model by using model conversion API with both framework-agnostic and Caffe-specific command-line parameters.
Additional Resources
####################

View File

@@ -2,17 +2,22 @@
@sphinxdirective
.. note::
Model Optimizer supports the `nnet1 <http://kaldi-asr.org/doc/dnn1.html>`__ and `nnet2 <http://kaldi-asr.org/doc/dnn2.html>`__ formats of Kaldi models. The support of the `nnet3 <http://kaldi-asr.org/doc/dnn3.html>`__ format is limited.
To convert a Kaldi model, run Model Optimizer with the path to the input model ``.nnet`` or ``.mdl`` file:
.. warning::
Note that OpenVINO support for Kaldi is currently being deprecated and will be removed entirely in the future.
.. note::
Model conversion API supports the `nnet1 <http://kaldi-asr.org/doc/dnn1.html>`__ and `nnet2 <http://kaldi-asr.org/doc/dnn2.html>`__ formats of Kaldi models. The support of the `nnet3 <http://kaldi-asr.org/doc/dnn3.html>`__ format is limited.
To convert a Kaldi model, run model conversion with the path to the input model ``.nnet`` or ``.mdl`` file:
.. code-block:: cpp
mo --input_model <INPUT_MODEL>.nnet
Using Kaldi-Specific Conversion Parameters
Using Kaldi-Specific Conversion Parameters
##########################################
The following list provides the Kaldi-specific parameters.
@@ -28,51 +33,51 @@ The following list provides the Kaldi-specific parameters.
Examples of CLI Commands
########################
* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the specified ``.nnet`` file:
* To launch model conversion for the ``wsj_dnn5b_smbr`` model with the specified ``.nnet`` file:
.. code-block:: cpp
mo --input_model wsj_dnn5b_smbr.nnet
* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the existing file that contains counts for the last layer with biases:
* To launch model conversion for the ``wsj_dnn5b_smbr`` model with the existing file that contains counts for the last layer with biases:
.. code-block:: cpp
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts
* The Model Optimizer normalizes counts in the following way:
* The model conversion API normalizes counts in the following way:
.. math::
S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}
.. math::
C_{i} = \log(S \cdot C_{i})
where :math:`C` - the counts array, :math:`C_{i} - i^{th}` element of the counts array, :math:`|C|` - number of elements in the counts array;
where :math:`C` is the counts array, :math:`C_{i}` is the :math:`i`-th element of the counts array, and :math:`|C|` is the number of elements in the counts array;
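For illustration (with arbitrarily chosen numbers): if :math:`C = [1, 3]`, then :math:`S = 1/4`, and the normalized values are :math:`\log(0.25) \approx -1.39` and :math:`\log(0.75) \approx -0.29`.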
* The normalized counts are subtracted from the biases of the last or next to last layer (if the last layer is SoftMax).
.. note:: Model Optimizer will show a warning if a model contains values of counts and the `--counts` option is not used.
* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the `--remove_output_softmax` flag:
.. note:: Model conversion API will show a warning if a model contains values of counts and the ``counts`` option is not used.
.. code-block:: cpp
* If you want to remove the last SoftMax layer in the topology, launch the model conversion with the ``remove_output_softmax`` flag:
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax
.. code-block:: cpp
The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax
.. note:: Model Optimizer can remove SoftMax layer only if the topology has one output.
Model conversion API finds the last layer of the topology and removes this layer only if it is a SoftMax layer.
* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``--output`` option.
.. note:: Model conversion can remove SoftMax layer only if the topology has one output.
* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``output`` option.
Supported Kaldi Layers
######################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Additional Resources
####################

View File

@@ -2,14 +2,18 @@
@sphinxdirective
To convert an MXNet model, run Model Optimizer with the path to the ``.params`` file of the input model:
.. warning::
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.
To convert an MXNet model, run model conversion with the path to the ``.params`` file of the input model:
.. code-block:: sh
mo --input_model model-file-0000.params
Using MXNet-Specific Conversion Parameters
Using MXNet-Specific Conversion Parameters
##########################################
The following list provides the MXNet-specific parameters.
@@ -35,33 +39,33 @@ The following list provides the MXNet-specific parameters.
Use only if your topology is one of the SSD GluonCV topologies.
.. note::
.. note::
By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest version of Apache MXNet. However, the Apache MXNet loader is required for models trained with lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the ``--legacy_mxnet_model`` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually recompile Apache MXNet with custom layers and install it in your environment.
By default, model conversion API does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest version of Apache MXNet. However, the Apache MXNet loader is required for models trained with a lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the ``--legacy_mxnet_model`` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually recompile Apache MXNet with custom layers and install it in your environment.
Custom Layer Definition
#######################
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
For the definition of custom layers, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` page.
Supported MXNet Layers
#######################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations>` page.
Frequently Asked Questions (FAQ)
################################
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
Model conversion API provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Summary
########
In this document, you learned:
* Basic information about how Model Optimizer works with MXNet models.
* Basic information about how model conversion API works with MXNet models.
* Which MXNet models are supported.
* How to convert a trained MXNet model by using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options.
* How to convert a trained MXNet model by using model conversion API with both framework-agnostic and MXNet-specific command-line parameters.
Additional Resources
####################
@@ -72,4 +76,3 @@ See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_conv
* :doc:`Convert MXNet Style Transfer Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`
@endsphinxdirective

View File

@@ -10,11 +10,11 @@ Introduction to ONNX
Converting an ONNX Model
########################
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format. To use model conversion API, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX model, run Model Optimizer with the path to the input model ``.onnx`` file:
To convert an ONNX model, run model conversion with the path to the input model ``.onnx`` file:
.. code-block:: sh
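mo --input_model <INPUT_MODEL>.onnx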
@@ -25,7 +25,7 @@ There are no ONNX specific parameters, so only framework-agnostic parameters are
Supported ONNX Layers
#####################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Additional Resources
####################

View File

@@ -14,10 +14,88 @@ To convert a PaddlePaddle model, use the ``mo`` script and specify the path to t
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
Converting PaddlePaddle Model From Memory Using Python API
##########################################################
Model conversion API supports passing PaddlePaddle models directly from memory.
The following PaddlePaddle model formats are supported:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
Converting certain PaddlePaddle models may require setting ``example_input`` or ``example_output``. The examples below show how to perform such a conversion.
* Example of converting ``paddle.hapi.model.Model`` format model:
.. code-block:: python
import paddle
from openvino.tools.mo import convert_model
# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
y = paddle.static.InputSpec([1,1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)
# convert to OpenVINO IR format
ov_model = convert_model(model)
# optional: serialize OpenVINO IR to *.xml & *.bin
from openvino.runtime import serialize
serialize(ov_model, "ov_model.xml", "ov_model.bin")
* Example of converting ``paddle.fluid.dygraph.layers.Layer`` format model:
``example_input`` is required, while ``example_output`` is optional. Both accept the following format:
``list`` with tensors (``paddle.Tensor``) or InputSpecs (``paddle.static.input.InputSpec``)
.. code-block:: python
import paddle
from openvino.tools.mo import convert_model
# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1,3,224,224])
# convert to OpenVINO IR format
ov_model = convert_model(model, example_input=[x])
* Example of converting ``paddle.fluid.executor.Executor`` format model:
``example_input`` and ``example_output`` are required. They accept the following format:
``list`` or ``tuple`` with variables (``paddle.static.data``)
.. code-block:: python
import paddle
from openvino.tools.mo import convert_model
paddle.enable_static()
# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1,3,224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# convert to OpenVINO IR format
ov_model = convert_model(exe, example_input=[x], example_output=[y])
Supported PaddlePaddle Layers
#############################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Officially Supported PaddlePaddle Models
########################################
@@ -77,7 +155,7 @@ The following PaddlePaddle models have been officially validated and confirmed t
Frequently Asked Questions (FAQ)
################################
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
When model conversion API is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Additional Resources
####################

View File

@@ -2,14 +2,61 @@
@sphinxdirective
The PyTorch framework is supported through export to the ONNX format. In order to optimize and deploy a model that was trained with it:
This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO IR format using Model Optimizer.
Model Optimizer Python API allows the conversion of PyTorch models using the ``convert_model()`` method.
1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__.
2. :doc:`Convert the ONNX model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` to produce an optimized :doc:`Intermediate Representation <openvino_docs_MO_DG_IR_and_opsets>` of the model based on the trained network topology, weights, and biases values.
(Experimental) Converting a PyTorch model with PyTorch Frontend
###############################################################
Example of PyTorch model conversion:
.. code-block:: python
import torchvision
import torch
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model)
The following PyTorch model formats are supported:
* ``torch.nn.Module``
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
Converting certain PyTorch models may require model tracing, which needs ``input_shape`` or ``example_input`` parameters to be set.
``example_input`` is used as example input for model tracing.
``input_shape`` is used for constructing a float zero-filled torch.Tensor for model tracing.
Example of using ``example_input``:
.. code-block:: python
import torchvision
import torch
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(pretrained=True)
ov_model = convert_model(model, example_input=torch.zeros(1, 3, 100, 100))
``example_input`` accepts the following formats:
* ``openvino.runtime.Tensor``
* ``torch.Tensor``
* ``np.ndarray``
* ``list`` or ``tuple`` with tensors (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
* ``dictionary`` where key is the input name, value is the tensor (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
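For instance, a minimal sketch of the dictionary format (the module and the input name ``x`` are hypothetical):
.. code-block:: python

import torch
from openvino.tools.mo import convert_model

# hypothetical module; the dictionary key must match the forward() argument name
class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

ov_model = convert_model(AddOne(), example_input={"x": torch.zeros(2, 3)})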
Exporting a PyTorch Model to ONNX Format
########################################
Currently, the most robust method of converting PyTorch models is exporting a PyTorch model to ONNX and then converting it to IR. To convert and deploy a PyTorch model, follow these steps:
1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__.
2. :doc:`Convert the ONNX model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` to produce an optimized :doc:`Intermediate Representation <openvino_docs_MO_DG_IR_and_opsets>` of the model based on the trained network topology, weights, and biases values.
PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to
evaluate or test the model is usually provided together with the model and can be used for its initialization and export.
The export to ONNX is crucial for this process, but it is covered by the PyTorch framework itself and therefore will not be covered here in detail.
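As a minimal sketch (the file name is illustrative), the export step typically looks like this:
.. code-block:: python

import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.eval()
# a dummy input drives the tracing; its shape must match the model input
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx")

The resulting ``resnet50.onnx`` file can then be converted with ``mo --input_model resnet50.onnx``.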

View File

@@ -2,39 +2,39 @@
@sphinxdirective
This page provides general instructions on how to convert a model from a TensorFlow format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
To use Model Optimizer, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
To use model conversion API, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
Converting TensorFlow 1 Models
Converting TensorFlow 1 Models
###############################
Converting Frozen Model Format
Converting Frozen Model Format
+++++++++++++++++++++++++++++++
To convert a TensorFlow model, use the ``mo`` script and specify a path to the input model ``.pb`` file:
.. code-block:: cpp
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pb
Converting Non-Frozen Model Formats
Converting Non-Frozen Model Formats
+++++++++++++++++++++++++++++++++++
There are three ways to store non-frozen TensorFlow models and convert them by Model Optimizer:
There are three ways to store non-frozen TensorFlow models and convert them by model conversion API:
1. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section.
To convert the model with the inference graph in ``.pb`` format, run the `mo` script with a path to the checkpoint file:
.. code-block:: cpp
.. code-block:: sh
mo --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
To convert the model with the inference graph in ``.pbtxt`` format, run the ``mo`` script with a path to the checkpoint file:
.. code-block:: cpp
.. code-block:: sh
mo --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text
@@ -43,7 +43,7 @@ To convert the model with the inference graph in ``.pbtxt`` format, run the ``mo
``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional).
To convert such TensorFlow model, run the `mo` script with a path to the MetaGraph ``.meta`` file:
.. code-block:: cpp
.. code-block:: sh
mo --input_meta_graph <INPUT_META_GRAPH>.meta
@@ -52,7 +52,7 @@ To convert such TensorFlow model, run the `mo` script with a path to the MetaGra
and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model#components>`__ file in the TensorFlow repository.
To convert such TensorFlow model, run the ``mo`` script with a path to the SavedModel directory:
.. code-block:: cpp
.. code-block:: sh
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
@@ -62,14 +62,14 @@ If a model contains operations currently unsupported by OpenVINO, prune these op
To determine custom input nodes, display a graph of the model in TensorBoard. To generate TensorBoard logs of the graph, use the ``--tensorboard_logdir`` option.
TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph of TensorFlow 2.x SavedModel format.
Freezing Custom Models in Python
Freezing Custom Models in Python
++++++++++++++++++++++++++++++++
When a network is defined in Python code, you have to create an inference graph file. Graphs are usually built in a form
that allows model training. That means all trainable parameters are represented as variables in the graph.
To be able to use such graph with Model Optimizer, it should be frozen and dumped to a file with the following code:
To be able to use such graph with model conversion API, it should be frozen and dumped to a file with the following code:
.. code-block:: python
.. code-block:: python
import tensorflow as tf
from tensorflow.python.framework import graph_io
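# sketch of the elided body: freeze an active tf.compat.v1.Session "sess";
# the session and the output node name are assumed from the surrounding text
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, "./", "inference_graph.pb", as_text=False)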
@@ -84,7 +84,7 @@ Where:
* ``inference_graph.pb`` is the name of the generated inference graph file.
* ``as_text`` specifies whether the generated file should be in human readable text format or binary.
Converting TensorFlow 2 Models
Converting TensorFlow 2 Models
###############################
To convert TensorFlow 2 models, ensure that ``openvino-dev[tensorflow2]`` is installed via ``pip``.
@@ -97,7 +97,7 @@ SavedModel Format
A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets``.
To convert such a model, run the `mo` script with a path to the SavedModel directory:
.. code-block:: cpp
.. code-block:: sh
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
@@ -108,12 +108,27 @@ If a model contains operations currently unsupported by OpenVINO™,
prune these operations by explicit specification of input nodes using the ``--input`` or ``--output``
options. To determine custom input nodes, visualize a model graph in the TensorBoard.
To generate TensorBoard logs of the graph, use the Model Optimizer ``--tensorboard_logdir`` command-line
option.
TensorFlow 2 SavedModel format has a specific graph structure due to eager execution. In case of
pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph.
Since the 2023.0 release, direct pruning of models in SavedModel format is not supported.
It is essential to freeze the model before pruning. Use the following code snippet for model freezing:
.. code-block:: python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
saved_model_dir = "./saved_model"
imported = tf.saved_model.load(saved_model_dir)
# retrieve the concrete function and freeze
concrete_func = imported.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
frozen_func = convert_variables_to_constants_v2(concrete_func,
lower_control_flow=False,
aggressive_inlining=True)
# retrieve GraphDef and save it into .pb format
graph_def = frozen_func.graph.as_graph_def(add_shapes=True)
tf.io.write_graph(graph_def, '.', 'model.pb', as_text=False)
Keras H5
++++++++
@@ -140,53 +155,150 @@ For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.p
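A minimal sketch of resaving such a model (assuming ``CustomLayer`` is importable from ``custom_layer.py``):
.. code-block:: python

import tensorflow as tf
from custom_layer import CustomLayer  # assumed user-provided module

# register the custom layer so Keras can deserialize the H5 file, then resave as SavedModel
model = tf.keras.models.load_model("model.h5", custom_objects={"CustomLayer": CustomLayer})
tf.saved_model.save(model, "model")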
Then follow the above instructions for the SavedModel format.
.. note::
.. note::
Do not use other hacks to resave TensorFlow 2 models into TensorFlow 1 formats.
Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters
##########################################################################
* Launching the Model Optimizer for Inception V1 frozen model when model file is a plain text protobuf:
* Launching model conversion for Inception V1 frozen model when model file is a plain text protobuf:
.. code-block:: cpp
.. code-block:: sh
mo --input_model inception_v1.pbtxt --input_model_is_text -b 1
mo --input_model inception_v1.pbtxt --input_model_is_text -b 1
* Launching the Model Optimizer for Inception V1 frozen model and dump information about the graph to TensorBoard log dir ``/tmp/log_dir``
* Launching model conversion for Inception V1 frozen model and dump information about the graph to TensorBoard log dir ``/tmp/log_dir``
.. code-block:: cpp
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir
mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir
* Launching the Model Optimizer for BERT model in the SavedModel format, with three inputs. Specify explicitly the input shapes where the batch size and the sequence length equal 2 and 30 respectively.
* Launching model conversion for BERT model in the SavedModel format, with three inputs. Specify explicitly the input shapes where the batch size and the sequence length equal 2 and 30 respectively.
.. code-block:: cpp
.. code-block:: sh
mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
Conversion of TensorFlow models from memory using Python API
############################################################
Model conversion API supports passing TensorFlow/TensorFlow 2 models directly from memory. The following model formats are supported:
* ``tf.keras.Model``
.. code-block:: python
import tensorflow as tf
from openvino.tools.mo import convert_model

model = tf.keras.applications.ResNet50(weights="imagenet")
ov_model = convert_model(model)
* ``tf.keras.layers.Layer``. Requires setting the ``input_shape`` parameter.
.. code-block:: python
import tensorflow_hub as hub
model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
ov_model = convert_model(model, input_shape=[-1, 224, 224, 3])
* ``tf.Module``. Requires setting the ``input_shape`` parameter.
.. code-block:: python
class MyModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.variable1 = tf.Variable(5.0, name="var1")
self.variable2 = tf.Variable(1.0, name="var2")
def __call__(self, x):
return self.variable1 * x + self.variable2
model = MyModule(name="simple_module")
ov_model = convert_model(model, input_shape=[-1])
* ``tf.compat.v1.Graph``
.. code-block:: python
with tf.compat.v1.Session() as sess:
inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
output = tf.nn.relu(inp1 + inp2, name='Relu')
tf.compat.v1.global_variables_initializer()
model = sess.graph
ov_model = convert_model(model)
* ``tf.compat.v1.GraphDef``
.. code-block:: python
with tf.compat.v1.Session() as sess:
inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
output = tf.nn.relu(inp1 + inp2, name='Relu')
tf.compat.v1.global_variables_initializer()
model = sess.graph_def
ov_model = convert_model(model)
* ``tf.function``
.. code-block:: python
@tf.function(
input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32),
tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)])
def func(x, y):
return tf.nn.sigmoid(tf.nn.relu(x + y))
ov_model = convert_model(func)
* ``tf.compat.v1.Session``
.. code-block:: python
with tf.compat.v1.Session() as sess:
inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
output = tf.nn.relu(inp1 + inp2, name='Relu')
tf.compat.v1.global_variables_initializer()
ov_model = convert_model(sess)
* ``tf.train.Checkpoint``
.. code-block:: python
model = tf.keras.Model(...)
checkpoint = tf.train.Checkpoint(model)
save_path = checkpoint.save(save_directory)
# ...
checkpoint.restore(save_path)
ov_model = convert_model(checkpoint)
Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Frequently Asked Questions (FAQ)
################################
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`. The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
The model conversion API provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`. The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Summary
#######
In this document, you learned:
* Basic information about how the Model Optimizer works with TensorFlow models.
* Basic information about how the model conversion API works with TensorFlow models.
* Which TensorFlow models are supported.
* How to freeze a TensorFlow model.
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options.
* How to convert a trained TensorFlow model using model conversion API with both framework-agnostic and TensorFlow-specific command-line parameters.
Additional Resources
####################

View File

@@ -13,7 +13,7 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t
Supported TensorFlow Lite Layers
###################################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Supported TensorFlow Lite Models
###################################

View File

@@ -36,9 +36,18 @@
openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model
This section provides a set of tutorials that demonstrate conversion methods for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models, that unnecessarily cover your case.
Before studying the tutorials, try to convert the model out-of-the-box by specifying only the ``--input_model`` parameter in the command line.
You will find a collection of :doc:`Python tutorials <tutorials>` written for running on Jupyter notebooks that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for optimized deep learning inference.
This section provides a set of tutorials that demonstrate conversion methods for specific
TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models, which does not necessarily cover your case.
Before studying the tutorials, try to convert the model out-of-the-box by specifying only the
``--input_model`` parameter in the command line.
.. warning::
Note that OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently being deprecated and will be removed entirely in the future.
You will find a collection of :doc:`Python tutorials <tutorials>` written for running on Jupyter notebooks
that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for
optimized deep learning inference.
@endsphinxdirective

View File

@@ -1,98 +1,145 @@
# Setting Input Shapes {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
With Model Optimizer you can increase your model's efficiency by providing an additional shape definition, with these two parameters: `--input_shape` and `--static_shape`.
With model conversion API you can increase your model's efficiency by providing an additional shape definition, with these two parameters: `input_shape` and `static_shape`.
@sphinxdirective
.. _when_to_specify_input_shapes:
Specifying --input_shape Command-line Parameter
###############################################
Specifying input_shape parameter
################################
Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions.
``convert_model()`` supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of data is not going to change from one inference request to another,
it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs.
Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption.
To set up static shapes, Model Optimizer provides the ``--input_shape`` parameter.
To set up static shapes, model conversion API provides the ``input_shape`` parameter.
For more information on input shapes under runtime, refer to the :doc:`Changing input shapes <openvino_docs_OV_UG_ShapeInference>` guide.
To learn more about dynamic shapes in runtime, refer to the :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>` guide.
The OpenVINO Runtime API may present certain limitations in inferring models with undefined dimensions on some hardware. See the :doc:`Features support matrix <openvino_docs_OV_UG_Working_with_devices>` for reference.
In this case, the ``--input_shape`` parameter and the :doc:`reshape method <openvino_docs_OV_UG_ShapeInference>` can help to resolve undefined dimensions.
In this case, the ``input_shape`` parameter and the :doc:`reshape method <openvino_docs_OV_UG_ShapeInference>` can help to resolve undefined dimensions.
Sometimes, Model Optimizer is unable to convert models out-of-the-box (only the ``--input_model`` parameter is specified).
Such a problem can relate to models with inputs of undefined ranks and to cases of cutting off parts of a model.
In this case, input shapes must be specified explicitly with the ``--input_shape`` parameter.
For example, run Model Optimizer for the TensorFlow MobileNet model with the single input
For example, run model conversion for the TensorFlow MobileNet model with the single input
and specify the input shape of ``[2,300,300,3]``:
.. code-block:: sh
.. tab-set::
mo --input_model MobileNet.pb --input_shape [2,300,300,3]
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("MobileNet.pb", input_shape=[2,300,300,3])
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model MobileNet.pb --input_shape [2,300,300,3]
If a model has multiple inputs, ``--input_shape`` must be used in conjunction with ``--input`` parameter.
The ``--input`` parameter contains a list of input names, for which shapes in the same order are defined via ``--input_shape``.
For example, launch Model Optimizer for the ONNX OCR model with a pair of inputs ``data`` and ``seq_len``
If a model has multiple inputs, ``input_shape`` must be used in conjunction with ``input`` parameter.
The ``input`` parameter contains a list of input names, for which shapes in the same order are defined via ``input_shape``.
For example, launch model conversion for the ONNX OCR model with a pair of inputs ``data`` and ``seq_len``
and specify shapes ``[3,150,200,1]`` and ``[3]`` for them:
.. code-block:: sh
.. tab-set::
mo --input_model ocr.onnx --input data,seq_len --input_shape [3,150,200,1],[3]
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[3,150,200,1],[3]])
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [3,150,200,1],[3]
Alternatively, specify input shapes, using the ``--input`` parameter as follows:
Alternatively, specify input shapes, using the ``input`` parameter as follows:
.. code-block:: sh
.. tab-set::
mo --input_model ocr.onnx --input data[3,150,200,1],seq_len[3]
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("ocr.onnx", input=[("data",[3,150,200,1]),("seq_len",[3])])
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model ocr.onnx --input data[3,150,200,1],seq_len[3]
The ``--input_shape`` parameter allows overriding original input shapes to ones compatible with a given model.
The ``input_shape`` parameter allows overriding original input shapes to ones compatible with a given model.
Dynamic shapes, i.e. shapes with dynamic dimensions, can be replaced in the original model with static shapes for the converted model, and vice versa.
The dynamic dimension can be marked in Model Optimizer command-line as ``-1``* or *``?``.
For example, launch Model Optimizer for the ONNX OCR model and specify dynamic batch dimension for inputs:
The dynamic dimension can be marked in model conversion API parameter as ``-1`` or ``?``.
For example, launch model conversion for the ONNX OCR model and specify dynamic batch dimension for inputs:
.. code-block:: sh
.. tab-set::
mo --input_model ocr.onnx --input data,seq_len --input_shape [-1,150,200,1],[-1]
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[-1,150,200,1],[-1]]
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [-1,150,200,1],[-1]
To optimize memory consumption for models with undefined dimensions in run-time, Model Optimizer provides the capability to define boundaries of dimensions.
To optimize memory consumption for models with undefined dimensions in run-time, model conversion API provides the capability to define boundaries of dimensions.
The boundaries of an undefined dimension can be specified with an ellipsis (``..``).
For example, launch Model Optimizer for the ONNX OCR model and specify a boundary for the batch dimension:
For example, launch model conversion for the ONNX OCR model and specify a boundary for the batch dimension:
.. code-block:: sh
.. tab-set::
mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1..3]
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
from openvino.runtime import Dimension
ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[Dimension(1,3),150,200,1],[Dimension(1,3)]]
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1..3]
In practice, some models are not ready for a change of input shapes.
In this case, a new input shape cannot be set via Model Optimizer.
For more information about shape follow the :doc:`inference troubleshooting <troubleshooting_reshape_errors>`
In this case, a new input shape cannot be set via model conversion API.
For more information about shapes, follow the :doc:`inference troubleshooting <troubleshooting_reshape_errors>`
and :ref:`ways to relax shape inference flow <how-to-fix-non-reshape-able-model>` guides.
Specifying --static_shape Command-line Parameter
################################################
Model Optimizer provides the ``--static_shape`` parameter that allows evaluating shapes of all operations in the model for fixed input shapes
and folding shape computing sub-graphs into constants. The resulting IR may be more compact in size and the loading time for such IR may decrease.
However, the resulting IR will not be reshape-able with the help of the :doc:`reshape method <openvino_docs_OV_UG_ShapeInference>` from OpenVINO Runtime API.
It is worth noting that the ``--input_shape`` parameter does not affect the reshapeability of the model.
For example, launch Model Optimizer for the ONNX OCR model using ``--static_shape``:
.. code-block:: sh
mo --input_model ocr.onnx --input data[3,150,200,1],seq_len[3] --static_shape
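For comparison, a Python sketch, assuming the ``mo`` Python API also exposes ``static_shape`` as a boolean parameter:
.. code-block:: python

from openvino.tools.mo import convert_model

# static_shape is assumed to mirror the --static_shape CLI flag
ov_model = convert_model("ocr.onnx", input=[("data",[3,150,200,1]),("seq_len",[3])], static_shape=True)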
Additional Resources
####################
* :doc:`Introduction to converting models with Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
* :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
* :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`
@endsphinxdirective
@endsphinxdirective

View File

@@ -2,7 +2,7 @@
@sphinxdirective
Sometimes, it is necessary to remove parts of a model when converting it to OpenVINO IR. This chapter describes how to do it, using Model Optimizer command-line options. Model cutting applies mostly to TensorFlow models, which is why TensorFlow will be used in this chapter's examples, but it may be also useful for other frameworks.
Sometimes, it is necessary to remove parts of a model when converting it to OpenVINO IR. This chapter describes how to do it, using model conversion API parameters. Model cutting applies mostly to TensorFlow models, which is why TensorFlow will be used in this chapter's examples, but it may be also useful for other frameworks.
Purpose of Model Cutting
########################
@@ -12,25 +12,29 @@ The following examples are the situations when model cutting is useful or even r
* A model has pre- or post-processing parts that cannot be translated to existing OpenVINO operations.
* A model has a training part that is convenient to be kept in the model but not used during inference.
* A model is too complex to be converted at once, because it contains many unsupported operations that cannot be easily implemented as custom layers.
* A problem occurs with model conversion in Model Optimizer or inference in OpenVINO™ Runtime. To identify the issue, limit the conversion scope by iterative search for problematic areas in the model.
* A problem occurs with model conversion or inference in OpenVINO™ Runtime. To identify the issue, limit the conversion scope by iterative search for problematic areas in the model.
* A single custom layer or a combination of custom layers is isolated for debugging purposes.
Command-Line Options
####################
.. note::
Model Optimizer provides command line options ``--input`` and ``--output`` to specify new entry and exit nodes, while ignoring the rest of the model:
Internally, when you run model conversion API, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, model conversion API classifies them as custom.
* ``--input`` option accepts a comma-separated list of layer names of the input model that should be treated as new entry points to the model.
* ``--output`` option accepts a comma-separated list of layer names of the input model that should be treated as new exit points from the model.
Model conversion API parameters
###############################
The ``--input`` option is required for cases unrelated to model cutting. For example, when the model contains several inputs and ``--input_shape`` or ``--mean_values`` options are used, the ``--input`` option specifies the order of input nodes for correct mapping between multiple items provided in ``--input_shape`` and ``--mean_values`` and the inputs in the model.
Model conversion API provides command line options ``input`` and ``output`` to specify new entry and exit nodes, while ignoring the rest of the model:
Model cutting is illustrated with the Inception V1 model, found in the ``models/research/slim`` repository. To proceed with this chapter, make sure you do the necessary steps to :doc:`prepare the model for Model Optimizer <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`.
* ``input`` option accepts a list of layer names of the input model that should be treated as new entry points to the model. See the full list of accepted types for input on the :doc:`Model Conversion Python API <openvino_docs_MO_DG_Python_API>` page.
* ``output`` option accepts a list of layer names of the input model that should be treated as new exit points from the model.
Default Behavior without --input and --output
#############################################
The ``input`` option is required for cases unrelated to model cutting. For example, when the model contains several inputs and ``input_shape`` or ``mean_values`` options are used, the ``input`` option specifies the order of input nodes for correct mapping between multiple items provided in ``input_shape`` and ``mean_values`` and the inputs in the model.
The input model is converted as a whole if neither ``--input`` nor ``--output`` command line options are used. All ``Placeholder`` operations in a TensorFlow graph are automatically identified as entry points. The ``Input`` layer type is generated for each of them. All nodes that have no consumers are automatically identified as exit points.
Model cutting is illustrated with the Inception V1 model, found in the ``models/research/slim`` repository. To proceed with this chapter, make sure you do the necessary steps to :doc:`prepare the model for model conversion <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`.
Default Behavior without input and output
#########################################
The input model is converted as a whole if neither ``input`` nor ``output`` command line options are used. All ``Placeholder`` operations in a TensorFlow graph are automatically identified as entry points. The ``Input`` layer type is generated for each of them. All nodes that have no consumers are automatically identified as exit points.
For Inception_V1, there is one ``Placeholder``: input. If the model is viewed in TensorBoard, the input operation is easy to find:
@@ -44,16 +48,28 @@ In TensorBoard, along with some of its predecessors, it looks as follows:
.. image:: _static/images/inception_v1_std_output.svg
:alt: TensorBoard with predecessors
Convert this model and put the results in a writable output directory:
Convert this model to ``ov.Model``:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1)
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
(The other examples on this page assume that you first go to the ``model_optimizer`` directory and add the ``--output_dir`` argument with a directory where you have read/write permissions.)
The output ``.xml`` file with an Intermediate Representation contains the ``Input`` layer among other layers in the model:
``ov.Model`` can be serialized to an Intermediate Representation with the ``ov.serialize()`` method, which can then be used for exploring the model structure.
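A minimal sketch of such serialization, assuming the 2023.0-style ``openvino.runtime`` API:
.. code-block:: python
from openvino.runtime import serialize
# Save the converted model to IR (.xml + .bin) files for inspection.
serialize(ov_model, "inception_v1.xml", "inception_v1.bin")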
In IR, the structure of a model has the following layers:
.. code-block:: xml
@@ -94,13 +110,28 @@ The last layer in the model is ``InceptionV1/Logits/Predictions/Reshape_1``, whi
</layer>
Due to automatic identification of inputs and outputs, providing the ``--input`` and ``--output`` options to convert the whole model is not required. The following commands are equivalent for the Inception V1 model:
Due to automatic identification of inputs and outputs, providing the ``input`` and ``output`` options to convert the whole model is not required. The following commands are equivalent for the Inception V1 model:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
mo --input_model inception_v1.pb -b 1 --input input --output InceptionV1/Logits/Predictions/Reshape_1 --output_dir <OUTPUT_MODEL_DIR>
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1)
ov_model = convert_model("inception_v1.pb", batch=1, input="input", output="InceptionV1/Logits/Predictions/Reshape_1")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
mo --input_model inception_v1.pb -b 1 --input input --output InceptionV1/Logits/Predictions/Reshape_1 --output_dir <OUTPUT_MODEL_DIR>
The Intermediate Representations are identical for both conversions. The same is true if the model has multiple inputs and/or outputs.
@@ -120,9 +151,22 @@ If you want to cut your model at the end, you have the following options:
1. The following command cuts off the rest of the model after the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu``, making this node the last in the model:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation has three layers:
@@ -166,13 +210,26 @@ If you want to cut your model at the end, you have the following options:
</net>
As shown in the TensorBoard picture, the original model has more nodes than its Intermediate Representation. Model Optimizer has fused batch normalization ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm`` with convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``, which is why it is not present in the final model. This is not an effect of the ``--output`` option, it is the typical behavior of Model Optimizer for batch normalizations and convolutions. The effect of the ``--output`` is that the ``ReLU`` layer becomes the last one in the converted model.
As shown in the TensorBoard picture, the original model has more nodes than its Intermediate Representation. Model conversion, using ``convert_model()``, consists of a set of model transformations, including fusing of batch normalization ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm`` with convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``, which is why it is not present in the final model. This is not an effect of the ``output`` option, it is the typical behavior of model conversion API for batch normalizations and convolutions. The effect of the ``output`` is that the ``ReLU`` layer becomes the last one in the converted model.
2. The following command cuts the edge that comes from 0 output port of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` and the rest of the model, making this node the last one in the model:
.. code-block::
.. tab-set::
mo --input_model inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0 --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0 --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation has three layers, which are the same as in the previous case:
@@ -220,9 +277,22 @@ If you want to cut your model at the end, you have the following options:
3. The following command cuts the edge that comes to 0 input port of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` and the rest of the model including ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu``, deleting this node and making the previous node ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D`` the last in the model:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --output=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, output="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --output=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation has two layers, which are the same as the first two layers in the previous case:
@@ -262,11 +332,25 @@ Cutting from the Beginning
If you want to go further and cut the beginning of the model, leaving only the ``ReLU`` layer, you have the following options:
1. Use the following command line, where ``--input`` and ``--output`` specify the same node in the graph:
1. Use the following parameters, where ``input`` and ``output`` specify the same node in the graph:
.. code-block:: sh
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu", input="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model=inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --input InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation looks as follows:
@@ -295,15 +379,29 @@ If you want to go further and cut the beginning of the model, leaving only the `
</net>
``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``--input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. Model Optimizer does not replace the ``ReLU`` node by the ``Input`` layer. It produces such Intermediate Representation to make the node the first executable node in the final Intermediate Representation. Therefore, Model Optimizer creates enough ``Inputs`` to feed all input ports of the node that is passed in ``--input``.
``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. ``convert_model()`` does not replace the ``ReLU`` node by the ``Input`` layer. It produces such ``ov.Model`` to make the node the first executable node in the final Intermediate Representation. Therefore, model conversion creates enough ``Inputs`` to feed all input ports of the node that is passed in ``input``.
Even though ``--input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning.
Even though ``input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning.
2. Cut the edge incoming to a layer by port number. To specify the incoming port, use the following notation ``--input=port:input_node``. To cut everything before the ``ReLU`` layer, cut the edge incoming to port 0 of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` node:
2. Cut the edge incoming to a layer by port number. To specify the incoming port, use the following notation ``input=port:input_node``. To cut everything before the ``ReLU`` layer, cut the edge incoming to port 0 of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` node:
.. code-block:: sh
.. tab-set::
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, input="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu", output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation looks as follows:
@@ -332,15 +430,28 @@ If you want to go further and cut the beginning of the model, leaving only the `
</net>
``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``--input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. Model Optimizer does not replace the ``ReLU`` node by the ``Input`` layer, it produces such Intermediate Representation to make the node be the first executable node in the final Intermediate Representation. Therefore, Model Optimizer creates enough ``Inputs`` to feed all input ports of the node that is passed in ``--input``.
``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. ``convert_model()`` does not replace the ``ReLU`` node by the ``Input`` layer, it produces such ``ov.Model`` to make the node be the first executable node in the final Intermediate Representation. Therefore, ``convert_model()`` creates enough ``Inputs`` to feed all input ports of the node that is passed in ``input``.
Even though ``--input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning.
Even though ``input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning.
3. Cut the edge outgoing from a layer by port number. To specify the outgoing port, use the following notation ``--input=input_node:port``. To cut everything before the ``ReLU`` layer, cut the edge from the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1`` node to ``ReLU``:
3. Cut the edge outgoing from a layer by port number. To specify the outgoing port, use the following notation ``input=input_node:port``. To cut everything before the ``ReLU`` layer, cut the edge from the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1`` node to ``ReLU``:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, input="InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0", output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
The resulting Intermediate Representation looks as follows:
@@ -370,79 +481,57 @@ If you want to go further and cut the beginning of the model, leaving only the `
</net>
Shape Override for New Inputs
#############################
The input shape can be overridden with ``--input_shape``. In this case, the shape is applied to the node referenced in ``--input``, not to the original ``Placeholder`` in the model. For example, the command below:
.. code-block:: sh
mo --input_model inception_v1.pb --input_shape=[1,5,10,20] --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --input InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
gives the following shapes in the ``Input`` and ``ReLU`` layers:
.. code-block:: xml
<layer id="0" name="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu/placeholder_port_0" precision="FP32" type="Input">
<output>
<port id="0">
<dim>1</dim>
<dim>20</dim>
<dim>5</dim>
<dim>10</dim>
</port>
</output>
</layer>
<layer id="3" name="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu" precision="FP32" type="ReLU">
<input>
<port id="0">
<dim>1</dim>
<dim>20</dim>
<dim>5</dim>
<dim>10</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>20</dim>
<dim>5</dim>
<dim>10</dim>
</port>
</output>
</layer>
An input shape ``[1,20,5,10]`` in the final Intermediate Representation differs from the shape ``[1,5,10,20]`` specified in the command line, because the original TensorFlow model uses the NHWC layout, while the Intermediate Representation uses the NCHW layout. Thus, the usual NHWC to NCHW layout conversion occurred.
When ``--input_shape`` is specified, shape inference inside Model Optimizer is not performed for the nodes in the beginning of the model that are not included in the translated region. It differs from the case when ``--input_shape`` is not specified as noted in the previous section, where the shape inference is still performed for such nodes to deduce shape for the layers that should fall into the final Intermediate Representation. Therefore, ``--input_shape`` should be used for a model with a complex graph with loops, which are not supported by Model Optimizer, to exclude such parts from the Model Optimizer shape inference process completely.
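For reference, a possible Python counterpart of the shape-override command above (a sketch that follows the ``convert_model`` parameter spelling used earlier on this page):
.. code-block:: python
from openvino.tools.mo import convert_model
# The new shape is applied to the node referenced in "input", not to the original Placeholder.
ov_model = convert_model("inception_v1.pb", input="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu", input_shape=[1,5,10,20], output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu")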
Inputs with Multiple Input Ports
################################
There are operations that contain more than one input port. In the example considered here, the convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` is such an operation. When ``--input_shape`` is not provided, a new ``Input`` layer is created for each dynamic input port for the node. If a port is evaluated to a constant blob, this constant remains in the model and a corresponding input layer is not created. TensorFlow convolution used in this model contains two ports:
There are operations that contain more than one input port. In the example considered here, the convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` is such an operation. When ``input_shape`` is not provided, a new ``Input`` layer is created for each dynamic input port for the node. If a port is evaluated to a constant blob, this constant remains in the model and a corresponding input layer is not created. TensorFlow convolution used in this model contains two ports:
* port 0: input tensor for convolution (dynamic)
* port 1: convolution weights (constant)
Following this behavior, Model Optimizer creates an ``Input`` layer for port 0 only, leaving port 1 as a constant. Thus, the result of:
Following this behavior, ``convert_model()`` creates an ``Input`` layer for port 0 only, leaving port 1 as a constant. Thus, the result of:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", batch=1, input="InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution")
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --output_dir <OUTPUT_MODEL_DIR>
is identical to the result of conversion of the model as a whole, because this convolution is the first executable operation in Inception V1.
Different behavior occurs when ``--input_shape`` is also used as an attempt to override the input shape:
Different behavior occurs when ``input_shape`` is also used as an attempt to override the input shape:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", input="InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution", input_shape=[1,224,224,3])
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
An error occurs (for more information, see the :ref:`Model Optimizer FAQ <question-30>`):
An error occurs (for more information, see the :ref:`Model Conversion FAQ <question-30>`):
.. code-block:: sh
@@ -450,13 +539,26 @@ An error occurs (for more information, see the :ref:`Model Optimizer FAQ <questi
Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer.
For more information, see FAQ #30
When ``--input_shape`` is specified and the node contains multiple input ports, you need to provide an input port index together with an input node name. The input port index is specified in front of the node name with ``:`` as a separator (``PORT:NODE``). In this case, the port index 0 of the node ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` should be specified as ``0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``.
When ``input_shape`` is specified and the node contains multiple input ports, you need to provide an input port index together with an input node name. The input port index is specified in front of the node name with ``:`` as a separator (``PORT:NODE``). In this case, the port index 0 of the node ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` should be specified as ``0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``.
The correct command line is:
.. code-block:: sh
.. tab-set::
mo --input_model inception_v1.pb --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
.. tab-item:: Python
:sync: mo-python-api
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("inception_v1.pb", input="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution", input_shape=[1,224,224,3])
.. tab-item:: CLI
:sync: cli-tool
.. code-block:: sh
mo --input_model inception_v1.pb --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
@endsphinxdirective

View File

@@ -12,8 +12,8 @@ Intermediate Representation should be specifically formed to be suitable for low
Such a model is called a Low Precision IR and can be generated in two ways:
* By :doc:`quantizing regular IR with the Post-Training Optimization tool <pot_introduction>`
* Using Model Optimizer for a model pre-trained for Low Precision inference: TensorFlow models (``.pb`` model file with ``FakeQuantize`` operations), quantized TensorFlow Lite models and ONNX quantized models.
* By :doc:`quantizing regular IR with the Neural Network Compression Framework (NNCF) <openvino_docs_model_optimization_guide>`
* Using model conversion of a model pre-trained for Low Precision inference: TensorFlow models (``.pb`` model file with ``FakeQuantize`` operations), quantized TensorFlow Lite models and ONNX quantized models.
TensorFlow and ONNX quantized models can be prepared by `Neural Network Compression Framework <https://github.com/openvinotoolkit/nncf/blob/develop/README.md>`__.
For an operation to be executed in INT8, it must have ``FakeQuantize`` operations as inputs.
@@ -43,6 +43,6 @@ See the visualization of `Convolution` with the compressed weights:
.. image:: _static/images/compressed_int8_Convolution_weights.png
Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default.
Model conversion API generates a compressed IR by default.
@endsphinxdirective

View File

@@ -2,13 +2,18 @@
@sphinxdirective
.. warning::
Note that OpenVINO support for Kaldi is currently being deprecated and will be removed entirely in the future.
To begin, `download a pre-trained model <https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz>`__ for the ASpIRE Chain Time Delay Neural Network (TDNN) from the Kaldi project official website.
Converting an ASpIRE Chain TDNN Model to IR
###########################################
Generate the Intermediate Representation of the model by running Model Optimizer with the following parameters:
Generate the Intermediate Representation of the model by running model conversion with the following parameters:
.. code-block:: sh
@@ -96,8 +101,7 @@ Prepare ivectors for the Speech Recognition sample:
<path_to_kaldi_repo>/src/featbin/copy-feats --binary=False ark:ivector_online.1.ark ark,t:ivector_online.1.ark.txt
5. For the Speech Recognition sample, the ``.ark`` file must contain an ivector
for each frame. Copy the ivector ``frame_count`` times by running the below script in the Python command prompt:
5. For the Speech Recognition sample, the ``.ark`` file must contain an ivector for each frame. Copy the ivector ``frame_count`` times by running the below script in the Python command prompt:
.. code-block:: python
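# Illustrative sketch only: the original script is not shown in this diff.
# It assumes a text .ark file with one ivector per utterance in the form
#   <utt_id>  [ v1 v2 ... vN ]
# and writes each ivector repeated frame_count times as a matrix.
frame_count = 100  # hypothetical number of frames in the utterance

with open("ivector_online.1.ark.txt") as src, open("ivector_online_repeated.1.ark.txt", "w") as dst:
    for line in src:
        utt_id, _, rest = line.partition("[")
        values = rest.replace("]", "").split()
        if not values:
            continue
        rows = "\n".join(" ".join(values) for _ in range(frame_count))
        dst.write("{}  [ {} ]\n".format(utt_id.strip(), rows))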

View File

@@ -2,7 +2,11 @@
@sphinxdirective
This article provides the instructions and examples on how to use Model Optimizer to convert `GluonCV SSD and YOLO-v3 models <https://gluon-cv.mxnet.io/model_zoo/detection.html>`__ to IR.
.. warning::
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.
This article provides the instructions and examples on how to convert `GluonCV SSD and YOLO-v3 models <https://gluon-cv.mxnet.io/model_zoo/detection.html>`__ to IR.
1. Choose the topology available from the `GluonCV Model Zoo <https://gluon-cv.mxnet.io/model_zoo/detection.html>`__ and export to the MXNet format using the GluonCV API. For example, for the ``ssd_512_mobilenet1.0`` topology:
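A possible export sketch for this step (assuming the ``gluoncv`` package and its ``export_block`` utility; check the GluonCV documentation for the exact call):
.. code-block:: python
from gluoncv import model_zoo
from gluoncv.utils import export_block
# Download the pretrained SSD model and export it to MXNet .params/.json files.
net = model_zoo.get_model("ssd_512_mobilenet1.0_voc", pretrained=True)
export_block("ssd_512_mobilenet1.0", net, preprocess=True, layout="HWC")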
@@ -15,27 +19,27 @@ This article provides the instructions and examples on how to use Model Optimize
As a result, you will get an MXNet model representation in ``ssd_512_mobilenet1.0.params`` and ``ssd_512_mobilenet1.0.json`` files generated in the current directory.
2. Run the Model Optimizer tool, specifying the ``--enable_ssd_gluoncv`` option. Make sure the ``--input_shape`` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running the Model Optimizer for the SSD and YOLO-v3 models trained with the NHWC layout and located in the ``<model_directory>``:
2. Run model conversion API, specifying the ``enable_ssd_gluoncv`` option. Make sure the ``input_shape`` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running model conversion for the SSD and YOLO-v3 models trained with the NHWC layout and located in the ``<model_directory>``:
* **For GluonCV SSD topologies:**
.. code-block:: sh
mo --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data --output_dir <OUTPUT_MODEL_DIR>
* **For YOLO-v3 topology:**
* To convert the model:
.. code-block:: sh
mo --input_model <model_directory>/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --output_dir <OUTPUT_MODEL_DIR>
* To convert the model with replacing the subgraph with RegionYolo layers:
.. code-block:: sh
mo --input_model <model_directory>/models/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --transformations_config "front/mxnet/yolo_v3_mobilenet1_voc.json" --output_dir <OUTPUT_MODEL_DIR>
@endsphinxdirective

View File

@@ -2,6 +2,13 @@
@sphinxdirective
.. warning::
Note that OpenVINO support for Apache MXNet is currently being deprecated and will be removed entirely in the future.
This article provides instructions on how to generate a model for style transfer, using the public MXNet neural style transfer sample.
**Step 1**: Download or clone the repository `Zhaw's Neural Style Transfer repository <https://github.com/zhaw/neural_style>`__ with an MXNet neural style transfer sample.
@@ -134,7 +141,7 @@ The ``models/13`` string in the code above is composed of the following substrin
Any style can be selected from `collection of pre-trained weights <https://pan.baidu.com/s/1skMHqYp>`__. On the Chinese-language page, click the down arrow next to a size in megabytes. Then wait for an overlay box to appear, and click the blue button in it to download. The ``generate()`` function generates ``nst_vgg19-symbol.json`` and ``vgg19-symbol.json`` files for the specified shape. In the code, it is ``[1024 x 768]`` for a 4:3 ratio. You can specify another shape, for example, ``[224,224]`` for a square ratio.
**Step 6**: Run the Model Optimizer to generate an Intermediate Representation (IR):
**Step 6**: Run model conversion to generate an Intermediate Representation (IR):
1. Create a new directory. For example:
@@ -159,7 +166,7 @@ Any style can be selected from `collection of pre-trained weights <https://pan.b
Make sure that all the ``.params`` and ``.json`` files are in the same directory as the ``.nd`` files. Otherwise, the conversion process fails.
3. Run the Model Optimizer for Apache MXNet. Use the ``--nd_prefix_name`` option to specify the decoder prefix and ``--input_shape`` to specify input shapes in ``[N,C,W,H]`` order. For example:
3. Run model conversion for Apache MXNet. Use the ``--nd_prefix_name`` option to specify the decoder prefix and ``input_shape`` to specify input shapes in ``[N,C,W,H]`` order. For example:
.. code-block:: sh

View File

@@ -6,19 +6,19 @@ The instructions below are applicable **only** to the Faster R-CNN model convert
1. Download the pretrained model file from `onnx/models <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn>`__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
2. Generate the Intermediate Representation of the model, by changing your current working directory to the Model Optimizer installation directory, and running Model Optimizer with the following parameters:
2. Generate the Intermediate Representation of the model, by changing your current working directory to the model conversion API installation directory, and running model conversion with the following parameters:
.. code-block:: sh
mo \
--input_model FasterRCNN-10.onnx \
--input_shape [1,3,800,800] \
--input 0:2 \
--mean_values [102.9801,115.9465,122.7717] \
--transformations_config front/onnx/faster_rcnn.json
Be aware that the height and width specified with the ``input_shape`` command line parameter could be different. For more information about supported input image dimensions and required pre- and post-processing steps, refer to the `Faster R-CNN article <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn>`__.
3. Interpret the outputs of the generated IR: class indices, probabilities and box coordinates. Below are the outputs from the ``DetectionOutput`` layer:

View File

@@ -15,7 +15,7 @@ To download the model and sample test data, go to `this model <https://github.co
Converting an ONNX GPT-2 Model to IR
####################################
Generate the Intermediate Representation of the model GPT-2 by running Model Optimizer with the following parameters:
Generate the Intermediate Representation of the model GPT-2 by running model conversion with the following parameters:
.. code-block:: sh

View File

@@ -6,26 +6,26 @@ The instructions below are applicable **only** to the Mask R-CNN model converted
1. Download the pretrained model file from `onnx/models <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn>`__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
2. Generate the Intermediate Representation of the model by changing your current working directory to the Model Optimizer installation directory and running Model Optimizer with the following parameters:
2. Generate the Intermediate Representation of the model by changing your current working directory to the model conversion API installation directory and running model conversion with the following parameters:
.. code-block:: sh
mo \
--input_model mask_rcnn_R_50_FPN_1x.onnx \
--input "0:2" \
--input_shape [1,3,800,800] \
--mean_values [102.9801,115.9465,122.7717] \
--transformations_config front/onnx/mask_rcnn.json
Be aware that the height and width specified with the ``input_shape`` command line parameter could be different. For more information about supported input image dimensions and required pre- and post-processing steps, refer to the `documentation <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn>`__.
3. Interpret the outputs of the generated IR file: masks, class indices, probabilities and box coordinates:
* masks
* class indices
* probabilities
* box coordinates
The first one is a layer with the name ``6849/sink_port_0``, and the rest are outputs from the ``DetectionOutput`` layer.

View File

@@ -9,15 +9,15 @@ Downloading and Converting Model to ONNX
* Clone the `repository <https://github.com/open-mmlab/mmdetection>`__:
.. code-block:: sh
git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
.. note::
To set up an environment, refer to the `instructions <https://github.com/open-mmlab/mmdetection/blob/master/docs/en/get_started.md#installation>`__.
* Download the pre-trained `model <https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco/cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth>`__. The model is also available `here <https://github.com/open-mmlab/mmdetection/blob/master/configs/cascade_rcnn/README.md>`__.

View File

@@ -14,20 +14,20 @@ Here are the instructions on how to obtain QuartzNet in ONNX format.
2. Run the following code:
.. code-block:: python
import nemo
import nemo.collections.asr as nemo_asr
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
# Export QuartzNet model to ONNX format
quartznet.decoder.export('decoder_qn.onnx')
quartznet.encoder.export('encoder_qn.onnx')
quartznet.export('qn.onnx')
This code produces 3 ONNX model files: ``encoder_qn.onnx``, ``decoder_qn.onnx``, ``qn.onnx``.
They are the ``encoder``, the ``decoder``, and a combined ``decoder(encoder(x))`` model, respectively.
Converting an ONNX QuartzNet model to IR
########################################

View File

@@ -184,9 +184,9 @@ Converting a YOLACT Model to the OpenVINO IR format
mo --input_model /path/to/yolact.onnx
**Step 4**. Embed input preprocessing into the IR:
**Step 5**. Embed input preprocessing into the IR:
To gain performance by offloading the mean/scale values and the RGB->BGR conversion to the OpenVINO application, use the following options of the Model Optimizer (MO):
To gain performance by offloading the mean/scale values and the RGB->BGR conversion to the OpenVINO application, use the following model conversion API parameters:
* If the backbone of the model is Resnet50-FPN or Resnet101-FPN, use the following MO command line:

View File

@@ -19,9 +19,11 @@
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.
**ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
**ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite** - formats supported directly, which means they can be used with
OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow,
see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
**MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.
**MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Conversion API and in some cases may involve intermediate steps.
Refer to the following articles for details on conversion for different formats and models:

View File

@@ -29,7 +29,7 @@ The original AOCR model includes the preprocessing data, which contains:
* Decoding input data to binary format where input data is an image represented as a string.
* Resizing binary image to working resolution.
The resized image is sent to the convolution neural network (CNN). Because Model Optimizer does not support image decoding, the preprocessing part of the model should be cut off, using the ``--input`` command-line parameter.
The resized image is sent to the convolution neural network (CNN). Because model conversion API does not support image decoding, the preprocessing part of the model should be cut off, using the ``input`` command-line parameter.
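As an illustration, such a cut could look like the sketch below. Both the file name and the node name are hypothetical; use the first node after the decoding sub-graph of your model:
.. code-block:: python
from openvino.tools.mo import convert_model
# "decoded_image" is a hypothetical node name marking the end of the preprocessing part.
ov_model = convert_model("aocr.pb", input="decoded_image")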
.. code-block:: sh

View File

@@ -38,7 +38,7 @@ Pretrained model meta-graph files are ``bert_model.ckpt.*``.
Converting a TensorFlow BERT Model to IR
#########################################
To generate the BERT Intermediate Representation (IR) of the model, run Model Optimizer with the following parameters:
To generate the BERT Intermediate Representation (IR) of the model, run model conversion with the following parameters:
.. code-block:: sh
@@ -59,96 +59,96 @@ Follow these steps to make a pretrained TensorFlow BERT model reshapable over ba
2. Clone the google-research/bert git repository:
.. code-block:: sh
git clone https://github.com/google-research/bert.git
3. Go to the root directory of the cloned repository:
.. code-block:: sh
cd bert
4. (Optional) Checkout to the commit that the conversion was tested on:
.. code-block:: sh
git checkout eedf5716c
5. Download script to load GLUE data:
* For UNIX-like systems, run the following command:
.. code-block:: sh
wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
* For Windows systems:
Download the `Python script <https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py>`__ to the current working directory.
6. Download GLUE data by running:
.. code-block:: sh
python3 download_glue_data.py --tasks MRPC
7. Open the file ``modeling.py`` in the text editor and delete lines 923-924. They should look like this:
.. code-block:: python
if not non_static_indexes:
  return shape
8. Open the file ``run_classifier.py`` and insert the following code after the line 645:
.. code-block:: python
import os, sys
import tensorflow as tf
from tensorflow.python.framework import graph_io
with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess:
    (assignment_map, initialized_variable_names) = \
        modeling.get_assignment_map_from_checkpoint(tf.compat.v1.trainable_variables(), init_checkpoint)
    tf.compat.v1.train.init_from_checkpoint(init_checkpoint, assignment_map)
    sess.run(tf.compat.v1.global_variables_initializer())
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["bert/pooler/dense/Tanh"])
    graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
    print('BERT frozen model path {}'.format(os.path.join(os.path.dirname(__file__), 'inference_graph.pb')))
    sys.exit(0)
Lines before the inserted code should look like this:
.. code-block:: python
(total_loss, per_example_loss, logits, probabilities) = create_model(
    bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
    num_labels, use_one_hot_embeddings)
9. Set environment variables ``BERT_BASE_DIR``, ``BERT_REPO_DIR`` and run the script ``run_classifier.py`` to create ``inference_graph.pb`` file in the root of the cloned BERT repository.
.. code-block:: sh
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
export BERT_REPO_DIR=/current/working/directory
python3 run_classifier.py \
    --task_name=MRPC \
    --do_eval=true \
    --data_dir=$BERT_REPO_DIR/glue_data/MRPC \
    --vocab_file=$BERT_BASE_DIR/vocab.txt \
    --bert_config_file=$BERT_BASE_DIR/bert_config.json \
    --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
    --output_dir=./
Run Model Optimizer with the following command line parameters to generate reshape-able BERT Intermediate Representation (IR):
Run model conversion with the following command line parameters to generate reshape-able BERT Intermediate Representation (IR):
.. code-block:: sh
mo \
    --input_model inference_graph.pb \
    --input "IteratorGetNext:0{i32}[1,128],IteratorGetNext:1{i32}[1,128],IteratorGetNext:4{i32}[1,128]"
For other applicable parameters, refer to the :doc:`Convert Model from TensorFlow <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.

View File

@@ -6,7 +6,7 @@ This tutorial explains how to convert a CRNN model to OpenVINO™ Intermediate R
There are several public versions of TensorFlow CRNN model implementation available on GitHub. This tutorial explains how to convert the model from
the `CRNN Tensorflow <https://github.com/MaybeShewill-CV/CRNN_Tensorflow>`__ repository to IR, and is validated with Python 3.7, TensorFlow 1.15.0, and protobuf 3.19.0.
If you have another implementation of CRNN model, it can be converted to OpenVINO IR in a similar way. You need to get inference graph and run Model Optimizer on it.
If you have another implementation of CRNN model, it can be converted to OpenVINO IR in a similar way. You need to get the inference graph and run model conversion on it.
**To convert the model to IR:**
@@ -14,58 +14,56 @@ If you have another implementation of CRNN model, it can be converted to OpenVIN
1. Clone the repository:
.. code-block:: sh
git clone https://github.com/MaybeShewill-CV/CRNN_Tensorflow.git
2. Go to the ``CRNN_Tensorflow`` directory of the cloned repository:
.. code-block:: sh
cd path/to/CRNN_Tensorflow
3. Check out the necessary commit:
.. code-block:: sh
git checkout 64f1f1867bffaacfeacc7a80eebf5834a5726122
**Step 2.** Train the model using the framework or the pretrained checkpoint provided in this repository.
**Step 3.** Create an inference graph:
1. Add the ``CRNN_Tensorflow`` folder to ``PYTHONPATH``.
* For Linux:
.. code-block:: sh
export PYTHONPATH="${PYTHONPATH}:/path/to/CRNN_Tensorflow/"
* For Windows, add ``/path/to/CRNN_Tensorflow/`` to the ``PYTHONPATH`` environment variable in settings.
2. Edit the ``tools/demo_shadownet.py`` script. After ``saver.restore(sess=sess, save_path=weights_path)`` line, add the following code:
.. code-block:: python
from tensorflow.python.framework import graph_io
frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major'])
graph_io.write_graph(frozen, '.', 'frozen_graph.pb', as_text=False)
3. Run the demo with the following command:
.. code-block:: sh
python tools/demo_shadownet.py --image_path data/test_images/test_01.jpg --weights_path model/shadownet/shadownet_2017-10-17-11-47-46.ckpt-199999
If you want to use your checkpoint, replace the path in the ``--weights_path`` parameter with a path to your checkpoint.
4. In the ``CRNN_Tensorflow`` directory, you will find the inference CRNN graph ``frozen_graph.pb``. You can use this graph with OpenVINO to convert the model to IR and then run inference.
**Step 4.** Convert the model to IR:
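A minimal Python sketch for this step, assuming the ``frozen_graph.pb`` produced above (the ``batch=1`` value is an assumption):
.. code-block:: python
from openvino.tools.mo import convert_model
ov_model = convert_model("frozen_graph.pb", batch=1)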

View File

@@ -19,10 +19,10 @@ To download the model, follow the instruction below:
* For UNIX-like systems, run the following command:
.. code-block:: sh
wget -O - https://github.com/mozilla/DeepSpeech/archive/v0.8.2.tar.gz | tar xvfz -
wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.8.2/deepspeech-0.8.2-checkpoint.tar.gz | tar xvfz -
* For Windows systems:
@@ -73,14 +73,14 @@ are assigned to cell state and hidden state, which are these two variables.
Converting the Main Part of DeepSpeech Model into OpenVINO IR
#############################################################
Model Optimizer assumes that the output model is for inference only. That is why you should cut ``previous_state_c`` and ``previous_state_h`` variables off and resolve keeping cell and hidden states on the application level.
Model conversion API assumes that the output model is for inference only. That is why you should cut ``previous_state_c`` and ``previous_state_h`` variables off and resolve keeping cell and hidden states on the application level.
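For illustration only, application-level state keeping could look like the sketch below. The IR path, tensor names, and state shape are hypothetical; take the real names from your converted model:
.. code-block:: python
import numpy as np
import openvino.runtime as ov

core = ov.Core()
compiled = core.compile_model("deepspeech.xml", "CPU")  # hypothetical IR path
request = compiled.create_infer_request()

state_c = np.zeros([1, 2048], dtype=np.float32)
state_h = np.zeros([1, 2048], dtype=np.float32)
feature_chunks = []  # fill with your preprocessed audio feature chunks
for chunk in feature_chunks:
    request.infer({"input_node": chunk, "previous_state_c": state_c, "previous_state_h": state_h})
    # Feed the produced cell/hidden states back on the next iteration (hypothetical output names).
    state_c = request.get_tensor("new_state_c").data.copy()
    state_h = request.get_tensor("new_state_h").data.copy()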
There are certain limitations for the model conversion:
* Time length (``time_len``) and sequence length (``seq_len``) are equal.
* Original model cannot be reshaped, so you should keep original shapes.
To generate the IR, run Model Optimizer with the following parameters:
To generate the IR, run model conversion with the following parameters:
.. code-block:: sh
@@ -94,6 +94,6 @@ Where:
* ``input_lengths->[16]`` Replaces the input node with name "input_lengths" with a constant tensor of shape [1] with a single integer value of 16. This means that the model now can consume input sequences of length 16 only.
* ``input_node[1 16 19 26],previous_state_h[1 2048],previous_state_c[1 2048]`` replaces the variables with a placeholder.
* ``--output ".../GatherNd_1,.../GatherNd,logits"`` output node names.
* ``output ".../GatherNd_1,.../GatherNd,logits"`` output node names.
@endsphinxdirective

View File

@@ -14,7 +14,7 @@ There are two inputs in this network: boolean ``phase_train`` which manages stat
Converting a TensorFlow FaceNet Model to the IR
###############################################
To generate a FaceNet OpenVINO model, feed a TensorFlow FaceNet model to Model Optimizer with the following parameters:
To generate a FaceNet OpenVINO model, feed a TensorFlow FaceNet model to model conversion API with the following parameters:
.. code-block:: sh
@@ -23,11 +23,11 @@ To generate a FaceNet OpenVINO model, feed a TensorFlow FaceNet model to Model O
--freeze_placeholder_with_value "phase_train->False"
The batch joining pattern transforms to a placeholder with the model default shape if ``--input_shape`` or ``--batch``/``-b`` are not provided. Otherwise, the placeholder shape has custom parameters.
The batch joining pattern transforms to a placeholder with the model default shape if ``input_shape`` or ``batch``/``-b`` are not provided. Otherwise, the placeholder shape has custom parameters.
* ``--freeze_placeholder_with_value "phase_train->False"`` to switch graph to inference mode
* ``--batch``/``-b`` is applicable to override original network batch
* ``--input_shape`` is applicable with or without ``--input``
* ``freeze_placeholder_with_value "phase_train->False"`` to switch graph to inference mode
* ``batch``/``-b`` is applicable to override original network batch
* ``input_shape`` is applicable with or without ``input``
* other options are applicable
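A possible Python counterpart of the command above (a sketch; the checkpoint file name is a placeholder, and ``freeze_placeholder_with_value`` is assumed to be accepted by ``convert_model`` the same way as on the command line):
.. code-block:: python
from openvino.tools.mo import convert_model
# Switch the graph to inference mode by freezing the boolean placeholder.
ov_model = convert_model("facenet.pb", freeze_placeholder_with_value="phase_train->False")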
@endsphinxdirective

View File

@@ -6,7 +6,7 @@ This tutorial explains how to convert Google Neural Machine Translation (GNMT) m
There are several public versions of TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository <https://github.com/tensorflow/nmt>`__ to the IR.
Creating a Patch File
#####################
Before converting the model, you need to create a patch file for the repository. The patch modifies the framework code by adding a special command-line argument to the framework options that enables inference graph dumping:
@@ -14,130 +14,130 @@ Before converting the model, you need to create a patch file for the repository.
1. Go to a writable directory and create a ``GNMT_inference.patch`` file.
2. Copy the following diff code to the file:
.. code-block:: cpp

diff --git a/nmt/inference.py b/nmt/inference.py
index 2cbef07..e185490 100644
--- a/nmt/inference.py
+++ b/nmt/inference.py
@@ -17,9 +17,11 @@
from __future__ import print_function
import codecs
+import os
import time
import tensorflow as tf
+from tensorflow.python.framework import graph_io
from . import attention_model
from . import gnmt_model
@@ -105,6 +107,29 @@ def start_sess_and_load_model(infer_model, ckpt_path):
return sess, loaded_infer_model
+def inference_dump_graph(ckpt_path, path_to_dump, hparams, scope=None):
+ model_creator = get_model_creator(hparams)
+ infer_model = model_helper.create_infer_model(model_creator, hparams, scope)
+ sess = tf.Session(
+ graph=infer_model.graph, config=utils.get_config_proto())
+ with infer_model.graph.as_default():
+ loaded_infer_model = model_helper.load_model(
+ infer_model.model, ckpt_path, sess, "infer")
+ utils.print_out("Dumping inference graph to {}".format(path_to_dump))
+ loaded_infer_model.saver.save(
+ sess,
+ os.path.join(path_to_dump + 'inference_GNMT_graph')
+ )
+ utils.print_out("Dumping done!")
+
+ output_node_name = 'index_to_string_Lookup'
+ utils.print_out("Freezing GNMT graph with output node {}...".format(output_node_name))
+ frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def,
+ [output_node_name])
+ graph_io.write_graph(frozen, '.', os.path.join(path_to_dump, 'frozen_GNMT_inference_graph.pb'), as_text=False)
+ utils.print_out("Freezing done. Freezed model frozen_GNMT_inference_graph.pb saved to {}".format(path_to_dump))
+
+
def inference(ckpt_path,
inference_input_file,
inference_output_file,
diff --git a/nmt/nmt.py b/nmt/nmt.py
index f5823d8..a733748 100644
--- a/nmt/nmt.py
+++ b/nmt/nmt.py
@@ -310,6 +310,13 @@ def add_arguments(parser):
parser.add_argument("--num_intra_threads", type=int, default=0,
help="number of intra_op_parallelism_threads")
+ # Special argument for inference model dumping without inference
+ parser.add_argument("--dump_inference_model", type="bool", nargs="?",
+ const=True, default=False,
+ help="Argument for dump inference graph for specified trained ckpt")
+
+ parser.add_argument("--path_to_dump", type=str, default="",
+ help="Path to dump inference graph.")
def create_hparams(flags):
"""Create training hparams."""
@@ -396,6 +403,9 @@ def create_hparams(flags):
language_model=flags.language_model,
num_intra_threads=flags.num_intra_threads,
num_inter_threads=flags.num_inter_threads,
+
+ dump_inference_model=flags.dump_inference_model,
+ path_to_dump=flags.path_to_dump,
)
@@ -613,7 +623,7 @@ def create_or_load_hparams(
return hparams
-def run_main(flags, default_hparams, train_fn, inference_fn, target_session=""):
+def run_main(flags, default_hparams, train_fn, inference_fn, inference_dump, target_session=""):
"""Run main."""
# Job
jobid = flags.jobid
@@ -653,8 +663,26 @@ def run_main(flags, default_hparams, train_fn, inference_fn, target_session=""):
out_dir, default_hparams, flags.hparams_path,
save_hparams=(jobid == 0))
- ## Train / Decode
- if flags.inference_input_file:
+ # Dumping inference model
+ if flags.dump_inference_model:
+ # Inference indices
+ hparams.inference_indices = None
+ if flags.inference_list:
+ (hparams.inference_indices) = (
+ [int(token) for token in flags.inference_list.split(",")])
+
+ # Ckpt
+ ckpt = flags.ckpt
+ if not ckpt:
+ ckpt = tf.train.latest_checkpoint(out_dir)
+
+ # Path to dump graph
+ assert flags.path_to_dump != "", "Please, specify path_to_dump model."
+ path_to_dump = flags.path_to_dump
+ if not tf.gfile.Exists(path_to_dump): tf.gfile.MakeDirs(path_to_dump)
+
+ inference_dump(ckpt, path_to_dump, hparams)
+ elif flags.inference_input_file:
# Inference output directory
trans_file = flags.inference_output_file
assert trans_file
@@ -693,7 +721,8 @@ def main(unused_argv):
default_hparams = create_hparams(FLAGS)
train_fn = train.train
inference_fn = inference.inference
- run_main(FLAGS, default_hparams, train_fn, inference_fn)
+ inference_dump = inference.inference_dump_graph
+ run_main(FLAGS, default_hparams, train_fn, inference_fn, inference_dump)
if __name__ == "__main__":
3. Save and close the file.
@@ -151,15 +151,15 @@ Converting a GNMT Model to the IR
1. Clone the NMT repository:
.. code-block:: sh

git clone https://github.com/tensorflow/nmt.git
2. Check out the necessary commit:
.. code-block:: sh

git checkout b278487980832417ad8ac701c672b5c3dc7fa553
**Step 2**. Get a trained model. You have two options:
@@ -176,25 +176,25 @@ For the GNMT model, the training graph and the inference graph have different de
1. Apply the ``GNMT_inference.patch`` patch to the repository. Follow the `Creating a Patch File <#Creating-a-Patch-File>`__ instructions if you do not have it:
.. code-block:: sh

git apply /path/to/patch/GNMT_inference.patch
2. Run the NMT framework to dump the inference model:
.. code-block:: sh

python -m nmt.nmt
--src=de
--tgt=en
--ckpt=/path/to/ckpt/translate.ckpt
--hparams_path=/path/to/repository/nmt/nmt/standard_hparams/wmt16_gnmt_4_layer.json
--vocab_prefix=/path/to/vocab/vocab.bpe.32000
--out_dir=""
--dump_inference_model
--infer_mode beam_search
--path_to_dump /path/to/dump/model/
If you use different checkpoints, use the corresponding values for the ``src``, ``tgt``, ``ckpt``, ``hparams_path``, and ``vocab_prefix`` parameters.
@@ -253,52 +253,52 @@ Outputs of the model:
that contains the ``beam_size`` best translations for every sentence from the input (also decoded as indices of words in the vocabulary).
.. note::
The shape of this tensor in TensorFlow can be different: instead of ``max_sequence_length * 2``, it can be any value less than that, because OpenVINO does not support dynamic shapes of outputs, while TensorFlow can stop decoding iterations when the ``eos`` symbol is generated.
Running GNMT IR
---------------
1. With benchmark app:
.. code-block:: sh

benchmark_app -m <path to the generated GNMT IR> -d CPU
2. With OpenVINO Runtime Python API:
.. note::

Before running the example, insert a path to your GNMT ``.xml`` and ``.bin`` files into ``MODEL_PATH`` and ``WEIGHTS_PATH``, and fill ``input_data_tensor`` and ``seq_lengths`` tensors according to your input data.
.. code-block:: py

from openvino.inference_engine import IENetwork, IECore

MODEL_PATH = '/path/to/IR/frozen_GNMT_inference_graph.xml'
WEIGHTS_PATH = '/path/to/IR/frozen_GNMT_inference_graph.bin'

# Creating network
net = IENetwork(
    model=MODEL_PATH,
    weights=WEIGHTS_PATH)

# Creating input data
input_data = {'IteratorGetNext/placeholder_out_port_0': input_data_tensor,
              'IteratorGetNext/placeholder_out_port_1': seq_lengths}

# Creating plugin and loading extensions
ie = IECore()
ie.add_extension(extension_path="libcpu_extension.so", device_name="CPU")

# Loading network
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference
result_ie = exec_net.infer(input_data)
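The snippet above uses the legacy Inference Engine Python API (``IENetwork``/``IECore``) together with a custom CPU extension library. As a hedged sketch, a roughly equivalent flow with the newer ``openvino.runtime`` API, which does not need ``libcpu_extension.so``, could look like this (same placeholder paths and input tensors as above):

.. code-block:: py

from openvino.runtime import Core

core = Core()
# Same placeholder paths as in the legacy snippet above.
model = core.read_model('/path/to/IR/frozen_GNMT_inference_graph.xml')
compiled_model = core.compile_model(model, 'CPU')

# Fill input_data_tensor and seq_lengths according to your input data.
result = compiled_model({
    'IteratorGetNext/placeholder_out_port_0': input_data_tensor,
    'IteratorGetNext/placeholder_out_port_1': seq_lengths,
})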
For more information about the Python API, refer to the :doc:`OpenVINO Runtime Python API <api/ie_python_api/api>` guide.
@endsphinxdirective

View File

@@ -10,44 +10,43 @@ This tutorial explains how to convert Neural Collaborative Filtering (NCF) model
2. Freeze the inference graph you get in the previous step in ``model_dir``, following the instructions from the **Freezing Custom Models in Python** section of the :doc:`Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.
Run the following commands:
.. code-block:: py

import tensorflow as tf
from tensorflow.python.framework import graph_io

sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph("/path/to/model/model.meta")
saver.restore(sess, tf.train.latest_checkpoint('/path/to/model/'))

frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, \
    ["rating/BiasAdd"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
where ``rating/BiasAdd`` is an output node.
3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)
.. image:: ./_static/images/NCF_start.svg
However, as the model conversion API does not support such data feeding, you should skip it. Cut the edges incoming to ``ResourceGather`` port 1:
.. code-block:: sh

mo --input_model inference_graph.pb \
--input 1:embedding/embedding_lookup,1:embedding_1/embedding_lookup, \
1:embedding_2/embedding_lookup,1:embedding_3/embedding_lookup \
--input_shape [256],[256],[256],[256] \
--output_dir <OUTPUT_MODEL_DIR>
In the ``input_shape`` parameter, 256 specifies the ``batch_size`` for your model.
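For reference, here is a sketch of the same edge-cutting conversion through the mo Python API, assuming ``convert_model`` accepts the same ``input`` and ``input_shape`` strings as the ``mo`` command above:

.. code-block:: py

from openvino.tools.mo import convert_model

# The "1:node_name" syntax is assumed to cut the edge incoming to port 1
# of each embedding lookup, making it a new model input.
ov_model = convert_model(
    "inference_graph.pb",
    input="1:embedding/embedding_lookup,1:embedding_1/embedding_lookup,"
          "1:embedding_2/embedding_lookup,1:embedding_3/embedding_lookup",
    input_shape="[256],[256],[256],[256]",
)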
Alternatively, you can do steps 2 and 3 in one command line:
.. code-block:: sh
mo --input_meta_graph /path/to/model/model.meta \
--input 1:embedding/embedding_lookup,1:embedding_1/embedding_lookup, \

View File

@@ -2,9 +2,9 @@
@sphinxdirective
* Starting with the 2022.1 release, model conversion API can convert the TensorFlow Object Detection API Faster and Mask RCNN topologies differently. By default, model conversion adds the "Proposal" operation to the generated IR. This operation needs an additional model input named "image_info", which should be fed with several values describing the preprocessing applied to the input image (refer to the :doc:`Proposal <openvino_docs_ops_detection_Proposal_4>` operation specification for more information). However, this input is redundant for models trained and inferred with equal-size images. Model conversion API can generate IR for such models and insert the :doc:`DetectionOutput <openvino_docs_ops_detection_DetectionOutput_1>` operation instead of ``Proposal``. The ``DetectionOutput`` operation does not require the additional "image_info" model input. Moreover, for some models the produced inference results are closer to the original TensorFlow model. In order to trigger the new behavior, the "operation_to_add" attribute in the corresponding JSON transformation configuration file should be set to "DetectionOutput" instead of the default "Proposal".
* Starting with the 2021.1 release, model conversion API converts the TensorFlow Object Detection API SSD, Faster and Mask RCNN topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the OpenVINO Runtime using the dedicated reshape API. Refer to the :doc:`Using Shape Inference <openvino_docs_OV_UG_ShapeInference>` guide for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
* To generate IRs for TF 1 SSD topologies, model conversion API creates a number of ``PriorBoxClustered`` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using the dedicated API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations that prevent changing the topology input shape.
Converting a Model
##################
@@ -13,12 +13,12 @@ You can download TensorFlow Object Detection API models from the `TensorFlow 1 D
.. note::
Before converting, make sure you have configured model conversion API. For configuration steps, refer to the :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
To convert a TensorFlow Object Detection API model, run the ``mo`` command with the following required parameters:
* ``input_model <path_to_frozen.pb>`` - File with a pretrained model (binary or text .pb file after freezing) OR ``saved_model_dir <path_to_saved_model>`` for the TensorFlow 2 models
* ``transformations_config <path_to_subgraph_replacement_configuration_file.json>`` - A subgraph replacement configuration file with transformations description. For the models downloaded from the TensorFlow Object Detection API zoo, you can find the configuration files in the ``<PYTHON_SITE_PACKAGES>/openvino/tools/mo/front/tf`` directory. Use:
* ``ssd_v2_support.json`` - for frozen SSD topologies from the models zoo version up to 1.13.X inclusively
* ``ssd_support_api_v.1.14.json`` - for SSD topologies trained using the TensorFlow Object Detection API version 1.14 up to 1.14.X inclusively
@@ -48,12 +48,12 @@ To convert a TensorFlow Object Detection API model, run the ``mo`` command with
* ``rfcn_support_api_v1.13.json`` - for RFCN topology from the models zoo frozen with TensorFlow version 1.13.X
* ``rfcn_support_api_v1.14.json`` - for RFCN topology from the models zoo frozen with TensorFlow version 1.14.0 or higher
* ``tensorflow_object_detection_api_pipeline_config <path_to_pipeline.config>`` - A special configuration file that describes the topology hyper-parameters and structure of the TensorFlow Object Detection API model. For the models downloaded from the TensorFlow Object Detection API zoo, the configuration file is named ``pipeline.config``. If you plan to train a model yourself, you can find templates for these files in the `models repository <https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs>`__.
* ``input_shape`` (optional) - A custom input image shape. For more information on how the ``input_shape`` parameter is handled for the TensorFlow Object Detection API models, refer to the `Custom Input Shape <#Custom-Input-Shape>`__ guide.
.. note::
The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the ``RGB<->BGR`` conversion by specifying the command-line parameter ``reverse_input_channels``. Otherwise, inference results may be incorrect. If you convert a TensorFlow Object Detection API model to use with the OpenVINO sample applications, you must specify the ``reverse_input_channels`` parameter. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
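Putting the required parameters together, a full conversion call could look like the hedged sketch below, via the mo Python API. All paths are placeholders, ``ssd_v2_support.json`` is just one possible transformations config, and the keyword arguments are assumed to mirror the CLI parameters described above.

.. code-block:: py

from openvino.tools.mo import convert_model

# Placeholder paths; pick the transformations config matching your model.
ov_model = convert_model(
    "/path/to/frozen_inference_graph.pb",
    transformations_config="/path/to/ssd_v2_support.json",
    tensorflow_object_detection_api_pipeline_config="/path/to/pipeline.config",
    reverse_input_channels=True,
)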
In addition to the mandatory parameters listed above, you can use optional conversion parameters if needed. A full list of parameters is available in the :doc:`Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.
@@ -84,82 +84,82 @@ There are several important notes about feeding input images to the samples:
2. The TensorFlow implementation of image resizing may differ from the one implemented in the sample. Even reading the input image from a compressed format (like ``.jpg``) could give different results in the sample and in TensorFlow. If it is necessary to compare accuracy between TensorFlow and OpenVINO, it is recommended to pass a pre-resized input image in a non-compressed format (like ``.bmp``).
3. If you want to infer the model with the OpenVINO samples, convert the model specifying the ``reverse_input_channels`` command line parameter. The samples load images in BGR channel order, while TensorFlow models were trained with images in RGB order. When the ``reverse_input_channels`` command line parameter is specified, model conversion API modifies the weights of the first convolution (or another channel-dependent operation), so the output is as if the image were passed in RGB channel order.
4. Read carefully the messages printed by model conversion API. They contain important instructions on how to prepare input data before running the inference and how to interpret the output.
Custom Input Shape
##################
Model conversion handles the command line parameter ``input_shape`` for TensorFlow Object Detection API models in a special way, depending on the image resizer type defined in the ``pipeline.config`` file. TensorFlow Object Detection API generates a different ``Preprocessor`` sub-graph based on the image resizer type. Model conversion API supports two types of image resizer:
* ``fixed_shape_resizer`` --- *Stretches* input image to the specific height and width. The ``pipeline.config`` snippet below shows a ``fixed_shape_resizer`` sample definition:
.. code-block:: sh

image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}
* ``keep_aspect_ratio_resizer`` --- Resizes the input image *keeping aspect ratio* to satisfy the minimum and maximum size constraints. The ``pipeline.config`` snippet below shows a ``keep_aspect_ratio_resizer`` sample definition:
.. code-block:: sh

image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 600
    max_dimension: 1024
  }
}
If the additional ``pad_to_max_dimension`` parameter is set to "true", the resized image is padded with zeros to a square image of size "max_dimension".
Fixed Shape Resizer Replacement
+++++++++++++++++++++++++++++++
* If the ``input_shape`` command line parameter is not specified, model conversion generates an input operation with the height and width as defined in the ``pipeline.config``.
* If the ``input_shape [1, H, W, 3]`` command line parameter is specified, model conversion sets the input operation height to ``H`` and width to ``W`` and converts the model. However, the conversion may fail for the following reasons:

* The model is not reshape-able, meaning that it is not possible to change the size of the model input image. For example, SSD FPN models have ``Reshape`` operations with hard-coded output shapes, but the input size to these ``Reshape`` instances depends on the input image size. In this case, model conversion API shows an error during the shape inference phase. Run model conversion with ``log_level DEBUG`` to see the inferred output shapes of operations and find the mismatch.
* The custom input shape is too small. For example, if you specify ``input_shape [1,100,100,3]`` to convert an SSD Inception V2 model, one of the convolution or pooling nodes decreases the input tensor spatial dimensions to non-positive values. In this case, model conversion API shows an error message like this: '[ ERROR ] Shape [ 1 -1 -1 256] is not fully defined for output X of "node_name".'
Keeping Aspect Ratio Resizer Replacement
++++++++++++++++++++++++++++++++++++++++
* If the ``input_shape`` command line parameter is not specified, model conversion API generates an input operation with both height and width equal to the value of the ``min_dimension`` parameter in the ``keep_aspect_ratio_resizer``.
* If the ``input_shape [1, H, W, 3]`` command line parameter is specified, model conversion API scales the specified input image height ``H`` and width ``W`` to satisfy the ``min_dimension`` and ``max_dimension`` constraints defined in the ``keep_aspect_ratio_resizer``. The following function calculates the input operation height and width:
.. code-block:: py

def calculate_shape_keeping_aspect_ratio(H: int, W: int, min_dimension: int, max_dimension: int):
    ratio_min = min_dimension / min(H, W)
    ratio_max = max_dimension / max(H, W)
    ratio = min(ratio_min, ratio_max)
    return int(round(H * ratio)), int(round(W * ratio))
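For example, with ``min_dimension: 600`` and ``max_dimension: 1024`` (the values from the ``pipeline.config`` snippet above), an original 800x1300 image maps to a 600x975 input operation:

.. code-block:: py

# ratio = min(600 / 800, 1024 / 1300) = min(0.75, ~0.788) = 0.75
print(calculate_shape_keeping_aspect_ratio(800, 1300, 600, 1024))
# prints (600, 975) -> specify input_shape [1,600,975,3]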
The ``input_shape`` command line parameter should be specified only if ``pad_to_max_dimension`` does not exist or is set to "false" in the ``keep_aspect_ratio_resizer``.
Models with ``keep_aspect_ratio_resizer`` were trained to recognize objects in the real aspect ratio, in contrast with most classification topologies, which are trained to recognize objects stretched vertically and horizontally as well. By default, topologies with ``keep_aspect_ratio_resizer`` are converted to consume a square input image. If a non-square image is provided as input, it is stretched without keeping the aspect ratio, which decreases object detection quality.
.. note::
It is highly recommended to specify the ``input_shape`` command line parameter for models with ``keep_aspect_ratio_resizer``, if the input image dimensions are known in advance.
Model Conversion Process in Detail
##################################
This section is intended for users who want to understand in detail how model conversion API converts Object Detection API models. The information in this section is also useful for users having complex models that are not converted with model conversion API out of the box. It is highly recommended to read the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` documentation first to understand the sub-graph replacement concepts used here.
It is also important to open the model in `TensorBoard <https://www.tensorflow.org/guide/summaries_and_tensorboard>`__ to see the topology structure. Model conversion API can create an event file that can then be fed to the TensorBoard tool. Run model conversion, providing two command line parameters:
* ``input_model <path_to_frozen.pb>`` --- Path to the frozen model.
* ``tensorboard_logdir`` --- Path to the directory where TensorBoard looks for the event files.
Implementation of the transformations for Object Detection API models is located in the `ObjectDetectionAPI.py file <https://github.com/openvinotoolkit/openvino/blob/releases/2022/1/tools/mo/openvino/tools/mo/front/tf/ObjectDetectionAPI.py>`__. Refer to the code in this file to understand the details of the conversion process.

View File

@@ -5,16 +5,16 @@
This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).
`Public RetinaNet model <https://github.com/fizyr/keras-retinanet>`__ does not contain pretrained TensorFlow weights.
To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2023.0/omz_models_model_retinanet_tf.html>`__.
After converting the model to TensorFlow format, run the following command:
.. code-block:: sh
mo --input "input_1[1,1333,1333,3]" --input_model retinanet_resnet50_coco_best_v2.1.0.pb --transformations_config front/tf/retinanet.json
Where the ``transformations_config`` command-line parameter specifies the configuration json file containing model conversion hints for model conversion API.
The json file contains some parameters that need to be changed if you train the model yourself. It also contains information on how to match endpoints to replace the subgraph nodes. After the model is converted to the OpenVINO IR format, the output nodes will be replaced with the ``DetectionOutput`` layer.

View File

@@ -7,7 +7,7 @@
1. Download the TensorFlow-Slim models `git repository <https://github.com/tensorflow/models>`__.
2. Download the pre-trained model `checkpoint <https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models>`__.
3. Export the inference graph.
4. Convert the model using model conversion API.
The `Example of an Inception V1 Model Conversion <#example_of_an_inception_v1_model_conversion>`__ below illustrates the process of converting an Inception V1 Model.
@@ -46,7 +46,7 @@ This example demonstrates how to convert the model on Linux OSes, but it could b
--output_file inception_v1_inference_graph.pb
Model conversion API comes with the summarize graph utility, which identifies graph input and output nodes. Run the utility to determine input/output nodes of the Inception V1 model:
.. code-block:: sh
@@ -63,21 +63,21 @@ The output looks as follows:
The tool finds one input node with name ``input``, type ``float32``, fixed image size ``(224,224,3)`` and undefined batch size ``-1``. The output node name is ``InceptionV1/Logits/Predictions/Reshape_1``.
**Step 4**. Convert the model with the model conversion API:
.. code-block:: sh
mo --input_model ./inception_v1_inference_graph.pb --input_checkpoint ./inception_v1.ckpt -b 1 --mean_values [127.5,127.5,127.5] --scale 127.5
The ``-b`` command line parameter is required because model conversion API cannot convert a model with undefined input size.
For information on why the ``mean_values`` and ``scale`` command-line parameters are used, refer to the `Mean and Scale Values for TensorFlow-Slim Models <#Mean-and-Scale-Values-for-TensorFlow-Slim-Models>`__ section.
Mean and Scale Values for TensorFlow-Slim Models
#################################################
The TensorFlow-Slim models were trained with normalized input data. There are several different normalization algorithms used in the Slim library. The OpenVINO classification sample does not perform image pre-processing except resizing to the input layer size. It is necessary to pass the mean and scale values to model conversion API so they are embedded into the generated IR, in order to get correct classification results.
The file `preprocessing_factory.py <https://github.com/tensorflow/models/blob/master/research/slim/preprocessing/preprocessing_factory.py>`__ contains a dictionary variable ``preprocessing_fn_map`` defining mapping between the model type and pre-processing function to be used. The function code should be analyzed to figure out the mean/scale values.
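As a sketch of embedding this normalization at conversion time with the mo Python API (assuming ``convert_model`` mirrors the CLI parameters, with the Inception V1 values from the command above):

.. code-block:: py

from openvino.tools.mo import convert_model

# Inception V1 preprocessing: (pixel - 127.5) / 127.5
ov_model = convert_model(
    "./inception_v1_inference_graph.pb",
    input_checkpoint="./inception_v1.ckpt",
    batch=1,
    mean_values=[127.5, 127.5, 127.5],
    scale=127.5,
)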

View File

@@ -187,7 +187,7 @@ The script should save into ``~/XLNet-Large/xlnet``.
Converting a frozen TensorFlow XLNet Model to IR
#################################################
To generate the XLNet Intermediate Representation (IR) of the model, run model conversion with the following parameters:
.. code-block:: sh

Some files were not shown because too many files have changed in this diff.