Compare commits

...

147 Commits

Author SHA1 Message Date
Sebastian Golebiewski
fef09f046f [DOCS] Providing missing files for notebook 238 2023-11-27 16:33:46 +01:00
Karol Blaszczak
e0e6e62eda [DOCS] 23.1 selector tool remove (#21314) 2023-11-27 15:01:51 +01:00
Sebastian Golebiewski
9d69c80d8b Fix math formula in Elu_1 (#21205) 2023-11-21 12:09:36 +01:00
Karol Blaszczak
2d6a6e2780 [DOCS] fix npu mention port to 23.1
port https://github.com/openvinotoolkit/openvino/pull/21147
2023-11-17 12:56:43 +00:00
Maciej Smyk
1b58c54a89 [DOCS] Small fixes in articles for 23.1 (#20950)
* Fixes

* Update deployment_intro.md

* Update docs/OV_Runtime_UG/deployment/deployment_intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

---------

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
2023-11-08 13:41:13 +01:00
Sebastian Golebiewski
4d16873c2e Update get_started.md (#20932) 2023-11-08 07:00:31 +01:00
Sebastian Golebiewski
42c63315a2 Updating notebooks (#20866) 2023-11-06 10:09:14 +01:00
Maciej Smyk
807b26236b [DOCS] Install Guide Update for 23.1 (#20685)
* missing info

* System Requirements

* Update installing-openvino-from-archive-macos.md

* system requirements update
2023-11-06 08:09:33 +01:00
Sebastian Golebiewski
70f190fe4a Porting 20784 (#20796) 2023-10-31 13:46:16 +01:00
Sebastian Golebiewski
cd557d1ff3 Update installing-openvino-from-archive-windows.md (#20757) 2023-10-30 10:46:34 +01:00
Sebastian Golebiewski
49fae17205 fix headers (#20732)
Porting: https://github.com/openvinotoolkit/openvino/pull/20728
2023-10-27 12:58:26 +02:00
Sebastian Golebiewski
9aa4e5f60c [DOCS] Fix command for Building with Ninja for 23.1 (#20606)
* Fix command for Building with Ninja

Removing repeated parameter.

* Update docs/dev/build_windows.md

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-10-19 15:30:49 +04:00
Maciej Smyk
6bc58a54a4 Fix doc snippet (#20544) 2023-10-18 13:12:53 +02:00
Maciej Smyk
0d95ebc552 [DOCS] Supported formats update for Benchmark C++ Tool for 23.1 (#20489)
* Update cpp_benchmark_tool.md

* Update cpp_benchmark_tool.md

* Update cpp_benchmark_tool.md
2023-10-17 08:07:01 +02:00
Karol Blaszczak
eaef374483 Docs system requirements reorg port 23.1 (#20458)
* [DOCS] - system requirements reorg

* sys recs

* Update docs/articles_en/about_openvino.md
2023-10-13 23:46:48 +02:00
Karol Blaszczak
0299c0aa92 [DOCS] Added Gen AI landing page (#20253) (#20457)
authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
2023-10-13 18:21:10 +02:00
Maciej Smyk
f1736a6d7f Update installing-openvino-overview.md (#20441) 2023-10-13 16:37:21 +02:00
Alexander Suvorov
2f085f5a23 [DOCS] update selector tool (#20342) 2023-10-11 14:24:07 +02:00
Maciej Smyk
3e552ad2b5 Update openvino_intro.md (#20384) 2023-10-11 11:03:24 +02:00
Sebastian Golebiewski
34759054f0 Direct Github link to a specific notebook (#20356) 2023-10-10 15:44:46 +02:00
Tatiana Savina
ed7d153bc3 fix typo (#20280) 2023-10-05 17:22:07 +00:00
Karol Blaszczak
10a44ed5a1 [DOCS] prerelease notes update port (#20277) 2023-10-05 18:08:35 +02:00
Maciej Smyk
1141ea54c9 [DOCS] Inference with OpenVINO Runtime update for 23.1 (#20267)
* Update openvino_intro.md

* Update docs/articles_en/openvino_workflow/openvino_intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/articles_en/openvino_workflow/openvino_intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-10-05 14:55:40 +02:00
Maciej Smyk
bb61694403 Conan guide with fixes (#20259) 2023-10-05 13:01:42 +02:00
Aleksandr Voron
4c868cc909 [CPU][ARM][DOC] ARM documentation - 23.1 release (#19464)
* arm docs

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* remove f16

* add space

* update plugin link

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-10-05 10:08:43 +02:00
Maciej Smyk
acbbdb3f2f Update release_notes.md (#20114) 2023-10-03 08:17:32 +00:00
Tatiana Savina
14fdb261c9 add ovc to img (#20192) 2023-10-02 15:58:29 +02:00
Tatiana Savina
128ee0b04f [DOCS] Fix conversion docs comments (#20144)
* fix comments

* more fixes

* fix missing part
2023-10-02 11:32:30 +02:00
Sebastian Golebiewski
6e4f73d0e4 VCPKG docs for dynamic OpenVINO build (#20140)
Porting:
https://github.com/openvinotoolkit/openvino/pull/20127
2023-09-29 10:11:08 +04:00
Sebastian Golebiewski
a86958a867 Fix Issue 20097 - providing an easy to read Cmake command (#20131)
Porting: https://github.com/openvinotoolkit/openvino/pull/20126
2023-09-28 18:15:30 +04:00
Ilya Lavrenov
de5932460c Fixed NCC style check (#20121) (#20123) 2023-09-28 16:08:04 +04:00
bstankix
fd4b0928e5 [DOCS] Bugfix coveo sa-search url (#20055) 2023-09-26 15:16:08 +02:00
jmacekx
83d9131aba [DOCS] use etree to fix docs generated by doxygen (#20011) 2023-09-25 10:39:26 +02:00
Ilya Lavrenov
3aa125cb6c [GPU] Fixed static init order for serialization (#19768) (#19795)
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
2023-09-22 15:40:38 +04:00
Alexander Kozlov
59338fa758 Updated model compression README (#19967) 2023-09-20 11:04:20 +02:00
Ilya Lavrenov
b2217fdafd Fixed compilation on macOS 14 with new core development tools (#19947) 2023-09-19 14:21:07 +04:00
Artyom Anokhov
25e33af382 configurations-for-intel-gpu: Updated info for ubuntu20 with the minimum version of GPU driver used in validation (#19824) 2023-09-19 00:37:18 +04:00
Tatiana Savina
11b9ccb263 [DOCS] OVC docs adjustments (#19918) 2023-09-18 16:43:35 +02:00
Karol Blaszczak
f12bd35e6f [DOCS] minor post release tweaks (#19916) 2023-09-18 16:40:31 +02:00
Ilya Lavrenov
cb65668b8e Updated release folder link from 2023.1.0 to 2023.1 (#19905) 2023-09-18 11:18:30 +02:00
Ilya Lavrenov
016340fcff [DOCS] updated archives for 2023.1 (#19896) 2023-09-18 10:04:09 +02:00
Maciej Smyk
2d97a5d59c [DOCS] Notebooks iframe update for 23.1 2023-09-15 15:58:27 +02:00
Karol Blaszczak
ce69d9709a [DOCS] benchmark update 23.1 (#19868) 2023-09-15 11:50:13 +02:00
Karol Blaszczak
2933ad5a13 [DOCS] troubleshooting article port
port: https://github.com/openvinotoolkit/openvino/pull/19855
2023-09-15 08:08:19 +02:00
Maciej Smyk
9fa65836f0 [DOCS] Notebooks Tutorials Page Update for 23.1 (#19854) 2023-09-14 17:52:34 +02:00
Tatiana Savina
fa14ae0a56 [DOCS] Optimization images change (#19849)
* change images

* change workflow

* case and description change
2023-09-14 17:10:01 +02:00
Maciej Smyk
5eaeb08c63 [DOCS] Notebooks update for 23.1 (#19844)
* notebooks-update

* notebooks-update

* fix

* Update 121-convert-to-openvino-with-output.rst

* Update 121-convert-to-openvino-with-output.rst

* fix

* table of content fix

* fix

* fix

* fix

* fix

* Update tutorials.md

* fix

* fix

* Update 227-whisper-subtitles-generation-with-output.rst
2023-09-14 14:33:19 +02:00
Tatiana Savina
3cafb2e1fa [DOCS] release adjustments pass 3 - conversion 2023-09-14 14:29:32 +02:00
Przemyslaw Wysocki
959b4438a1 Remove upper bound (#19803) 2023-09-14 11:47:19 +00:00
Alexander Kozlov
a805c1e028 Introduce weight compression doc (#19680)
* Draft of weight compression docs

* Fixed typos

* Fixed typos

* Fixed typos

* Fixed build

* Update docs/optimization_guide/nncf/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/optimization_guide/nncf/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/optimization_guide/nncf/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/optimization_guide/nncf/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update weight_compression.md

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-09-14 13:13:03 +02:00
bstankix
94640fe583 [DOCS] Fix version number (#19816) 2023-09-13 13:30:15 +00:00
Tatiana Savina
cd9c31cb07 [DOCS] release adjustments pass 2 2023-09-13 11:54:24 +02:00
bstankix
7f5f63db23 [DOCS] Add units to benchmark graphs (#19800) 2023-09-13 09:39:54 +00:00
Karol Blaszczak
ea9fba4d49 [DOCS] legacy adjustments pass 1 (#19792) 2023-09-13 11:20:27 +02:00
Karol Blaszczak
c99375e10d [DOCS] OVC/convert_model Documentation port (#19555) (#19776)
port: #19555
2023-09-12 13:47:55 +02:00
Ilya Lavrenov
c7aa3ae808 Resolve ARM CPU plugin illegal instruction on older Linux systems (like Ubuntu 18.04) (#19717) (#19753) 2023-09-12 12:31:34 +04:00
Ilya Lavrenov
51fd9a176d Removed CMAKE_INSTALL_LIBDIR from oneDNN GPU configuration (#19716) 2023-09-12 12:10:42 +04:00
Bartlomiej Bielawa
87dab9f973 [DOCS] Modify dropdowns' css port (#19757) 2023-09-11 17:07:45 +02:00
Maciej Smyk
ecc3abb6cd [DOCS] Update of model_conversion_diagram.svg for 23.1 (#19739)
Port from #19737
Update of model_conversion_diagram.svg according to changes in #19555
2023-09-11 16:59:58 +02:00
bstankix
df0e500562 [DOCS] Port remove index for notebooks from nightly (#19744) 2023-09-11 13:42:37 +02:00
Karol Blaszczak
38de95d011 [DOCS] banner what's new text (#19736)
port: https://github.com/openvinotoolkit/openvino/pull/19730
2023-09-11 12:16:21 +02:00
Karol Blaszczak
862a3392cf [DOCS] Add selector tool 2023.1 port (#19712)
port: #19710
port done manually due to technical issues with cherrypicking
2023-09-11 12:15:09 +02:00
Karol Blaszczak
79aefc49af [DOCS] feature transition section (#19506) (#19732)
port: https://github.com/openvinotoolkit/openvino/pull/19506
2023-09-11 12:14:05 +02:00
Bartlomiej Bielawa
e0391a5855 [DOCS] Move sidebar_nav's arrows to the left (#19734) 2023-09-11 11:49:20 +02:00
Maciej Smyk
93168eebaa Update installing-openvino-pip.md (#19727) 2023-09-11 11:01:41 +02:00
Ilya Lavrenov
7a907dbe97 Try to use conan.lock file (#19709) (#19713) 2023-09-11 11:56:42 +04:00
Ilya Lavrenov
0380d76fb7 Added cmp0091 cmake policy to oneDNN GPU build (#19715) 2023-09-11 11:56:02 +04:00
Ilya Lavrenov
7a9a9c4cc2 Fixed compilation with C++17 on Windows (#19707) 2023-09-09 01:53:28 +04:00
Ilya Lavrenov
994ed2fe93 Fixed build with oneDNN GPU in some Conan scenarios (#19668) 2023-09-08 23:14:25 +02:00
bstankix
55ff188007 [DOCS] Port coveo search engine (#19704) 2023-09-08 13:58:38 +00:00
Maciej Smyk
86f9db3aad img-fix (#19700) 2023-09-08 15:46:54 +02:00
Przemyslaw Wysocki
95d863b06d Add upper bound for setuptools (#19672) (#19698) 2023-09-08 17:01:44 +04:00
Maciej Smyk
6d9ead34fc [DOCS] ShapeOf-3 & Supported Model Formats fix for 23.1 (#19695)
* fix

* Update supported_model_formats.md
2023-09-08 13:33:41 +02:00
Maciej Smyk
1448150a52 Remove IR version and nGraph name from the Opset doc (#19615) 2023-09-07 09:16:57 +02:00
Karol Blaszczak
91fd9fb416 [DOCS] Installation guide restructuring 23.1 port (#19576) 2023-09-06 17:47:09 +02:00
Maciej Smyk
08b092e542 [DOCS] contributing guidelines (#19613) 2023-09-06 16:18:53 +02:00
Ilya Lavrenov
99e872ddd7 Added tflite to 'predefined_frontends' list (#19599) (#19640) 2023-09-06 13:43:31 +04:00
Maciej Smyk
d6bd3e36f1 update (#19620) 2023-09-06 08:59:08 +02:00
Ilya Lavrenov
249ea638d1 Fixed CPU plugin compilation (#19628) 2023-09-06 10:48:58 +04:00
Maciej Smyk
54e50754d8 [DOCS] Fixing Optimize Preprocessing in notebooks 120 and 230 for 23.1 2023-09-05 18:12:40 +02:00
Maciej Smyk
8a10efef70 [DOCS] Extend sphinx_sitemap to add custom metadata for 23.1 2023-09-05 18:10:04 +02:00
Ilya Lavrenov
58ef070e02 Unlock custom creation of PLATFORM_TAG (#19594) 2023-09-05 12:13:48 +04:00
Maxim Vafin
5eb17273d0 [DOCS][PT FE] Update pytorch conversion docs (#19396) 2023-09-05 07:11:31 +02:00
Tatiana Savina
47b736f63e update links (#19563) 2023-09-04 15:04:36 +02:00
Sebastian Golebiewski
e51cac60a2 Adding Quantizing with Accuracy Control using NNCF notebook (#19587) 2023-09-04 14:55:58 +02:00
Maciej Smyk
d396dc06b8 [DOCS] 23.0 to 23.1 link update for 23.1 (#19586)
* 2023.1 link fix

* 2023.1 link fix

* 2023.1 link fix

* 2023.1 link fix
2023-09-04 14:06:30 +02:00
Maciej Smyk
192a01db8c [DOCS] Fix for Install from Docker Image for 23.1 (#19580)
* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-09-04 11:16:19 +02:00
Ilya Lavrenov
d6d27b5d0d A set of fixes for Conan C++ package manager (#19553) 2023-09-04 11:32:27 +04:00
Ilya Lavrenov
f4183c4be5 Fixed static build from build tree (#19565) 2023-09-04 11:31:51 +04:00
Karol Blaszczak
f1a956e539 [DOCS] pytorch usage adjustment - model formats 23.1 (#19561) 2023-09-04 08:41:14 +02:00
Maciej Smyk
13e3f9921f [DOCS] Torch.compile() documentation for 23.1 (#19540)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-04 08:38:29 +02:00
Przemyslaw Wysocki
e701484571 [PyOV] Cython bug cleanup (#19546)
* Revert "Bump cython in cmake (#19473)"

This reverts commit 35a1840b4b.

* Disable cmake check

* Robust detection of Cython version (#19537)

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-02 21:46:14 +04:00
Karol Blaszczak
7f1c7c79c8 [DOCS] adjustment to supported devices port 23.1
adjustments will continue in following PRs
2023-09-01 10:45:03 +02:00
Ilya Lavrenov
68ef953c34 Aligned protobuf version in conanfile.txt with onnx recipe (#19526) 2023-09-01 10:35:20 +04:00
Anastasia Kuporosova
e2534eb9d6 Akup/cherry pick python snippets (#19480)
* first snippet

* part1

* update model state snippet

* add temp dir

* CPU snippets update (#134)

* snippets CPU 1/6

* snippets CPU 2/6

* snippets CPU 3/6

* snippets CPU 4/6

* snippets CPU 5/6

* snippets CPU 6/6

* make  module TODO: REMEMBER ABOUT EXPORTING PYTONPATH ON CIs ETC

* Add static model creation in snippets for CPU

* export_comp_model done

* leftovers

* apply comments

* apply comments -- properties

* small fixes

* add serialize

* rempve debug info

* return IENetwork instead of Function

* apply comments

* revert precision change in common snippets

* update opset

* [PyOV] Edit docs for the rest of plugins (#136)

* modify main.py

* GNA snippets

* GPU snippets

* AUTO snippets

* MULTI snippets

* HETERO snippets

* Added properties

* update gna

* more samples

* Update docs/OV_Runtime_UG/model_state_intro.md

* Update docs/OV_Runtime_UG/model_state_intro.md

---------

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-08-30 14:32:53 +02:00
Sebastian Golebiewski
d7d8660bda add-253 (#19501) 2023-08-30 13:46:31 +02:00
Sebastian Golebiewski
ca98745ac5 improve-snippets (#19497)
Porting: https://github.com/openvinotoolkit/openvino/pull/19479
2023-08-30 12:50:19 +02:00
Karol Blaszczak
8f721f971c [DOCS] including NPU documents (#19340) (#19494)
port: #19340
2023-08-30 11:51:27 +02:00
Aleksandr Voron
1d39196baf [CPU][ARM] Fix inference precision for behaviour tests (#19484) 2023-08-30 09:38:54 +04:00
Przemyslaw Wysocki
35a1840b4b Bump cython in cmake (#19473)
* Bump cython in cmake

* Remove the check altogether

* Debug fix

* Updtae docs req

* Add separate cython installation

* Correct python3

* debug

* Add python3-venv package

* Remove source

* Debug

* debug

* pip install flag

* Debug

* Debug

* debug

* debug

* Debug

---------

Co-authored-by: Artyom Anokhov <artyom.anokhov@intel.com>
2023-08-29 16:57:02 +02:00
Anastasia Kuporosova
f9c0e9690a Akup/cherry pick samples namespace update (#19478)
* Fix samples debug

* Fix linter

* Fix speech sample

---------

Co-authored-by: p-wysocki <przemyslaw.wysocki@intel.com>
2023-08-29 14:48:54 +02:00
Irina Efode
d43d5634b4 [CONFORMANCE] Fix for Eye-9 op in the Opset Conformance report (#19426)
Co-authored-by: Sofya Balandina <sofya.balandina@intel.com>
2023-08-29 13:49:58 +02:00
Zhang Yi
0255de9d9a [CPU]apply sdl requirement (#19441) 2023-08-29 12:49:15 +02:00
Pavel Esir
5e1e878ae7 [OVC] Fix output parsing (#19391) 2023-08-29 12:48:09 +02:00
Pavel Esir
0a055f738f [ovc] check if input is correct in split_inputs (#19424) 2023-08-29 12:47:49 +02:00
Anton Voronov
0040703b02 [CPU][OneDNN] fix zero pad perf issues (#19420) 2023-08-29 12:32:02 +02:00
Aleksandr Voron
cf316f12b6 revert default fp16 prec for arm (#19444) 2023-08-29 12:31:50 +02:00
Maciej Smyk
f128f9a7f3 [DOCS] Docker Guide Update for 23.1 (#19449)
* docker-update

* id fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-08-29 08:45:18 +02:00
Sebastian Golebiewski
78b0010656 update-notebooks (#19453)
Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
2023-08-28 14:39:00 +02:00
Wilson Seok
51e0c002ac add sqrt activation support in cpu_impl (#19422) 2023-08-25 12:10:12 -07:00
Min, Byungil
85ea15896b [GPU] Resolve accuracy issue from clamp fused prims (#19408)
+ Added condition when clamp activation is added to fused-ops for fp16 overflow

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-08-25 11:22:01 -07:00
Karol Blaszczak
394dd95b25 [DOCS] speech sample deprecation port 23.1 2023-08-25 12:41:47 +02:00
Maxim Vafin
4f84d752d4 [PT FE] Align bool types and same bit int types (#19400) 2023-08-25 13:52:23 +04:00
Gorokhov Dmitriy
8c9163930c [CPU] Fixed has_subnormals behavior for negative zero values (#19361) 2023-08-25 13:19:11 +04:00
Sofya Balandina
22045c6944 [conformance] Add shape mode and graph conv logic to test name (#19401) 2023-08-25 01:00:18 +02:00
Pavel Esir
7649942867 [tests] save into different file in compression_test.py (#19357)
* save into different file in compression_test.py

* reuse existing tmp_files mechanism
2023-08-24 18:02:15 +04:00
Artyom Anokhov
07f55354a6 [packaging] APT/YUM: Added few conflict version for dot-releases of 2023.0.X (#19336) 2023-08-24 15:56:45 +02:00
Maxim Vafin
60d5c9aedd [MO] Fix issue in nncf version verification (#19348)
* Return deleted nncf import

* Remove try-except, it hides exception

* Get version visout importing nncf module
2023-08-24 11:40:30 +02:00
Xuejun Zhai
daba3713c0 [Wrapper] Avoid creating new threads when converting legacy inference request to API 2.0 (#19376)
* Fix error in CVS-115961, caused by wrapper covert 1.0 req to 2.0 req create 2 more threads

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Eable the test of compareAutoBatchingToSingleBatch with batch size 4 & num req 64, after fix issue 115961

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-08-24 11:46:02 +04:00
Xiuchuan Zhai
f3e91c5473 disable useless and dangerous reorder from int to bf16 (#19369) 2023-08-24 15:11:15 +08:00
Ekaterina Aidova
b781b8f56c [PT FE]: allow example input list with one tensor (#19324) 2023-08-24 10:40:51 +04:00
Maxim Vafin
db49fa0255 [PT FE] Fix issue when FakeQuantize is not inserted after regular operations (#19315) 2023-08-24 09:58:08 +04:00
yanlan song
22db793c01 fix scan issue (#19321)
Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-08-24 04:10:47 +00:00
Sebastian Golebiewski
f491ff3b7e [DOCS] Updating MO documentation for 23.1 (#19372)
* restructure-mo-docs

* apply-commits-18214

Applying commits from:

https://github.com/openvinotoolkit/openvino/pull/18214

* update

* Apply suggestions from code review

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Apply suggestions from code review

* Update model_introduction.md

* Update docs/resources/tensorflow_frontend.md

* Create MO_Python_API.md

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* update

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-23 18:53:39 +02:00
Sebastian Golebiewski
0a0b690f57 CVS-113150 (#19371)
Porting:

https://github.com/openvinotoolkit/openvino/pull/18495
2023-08-23 18:27:55 +02:00
Maksim Kutakov
632f7e8356 [CPU] Fix deconvolution default primitive search algo (#19263)
* Fix deconvolution default primitive search

* Add dedicated test
2023-08-23 16:58:49 +02:00
Alexandra Sidorova
56f88804ee [Snippets] Fixed memory leak in LinearIR (#19317)
* [Snippets] Changed shared_ptr<Expression> in ExpressionPort to weak_ptr<Expression>

* [Snippets] Applied Ivan comment
2023-08-23 16:12:41 +02:00
Maksim Kutakov
952bd43844 [CPU] Fix convolution plus sum layout alignment (#19280) 2023-08-23 16:29:18 +04:00
Vladislav Golubev
0614cd5d88 [CPU] Optimal number of streams calculation moved after LPT (#19312) 2023-08-23 16:28:49 +04:00
Marcin Kusmierski
f0e7be1d2b [GNA] Fix memory leak in insert_copy_layer.cpp (#19300)
* Added cleanup transformation for inert copy layer transforamtions
2023-08-23 13:11:08 +01:00
Anton Voronov
1eacf7d70d [CPU][ONEDNN] jit_uni_dw_conv_row_f32: fixed post ops start index (#19352) 2023-08-23 15:52:28 +04:00
Ilya Churaev
b6ccd6cdff Disable proxy plugin for 2023.1 (#19081)
* Disable proxy plugin for 2023.1

* Do not run proxy tests
2023-08-23 15:41:51 +04:00
Sebastian Golebiewski
29ba2fea26 update-notebooks (#19338) 2023-08-22 15:37:41 +02:00
Sebastian Golebiewski
015b344d84 link-to-frontend (#19334) 2023-08-22 12:54:21 +02:00
Surya Siddharth Pemmaraju
e2d39fec68 Added openvino/torch folder for simplyfing the import (#19281) 2023-08-22 14:24:16 +04:00
Mustafa Cavus
b35ae397b7 TorchFX bugfix missing core object in get_device() (#19278) 2023-08-22 14:23:19 +04:00
Zhang Yi
d4d13663cc [CPU]Fix MLAS threadpool of MlasExecuteThreaded (#19294) 2023-08-22 12:52:22 +04:00
Zhang Yi
c70419a0a2 [CPU] Use parallel_nt_static for MLAS threading (#19301) 2023-08-22 11:58:13 +04:00
Roman Kazantsev
df2bcf7dbd [TF FE] Use regular Convolution in case dynamic input channels (#19253) (#19303)
* [TF FE] Use regular Convolution in case dynamic input channels

This solution is aligned with the legacy frontend but it has limitation.
This is a temporal solution until the core obtains ShapeOf evaluator.



* Remove unused variable from the test



* Fix unit-test

* Update mo unit-test

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-22 09:28:07 +02:00
Sebastian Golebiewski
4a84e84ece port-19307 (#19311)
Porting: #19307
Updating tutorials: adding table of contents and new notebooks.
2023-08-21 16:54:00 +02:00
Anton Voronov
298c6fb8d5 [CPU] Fixed is_on_constant_path() using in all places (#19256) 2023-08-21 18:41:31 +04:00
Wanglei Shen
48b70dc6a5 fix SDL issue (CID 1518459) (#19288) 2023-08-21 20:24:51 +08:00
Maxim Vafin
ec50afd22b [PT FE] Support non boolean inputs for __or__ and __and__ operations (#19272)
Add test for __or__
2023-08-21 13:13:25 +02:00
Wanglei Shen
cd8bb9fb88 fix SDL issue (CID 1518457) for 2023.1 branch (#19290)
* fix SDL issue (CID 1518457) for 2023.1 branch

* update for comments

* update for comments
2023-08-21 17:43:03 +08:00
Georgy Krivoruchko
8a0b844750 [ONNX] Fixed issue with missing sort when wstring path (#19250) (#19258)
* Fixed issue with missing sort when wstring path

* Fixed CI linux builds
2023-08-19 08:47:10 +04:00
Vitaliy Urusovskij
4afb59d2ab Fix uninit members in default GroupNormalization() (#19244) (#19260) 2023-08-18 14:03:24 +00:00
Alina Kladieva
61bde3b56d [ci/azure] Use 2023/1 ref (#19249) 2023-08-17 16:36:14 +00:00
1127 changed files with 54576 additions and 16294 deletions


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/1
variables:
- group: github
@@ -365,9 +365,6 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_inference_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceUnit.xml
displayName: 'Inference Unit Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/1
variables:
- group: github


@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/1
variables:
- group: github


@@ -33,20 +33,17 @@ pr:
resources:
repositories:
- repository: openvino
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2023/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/1
jobs:
- job: CUDAPlugin_Lin


@@ -34,7 +34,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/1
jobs:
- job: Lin_Debian
@@ -278,12 +278,6 @@ jobs:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Core UT'
- script: |
$(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Proxy Tests'
- script: |
$(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
env:


@@ -4,7 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: master
# ref: releases/2023/1
jobs:
- job: Lin_lohika


@@ -35,13 +35,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/1
variables:
- group: github
@@ -185,10 +185,6 @@ jobs:
displayName: 'OV Core UT'
enabled: 'false'
- script: $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
enabled: 'false'
- script: $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
enabled: 'false'


@@ -32,13 +32,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
ref: releases/2023/1
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
ref: releases/2023/1
jobs:
- job: Win
@@ -261,9 +261,6 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_inference_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceUnit.xml
displayName: 'Inference Unit Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'


@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2023/1
variables:
- group: github


@@ -85,8 +85,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
sudo apt --assume-yes install clang-14 libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt


@@ -101,9 +101,6 @@ jobs:
- name: Run OV core unit tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_core_unit_tests
- name: Run OV Proxy plugin tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_proxy_plugin_tests
- name: Run OV Hetero Func tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_hetero_func_tests


@@ -451,11 +451,6 @@ jobs:
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_auto_batch_func_tests --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_batch_func_tests.xml
- name: Proxy Plugin Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Func Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh


@@ -640,11 +640,6 @@ jobs:
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_auto_batch_func_tests --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_batch_func_tests.xml
- name: Proxy Plugin Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Func Tests
shell: cmd
run: |


@@ -47,6 +47,7 @@ message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CPACK_GENERATOR ....................... " ${CPACK_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_CXX_COMPILER_ID ................. " ${CMAKE_CXX_COMPILER_ID})
message (STATUS "CMAKE_CXX_STANDARD .................... " ${CMAKE_CXX_STANDARD})
if(OV_GENERATOR_MULTI_CONFIG)
string(REPLACE ";" " " config_types "${CMAKE_CONFIGURATION_TYPES}")
message (STATUS "CMAKE_CONFIGURATION_TYPES ............. " ${config_types})


@@ -1,53 +1,88 @@
# How to contribute to the OpenVINO repository
# Contributing to OpenVINO
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, follow best practices for pull requests, and test your changes with our established checks.
## How to contribute to the OpenVINO project
OpenVINO™ is always looking for opportunities to improve and your contributions
play a big role in this process. There are several ways you can make the
product better:
## Before you start contributing you should
### Provide Feedback
- Make sure you agree to contribute your code under [OpenVINO™ (Apache 2.0) license](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE).
- Decide what youre going to contribute. If you are not sure what you want to work on, check out [Contributions Welcome](https://github.com/openvinotoolkit/openvino/issues/17502). See if there isn't anyone already working on the subject you choose, in which case you may still contribute, providing support and suggestions for the given issue or pull request.
- If you are going to fix a bug, check if it still exists. You can do it by building the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases like 2020.2, for example (see more details about our [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
* **Report bugs / issues**
If you experience faulty behavior in OpenVINO or its components, you can
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
in the GitHub issue tracker.
* **Propose new features / improvements**
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
If your idea is already well defined, you can also create a
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
If your points are especially well aligned with the product vision, they will be included in the
[development roadmap](./ROADMAP.md).
User feedback is crucial for OpenVINO development and even if your input is not immediately prioritized,
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
### Contribute Code Changes
* **Fix Bugs or Develop New Features**
If you want to help improve OpenVINO, choose one of the issues reported in
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
If the feature you want to develop is more complex or not well defined by the reporter,
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
with OpenVINO developers first. Before creating a new PR, check whether somebody is already
working on it; if so, you may still help, once you have aligned with the other developer.
Importantly, always check that the change has not already been implemented before you start working on it!
You can build OpenVINO using the latest master branch and make sure that it still needs your
changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
* **Develop a New Device Plugin**
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
### Improve documentation
* **OpenVINO developer documentation** is contained entirely in this repository, under the
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
* **User documentation** is built from several sources and published at
[docs.openvino.ai](https://docs.openvino.ai), which is the recommended place for reading
these documents. Use the files maintained in this repository only for editing purposes.
* The easiest way to help with documentation is to review it and provide feedback on the
existing articles. Whether you notice a mistake, see the possibility of improving the text,
or think more information should be added, you can reach out to any of the documentation
contributors to discuss the potential changes.
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
## "Fork & Pull Request model" for code contribution
### Promote and Support OpenVINO
### The instruction in brief
* **Popularize OpenVINO**
Articles, tutorials, blog posts, demos, videos, and any other involvement
in the OpenVINO community are always welcome contributions. If you discuss
or present OpenVINO on various social platforms, you are raising awareness
of the product among A.I. enthusiasts and enabling other people to discover
the toolkit. Feel free to reach out to OpenVINO developers if you need help
with making such community-based content.
- Register at GitHub. Create your fork of the OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in Git configuration according to the GitHub account (see [First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It may be a bugfix or an entirely new piece of code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (give it a meaningful name) from the base branch of your choice.
- Modify / add the code, following our [Coding Style Guide](./docs/dev/coding_style.md).
- If you want to add a new sample, please have a look at the [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
- If you want to contribute to the documentation and want to add a new guide, follow that instruction [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
- Run testsuite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- When you are done, make sure that your branch is up to date with latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`). If so, push your branch to your GitHub fork and create a pull request from your branch to the base branch (see [using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- One PR one issue.
- Build perfectly on your local system.
- Choose the right base branch, based on our [Branch Guidelines](https://github.com/openvinotoolkit/openvino/wiki/Branches).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Document your contribution, if you decide it may benefit OpenVINO users. You may do it yourself by editing the files in the "docs" directory or contact someone working with documentation to provide them with the right information.
- Cover your changes with test.
- Add the license statement at the top of new files [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add proper information to the PR: a meaningful title, the reason why you made the commit, and a link to the issue page, if it exists.
- Remove changes unrelated to the PR.
- If it is still WIP and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
* **Help Other Community Members**
If you are an experienced OpenVINO user and want to help, you can always
share your expertise with the community. Check GitHub Discussions and
Issues to see if you can help someone.
## Testing and merging pull requests
## License
Your pull request will be automatically tested by OpenVINO™'s precommit (testing statuses are automatically reported as "green" or "red" circles in precommit steps on the PR page). If any builders fail, you need to fix the issues before the PR can be merged. If you push any changes to your branch on GitHub the tests will re-run automatically. No need to close pull request and open a new one!
When an assigned reviewer accepts the pull request and the pre-commit is "green", the review status is set to "Approved", which informs OpenVINO™ maintainers that they can merge your pull request.
By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.

CONTRIBUTING_DOCS.md (new file, 111 lines added)

@@ -0,0 +1,111 @@
# OpenVINO Documentation Guide
## Basic article structure
OpenVINO documentation is built with Sphinx from reStructuredText sources,
which means the following basic formatting rules apply:
### White Spaces
OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.
### Headings (including the article title)
They are made by "underscoring" text with punctuation marks (at least as
many marks as letters in the underscored header). We use the following convention:
```
H1
====================
H2
####################
H3
++++++++++++++++++++
H4
--------------------
H5
....................
```
### Line length
In programming, a limit of 80 characters per line is a common best known method (BKM),
and it applies fairly well to reading natural language, too. For this reason, we aim at lines
around 70 to 100 characters long. The limit is not a strict rule but rather a guideline to
follow in most cases. The breaks will not translate to html, and rightly so, but will
make reading and editing documents in GitHub or an editor much easier.
### Tables
Tables may be difficult to implement well on websites. For example, longer portions
of text, like descriptions, may make them difficult to read (e.g. due to improper cell
widths or heights). Complex tables may also be difficult to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:
* For very big and complex data sets: use a list instead of a table or remove
the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
use the reStructuredText list-table format (see the sketch after this list).
* For medium and small tables, use the reStructuredText grid or simple table formats.
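For reference, a minimal list-table sketch could look like the following (the caption, columns, and rows are made-up example data):
```
.. list-table:: Example table
   :header-rows: 1

   * - Option
     - Description
   * - ``--input``
     - Path to the input model.
   * - ``--output``
     - Path for the converted result.
```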
## Cross-linking
There are several directives Sphinx uses for linking, each has its purpose and format.
Follow these guidelines for consistent results:
* Avoid absolute references to internal documents as much as possible (link to source, not html).
* Note that Sphinx uses the "back-tick" character and not the "inverted comma" => ` vs. '
* When using a file path that starts at the current directory, put "./" at its beginning.
* Always add a space before the opening angle bracket ("<") for target files.
Use the following formatting for different links:
* link to an external page / file
* `` `text <url>`__ ``
* use a double underscore for consistency
* link to an internal documentation page / file
* `` :doc:`a docs page <relative file path>` ``
* Link to an rst or md file within our documentation, so that it renders properly in html
* link to a header on the same page
* `` `a header in the same article <this-is-section-header-title>`__ ``
* anchors are created automatically for all existing headers
* such an anchor looks like the header, with minor adjustments:
* all letters are lower case,
* remove all special glyphs, like brackets,
* replace spaces with hyphens
* Create an anchor in an article
* `` .. _anchor-in-the-target-article: ``
* put it before the header to which you want to link
* See the rules for naming anchors / labels at the bottom of this article
* link to an anchor on a different page in our documentation
* `` :ref:`the created anchor <anchor-in-the-target-article>` ``
* link to the anchor using just its name
* anchors / labels
Sphinx uses labels to create HTML anchors, which can be linked to from anywhere in the documentation.
Although they may be put at the top of any article to make linking to it very easy, we do not use
this approach. Every label definition starts with an underscore, but the underscore is not used in links.
Most importantly, every label needs to be globally unique, which means it is always a good
practice to start a label with a clear identifier of the article it resides in.
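Pulling these rules together, a short sketch of each link form could look as follows (the file path, label, and URL are hypothetical):
```
`OpenVINO on GitHub <https://github.com/openvinotoolkit/openvino>`__

:doc:`a docs page <./some_directory/some_article>`

.. _sample-article-anchor:

:ref:`the created anchor <sample-article-anchor>`
```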

CONTRIBUTING_PR.md (new file, 63 lines added)

@@ -0,0 +1,63 @@
# How to Prepare a Good PR
OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.
## General Rules of a Good Pull Request
* Create your own fork of the repository and use it to create PRs.
Avoid creating change branches in the main repository.
* Choose a proper branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
It helps to track changes later. If your changes cover a particular component,
you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
OpenVINO, provide the information in proper articles. You can do it yourself,
or contact one of OpenVINO documentation contributors to work together on
developing the right content.
* For Work In Progress, or checking test results early, use a Draft PR.
## Ensure Change Quality
Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch will
automatically trigger the checks, so you don't need to recreate the PR; just wait
for the updated results.
Regardless of the automated tests, you should ensure the quality of your changes:
* Test your changes locally:
* Make sure to double-check your code.
* Run tests locally to identify and fix potential issues (execute test binaries
from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``)
* Before creating a PR, make sure that your branch is up to date with the latest
state of the branch you want to contribute to (e.g. ``git fetch upstream && git
merge upstream/master``; see the combined sketch after this list).
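Put together, the sync-and-test routine above might look like this on the command line (the ``upstream`` remote is assumed to point at the main repository; the test binary is the example given earlier in this guide):
```
# bring your branch up to date with the base branch
git fetch upstream
git merge upstream/master

# run a test binary from the build artifacts
./bin/intel64/Release/ieFuncTests
```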
## Branching Policy
* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``.
* The final release each year is considered a Long Term Support version,
which means it remains active.
* Contributions are accepted only into active branches, which are:
* the "master" branch for future releases,
* the most recently published version for fixes,
* LTS versions (for two years from their release dates).
## Need Additional Help? Check these Articles
* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, please have a look at the Guide for contributing
to C++/C/Python IE samples and add the license statement at the top of new files
(see the existing C++ and Python samples for reference).


@@ -68,24 +68,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
<td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">ARM CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac with Apple silicon
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
@@ -103,22 +103,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -155,10 +155,9 @@ The list of OpenVINO tutorials:
## System requirements
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)
- [Linux](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_macos_header.html)
## How to build
@@ -196,7 +195,7 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.1/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples


@@ -284,8 +284,7 @@ macro(ov_add_frontend)
if(OV_FRONTEND_LINKABLE_FRONTEND)
set(export_set EXPORT OpenVINOTargets)
set(archive_dest ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR}
COMPONENT ${lib_component})
set(archive_dest ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${lib_component})
set(namelink NAMELINK_COMPONENT ${dev_component})
else()
set(namelink NAMELINK_SKIP)
@@ -295,6 +294,12 @@ macro(ov_add_frontend)
${archive_dest}
LIBRARY DESTINATION ${OV_CPACK_LIBRARYDIR} COMPONENT ${lib_component}
${namelink})
# export to build tree
if(OV_FRONTEND_LINKABLE_FRONTEND)
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
else()
ov_install_static_lib(${TARGET_NAME} ${OV_CPACK_COMP_CORE})
endif()
@@ -306,9 +311,8 @@ macro(ov_add_frontend)
COMPONENT ${dev_component}
FILES_MATCHING PATTERN "*.hpp")
# public target name
set_target_properties(${TARGET_NAME} PROPERTIES EXPORT_NAME frontend::${OV_FRONTEND_NAME})
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
else()
# skipped frontend has to be installed in static libraries case
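For context, the export(TARGETS ... APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake") calls added in this hunk register the frontend targets in the build tree, so a project can link against OpenVINO without installing it first. A hypothetical out-of-tree consumer, assuming the build tree also generates an OpenVINOConfig.cmake next to the exported targets file, might look like this:
```
# Hypothetical consumer project linking against an OpenVINO build tree.
cmake_minimum_required(VERSION 3.13)
project(ov_consumer CXX)

# Point find_package() at the build directory instead of an install prefix.
set(OpenVINO_DIR "/path/to/openvino/build" CACHE PATH "OpenVINO build tree")
find_package(OpenVINO REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE openvino::runtime)
```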


@@ -84,7 +84,11 @@ macro(ov_define_component_include_rules)
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL})
# licensing
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
if(CPACK_GENERATOR STREQUAL "CONAN")
unset(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# samples
set(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})


@@ -24,6 +24,10 @@ macro(ov_install_static_lib target comp)
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})
# export to local tree to build against static build tree
export(TARGETS ${target} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
endmacro()


@@ -358,7 +358,7 @@ function(ov_generate_plugins_hpp)
"${plugins_hpp_in}"
"${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
COMMENT
"Generate ov_plugins.hpp for build"
"Generate ov_plugins.hpp"
VERBATIM)
# for some reason dependency on source files does not work


@@ -5,7 +5,7 @@
#
# Common cmake options
#
ov_option (ENABLE_PROXY "Proxy plugin for OpenVINO Runtime" ON)
ov_option (ENABLE_PROXY "Proxy plugin for OpenVINO Runtime" OFF)
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64 OR AARCH64 OR ARM" OFF)

View File

@@ -90,7 +90,7 @@ macro(ov_cpack_settings)
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2023.0.0 2023.0.1 2023.0.2 2023.0.3
)
#

View File

@@ -76,7 +76,7 @@ macro(ov_cpack_settings)
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2023.0.0 2023.0.1 2023.0.2 2023.0.3
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")

conan.lock Normal file
View File

@@ -0,0 +1,36 @@
{
"version": "0.5",
"requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
"snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
"pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
"pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
"opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
"opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
"onnx/1.13.1#f11071c8aba52731a5205b028945acbb%1693130310.715",
"onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
"nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
"ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
"hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
"gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"ade/0.1.2a#b569ff943843abd004e65536e265a445%1688125447.482"
],
"build_requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
"patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
"m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
"libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
"gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
"automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
"autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
],
"python_requires": []
}

View File

@@ -2,12 +2,10 @@
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.9
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/2023.04.17
opencl-clhpp-headers/2023.04.17
opencl-headers/2023.04.17
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2

View File

@@ -76,7 +76,7 @@ function(build_docs)
# build with openvino notebooks
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
list(PREPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()

View File

@@ -0,0 +1,171 @@
# Optimize and Deploy Generative AI Models {#gen_ai_guide}
@sphinxdirective
Generative AI is an innovative technique that creates new data, such as text, images, video, or audio, using neural networks. OpenVINO accelerates Generative AI use cases as they mostly rely on model inference, allowing for faster development and better performance. When it comes to generative models, OpenVINO supports:
* Conversion, optimization, and inference for text, image, and audio generative models, such as Llama 2, MPT, OPT, Stable Diffusion, and Stable Diffusion XL.
* INT8 weight compression for text generation models.
* Storage format reduction (FP16 precision for non-compressed models and INT8 for compressed models).
* Inference on CPU and GPU platforms, including integrated Intel® Processor Graphics, discrete Intel® Arc™ A-Series Graphics, and discrete Intel® Data Center GPU Flex Series.
OpenVINO offers two main paths for Generative AI use cases:
* Using OpenVINO as a backend for Hugging Face frameworks (transformers, diffusers) through the `Optimum Intel <https://huggingface.co/docs/optimum/intel/inference>`__ extension.
* Using OpenVINO native APIs (Python and C++) with custom pipeline code.
In both cases, the OpenVINO runtime and tools are used; the difference is mostly in the preferred API and the final solution's footprint. Native APIs enable the use of generative models in C++ applications, ensure minimal runtime dependencies, and minimize application footprint. The native API approach requires the implementation of glue code (a generation loop, text tokenization, or scheduler functions), which Hugging Face libraries otherwise hide for a better developer experience.
It is recommended to start with Hugging Face frameworks. Experiment with different models and scenarios to find your fit, and then consider converting to OpenVINO native APIs based on your specific requirements.
Optimum Intel provides interfaces that enable model optimization (weight compression) using the `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__, as well as export of models to the OpenVINO model format for use in native API applications.
The table below summarizes the differences between the Hugging Face and native API approaches.
.. list-table::
:widths: 20 25 55
:header-rows: 1
* -
- Hugging Face through OpenVINO
- OpenVINO Native API
* - Model support
- Broad set of Models
- Broad set of Models
* - APIs
- Python (Hugging Face API)
- Python, C++ (OpenVINO API)
* - Model Format
- Source Framework / OpenVINO
- OpenVINO
* - Inference code
- Hugging Face based
- Custom inference pipelines
* - Additional dependencies
- Many Hugging Face dependencies
- Lightweight (e.g. numpy, etc.)
* - Application footprint
- Large
- Small
* - Pre/post-processing and glue code
- Available at Hugging Face out-of-the-box
- OpenVINO samples and notebooks
* - Performance
- Good
- Best
Running Generative AI Models using Hugging Face Optimum Intel
##############################################################
Prerequisites
+++++++++++++++++++++++++++
* Create a Python environment.
* Install Optimum Intel:
.. code-block:: console
pip install optimum[openvino,nncf]
To start using OpenVINO as a backend for Hugging Face, change the original Hugging Face code in two places:
.. code-block:: diff
-from transformers import AutoModelForCausalLM
+from optimum.intel import OVModelForCausalLM
model_id = "meta-llama/Llama-2-7b-chat-hf"
-model = AutoModelForCausalLM.from_pretrained(model_id)
+model = OVModelForCausalLM.from_pretrained(model_id, export=True)
After that, you can call the ``save_pretrained()`` method to save the model to a folder in the OpenVINO Intermediate Representation format and use it later:
.. code-block:: python
model.save_pretrained(model_dir)
Alternatively, you can download and convert the model using the CLI interface: ``optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf llama_openvino``.
In this case, you can load the converted model in the OpenVINO representation directly from disk:
.. code-block:: python
model_id = "llama_openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
By default, inference runs on CPU. To select a different inference device, for example, GPU, add ``device="GPU"`` to the ``from_pretrained()`` call. To switch to a different device after the model has been loaded, use the ``.to()`` method. The device naming convention is the same as in the OpenVINO native API:
.. code-block:: python
model.to("GPU")
The Optimum-Intel API also provides out-of-the-box model optimization through weight compression using NNCF, which substantially reduces the model footprint and inference latency:
.. code-block:: python
model = OVModelForCausalLM.from_pretrained(model_id, export=True, load_in_8bit=True)
Weight compression is applied by default to models larger than one billion parameters and is also available in the CLI interface as the ``--int8`` option.
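For example, a hypothetical CLI invocation applying weight compression during export (the output directory name is arbitrary):
.. code-block:: console

   optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf --int8 llama_int8_openvino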
Below are some examples of using Optimum-Intel for model conversion and inference:
* `Stable Diffusion v2.1 using Optimum-Intel OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-optimum-demo.ipynb>`__
* `Image generation with Stable Diffusion XL and OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/248-stable-diffusion-xl/248-stable-diffusion-xl.ipynb>`__
* `Instruction following using Databricks Dolly 2.0 and OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/240-dolly-2-instruction-following/240-dolly-2-instruction-following.ipynb>`__
* `Create an LLM-powered Chatbot using OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/254-llm-chatbot/254-llm-chatbot.ipynb>`__
Working with Models Tuned with LoRA
++++++++++++++++++++++++++++++++++++
Low-rank Adaptation (LoRA) is a popular method for tuning Generative AI models to a downstream task or custom data. However, efficient deployment with the Hugging Face API requires some extra steps; namely, the trained adapters should be fused into the baseline model to avoid extra computation. This is how it can be done for Large Language Models (LLMs):
.. code-block:: python
model_id = "meta-llama/Llama-2-7b-chat-hf"
lora_adaptor = "./lora_adaptor"
model = AutoModelForCausalLM.from_pretrained(model_id, use_cache=True)
model = PeftModelForCausalLM.from_pretrained(model, lora_adaptor)
model.merge_and_unload()
model.get_base_model().save_pretrained("fused_lora_model")
Now the model can be converted to OpenVINO using Optimum Intel Python API or CLI interfaces mentioned above.
Running Generative AI Models using Native OpenVINO APIs
########################################################
To run Generative AI models using native OpenVINO APIs, you need to follow the regular **Convert -> Optimize -> Deploy** path with a few simplifications.
To convert a model from Hugging Face, you can use the Optimum-Intel export feature, which allows you to export the model in the OpenVINO format without invoking the conversion API and tools directly, as shown above. In this case, the conversion process is somewhat simplified. You can still use the regular conversion path if the model comes from outside of the Hugging Face ecosystem, i.e., in a source framework format (PyTorch, etc.).
Model optimization can be performed within Hugging Face or directly using NNCF as described in the :doc:`weight compression guide <weight_compression>`.
Inference code that uses the native API cannot benefit from Hugging Face pipelines. You need to write custom code or take it from the available examples. Below are some examples of popular Generative AI scenarios:
* In case of LLMs for text generation, you need to handle tokenization, the inference and token-selection loop, and de-tokenization. If token selection involves beam search, it also needs to be written (see the sketch after this list).
* For image generation models, you need to build a pipeline that includes several model inferences: inference for the source (e.g., text) encoder model, an inference loop for the diffusion process, and inference for the decoding part. Scheduler code is also required.
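As an illustration, here is a minimal sketch of a greedy text-generation loop built on the native Python API. It assumes a causal LM already exported to OpenVINO IR (the file path is hypothetical) with plain ``input_ids``/``attention_mask`` inputs; the exact input and output names depend on how the model was exported, and a production pipeline would also manage KV-cache inputs for performance:
.. code-block:: python

   import numpy as np
   import openvino as ov
   from transformers import AutoTokenizer

   core = ov.Core()
   # hypothetical IR exported earlier, e.g. with optimum-cli
   compiled_model = core.compile_model("llama_openvino/openvino_model.xml", "CPU")
   tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

   input_ids = tokenizer("What is OpenVINO?", return_tensors="np")["input_ids"]
   for _ in range(32):  # generate at most 32 new tokens
       outputs = compiled_model({
           "input_ids": input_ids,
           "attention_mask": np.ones_like(input_ids),
       })
       logits = outputs[0]                         # first output: next-token logits
       next_token = int(np.argmax(logits[0, -1]))  # greedy token selection
       if next_token == tokenizer.eos_token_id:
           break
       input_ids = np.concatenate([input_ids, [[next_token]]], axis=1)

   print(tokenizer.decode(input_ids[0], skip_special_tokens=True))

For beam search or sampling, the ``argmax`` step is replaced with the corresponding token-selection logic.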
To write such pipelines, you can follow the examples provided as part of OpenVINO:
* `llama2.openvino <https://github.com/OpenVINO-dev-contest/llama2.openvino>`__
* `LLM optimization by custom operation embedding for OpenVINO <https://github.com/luo-cheng2021/ov.cpu.llm.experimental>`__
* `C++ Implementation of Stable Diffusion <https://github.com/yangsu2022/OV_SD_CPP>`__
Additional Resources
############################
* `Optimum Intel documentation <https://huggingface.co/docs/optimum/intel/inference>`_
* :doc:`LLM Weight Compression <weight_compression>`
* `Neural Network Compression Framework <https://github.com/openvinotoolkit/nncf>`_
@endsphinxdirective

View File

@@ -14,7 +14,7 @@
Interactive Tutorials (Python) <tutorials>
Sample Applications (Python & C++) <openvino_docs_OV_UG_Samples_Overview>
OpenVINO API 2.0 Transition <openvino_2_0_transition_guide>
Generative AI Optimization and Deployment <gen_ai_guide>
This section will help you get a hands-on experience with OpenVINO even if you are just starting

View File

@@ -3,60 +3,256 @@
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn how to convert and compile models from different frameworks or read them directly.
:description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
.. toctree::
:maxdepth: 1
:hidden:
Supported_Model_Formats
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
omz_tools_downloader
openvino_docs_OV_Converter_UG_Conversion_Options
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, `Torchvision models <https://pytorch.org/hub/>`__.
Every deep learning workflow begins with obtaining a model. You can choose to prepare
a custom one, use a ready-made solution and adjust it to your needs, or even download
and run a pre-trained network from an online database, such as
`TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__,
or `Torchvision models <https://pytorch.org/hub/>`__.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows converting them to its own `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ (`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__), providing a tool dedicated to this task.
If your selected model is in one of the :doc:`OpenVINO™ supported model formats <Supported_Model_Formats>`,
you can use it directly, without the need to save as OpenVINO IR
(`openvino.Model <api/ie_python_api/_autosummary/openvino.Model.html>`__ -
`ov.Model <api/ie_python_api/_autosummary/openvino.Model.html>`__).
For this purpose, you can use ``openvino.Core.read_model`` and ``openvino.Core.compile_model``
methods, so that conversion is performed automatically before inference, for
maximum convenience. Note that for PyTorch models, Python API
is the only conversion option. TensorFlow may present additional considerations; see
:doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.
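For example, a minimal sketch of this direct path (``model.onnx`` is a hypothetical file name):
.. code-block:: py

   import openvino as ov

   core = ov.Core()
   # conversion happens automatically if the file is not already OpenVINO IR
   model = core.read_model("model.onnx")
   compiled_model = core.compile_model(model, "CPU")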
There are several options to convert a model from the original framework to the OpenVINO model format (``ov.Model``).
The ``read_model()`` method reads a model from a file and produces ``ov.Model``. If the file is in one of the supported original framework file formats, it is converted automatically to OpenVINO Intermediate Representation. If the file is already in the OpenVINO IR format, it is read "as-is", without any conversion involved. ``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using the :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>`, which applies post-training quantization methods.
For better performance and more optimization options, OpenVINO also offers a conversion
API with two possible approaches: the Python API functions (``openvino.convert_model``
and ``openvino.save_model``) and the ``ovc`` command line tool, which are described in detail in this article.
Convert a model in Python
######################################
.. note::
Model conversion API prior to OpenVINO 2023.1 is considered deprecated.
Both existing and new projects are recommended to transition to the new
solutions, keeping in mind that they are not fully backwards compatible
with ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool.
For more details, see the :doc:`Model Conversion API Transition Guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`.
Model conversion API, specifically the ``mo.convert_model()`` method, converts a model from the original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application. In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
.. image:: _static/images/model_conversion_diagram.svg
Convert a Model in Python: ``convert_model``
##############################################
You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to an object of type ``openvino.Model``. The resulting ``openvino.Model`` can be compiled with ``openvino.compile_model`` and inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
.. tab-set::
.. tab-item:: Torchvision
.. code-block:: py
:force:
import openvino as ov
import torch
from torchvision.models import resnet50
model = resnet50(weights='DEFAULT')
# prepare input_data
input_data = torch.rand(1, 3, 224, 224)
ov_model = ov.convert_model(model, example_input=input_data)
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# run inference
result = compiled_model(input_data)
.. tab-item:: Hugging Face Transformers
.. code-block:: py
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
import openvino as ov
ov_model = ov.convert_model(model, example_input={**encoded_input})
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data using HF tokenizer or your own tokenizer
# encoded_input is reused here for simplicity
# run inference
result = compiled_model({**encoded_input})
.. tab-item:: Keras Applications
.. code-block:: py
import tensorflow as tf
import openvino as ov
tf_model = tf.keras.applications.ResNet50(weights="imagenet")
ov_model = ov.convert_model(tf_model)
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 224, 224, 3)
# run inference
result = compiled_model(input_data)
.. tab-item:: TensorFlow Hub
.. code-block:: py
import tensorflow as tf
import tensorflow_hub as hub
import openvino as ov
model = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
])
# Check model page for information about input shape: https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5
model.build([None, 224, 224, 3])
model.save('mobilenet_v1_100_224') # use a temporary directory
ov_model = ov.convert_model('mobilenet_v1_100_224')
###### Option 1: Save to OpenVINO IR:
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 224, 224, 3)
# run inference
result = compiled_model(input_data)
.. tab-item:: ONNX Model Hub
.. code-block:: py
import onnx
model = onnx.hub.load("resnet50")
onnx.save(model, 'resnet50.onnx') # use a temporary file for model
import openvino as ov
ov_model = ov.convert_model('resnet50.onnx')
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 3, 224, 224)
# run inference
result = compiled_model(input_data)
In Option 1, where the ``openvino.save_model`` function is used, an OpenVINO model is serialized in the file system as two files with ``.xml`` and ``.bin`` extensions. This pair of files is called the OpenVINO Intermediate Representation format (OpenVINO IR, or just IR) and is useful for efficient model deployment. OpenVINO IR can be loaded into another application for inference using the ``openvino.Core.read_model`` function. For more details, refer to the :doc:`OpenVINO™ Runtime documentation <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Option 2, where ``openvino.compile_model`` is used, provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your existing Python inference application. In this case, the converted model is not saved to IR. Instead, the model is compiled and used for inference within the same application.
Option 1 separates model conversion and model inference into two different applications. This approach is useful for deployment scenarios requiring fewer extra dependencies and faster model loading in the end inference application.
For example, converting a PyTorch model to OpenVINO usually demands the ``torch`` Python module and Python itself. This process can take extra time and memory. However, after the converted model is saved as OpenVINO IR with ``openvino.save_model``, it can be loaded in a separate application without requiring the ``torch`` dependency and the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example, in C++, and Python installation is not necessary for it to run.
Before saving the model to OpenVINO IR, consider applying :doc:`Post-training Optimization <ptq_introduction>` to enable more efficient inference and smaller model size.
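As a rough sketch of that step, assuming the ``nncf`` package is installed and continuing from the ``ov_model`` obtained in the examples above:
.. code-block:: py

   import numpy as np
   import nncf
   import openvino as ov

   # a dummy calibration set; in practice, use a slice of real validation data
   data_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]
   calibration_dataset = nncf.Dataset(data_items)

   # ov_model is the model converted in the examples above
   quantized_model = nncf.quantize(ov_model, calibration_dataset)
   ov.save_model(quantized_model, 'quantized_model.xml')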
The figure below illustrates the typical workflow for deploying a trained deep-learning model.
.. image:: ./_static/images/model_conversion_diagram.svg
:alt: model conversion diagram
Convert a model with ``mo`` command-line tool
#############################################
Convert a Model in CLI: ``ovc``
###############################
Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure as the ``mo.convert_model`` method.
Another option for model conversion is to use the ``ovc`` command-line tool, which stands for OpenVINO Model Converter. The tool combines the ``openvino.convert_model`` and ``openvino.save_model`` functionalities. It is convenient to use when the original model is ready for inference and is in one of the supported file formats: ONNX, TensorFlow, TensorFlow Lite, or PaddlePaddle. As a result, ``ovc`` produces an OpenVINO IR, consisting of ``.xml`` and ``.bin`` files, which needs to be read with the ``openvino.Core.read_model`` method. You can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
.. note::
PyTorch models cannot be converted with ``ovc``, use ``openvino.convert_model`` instead.
The results of both ``ovc`` and ``openvino.convert_model``/``openvino.save_model`` conversion methods are the same. You can choose either of them based on your convenience. Note that there should not be any differences in the results of model conversion if the same set of parameters is used and the model is saved into OpenVINO IR.
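For example, a minimal sketch of the CLI flow (``model.onnx`` is a hypothetical input file):
.. code-block:: sh

   # produces model.xml and model.bin (OpenVINO IR)
   ovc model.onnx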
The figure below illustrates the typical workflow for deploying a trained deep learning model:
.. image:: _static/images/BASIC_FLOW_MO_simplified.svg
where IR is a pair of files describing the model:
* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.
Model files (not Python objects) from ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate step for model conversion, that is ``mo.convert_model``. OpenVINO provides C++ and Python APIs for importing the models to OpenVINO Runtime directly by just calling the ``read_model`` method.
Additional Resources
####################
The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
The following articles describe in details how to obtain and prepare your model depending on the source model type:
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* :doc:`Convert different model formats to the ov.Model format <Supported_Model_Formats>`.
* :doc:`Review all available conversion parameters <openvino_docs_OV_Converter_UG_Conversion_Options>`.
To achieve the best model inference performance and a more compact OpenVINO IR representation, follow:
* :doc:`Post-training optimization <ptq_introduction>`
* :doc:`Model inference in OpenVINO Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
If you are using the legacy conversion API (``mo`` or ``openvino.tools.mo.convert_model``), refer to the following materials:
* :doc:`Transition from legacy mo and ov.tools.mo.convert_model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
* :doc:`Legacy Model Conversion API <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
.. api/ie_python_api/_autosummary/openvino.Model.html is a broken link for some reason - need to investigate python api article generation
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
@endsphinxdirective

View File

@@ -3,7 +3,7 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ is an ecosystem of utilities that have advanced capabilities, which help develop deep learning solutions.
:description: OpenVINO™ ecosystem offers various resources for developing deep learning solutions.
.. toctree::
@@ -13,7 +13,6 @@
ote_documentation
datumaro_documentation
ovsa_get_started
openvino_docs_tuning_utilities
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
@@ -28,6 +27,7 @@ More resources:
* :doc:`Documentation <tmo_introduction>`
* `GitHub <https://github.com/openvinotoolkit/nncf>`__
* `PyPI <https://pypi.org/project/nncf/>`__
* `Conda Forge <https://anaconda.org/conda-forge/nncf/>`__
**OpenVINO™ Training Extensions**
@@ -60,39 +60,6 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__
**Compile Tool**
Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
To learn which device supports the import / export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.
For more details on preprocessing steps, refer to the :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to the :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
**DL Workbench**
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow: import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
**OpenVINO™ integration with TensorFlow (OVTF)**
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.
@endsphinxdirective

View File

@@ -0,0 +1,141 @@
# Legacy Features and Components {#openvino_legacy_features}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
OpenVINO Development Tools package <openvino_docs_install_guides_install_dev_tools>
Model Optimizer / Conversion API <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>
OpenVINO API 2.0 transition <openvino_2_0_transition_guide>
Open Model ZOO <model_zoo>
Apache MXNet, Caffe, and Kaldi <mxnet_caffe_kaldi>
Post-training Optimization Tool <pot_introduction>
Since OpenVINO has grown very rapidly in recent years, some of its features
and components have been replaced by other solutions. Some of them are still
supported to ensure OpenVINO users have enough time to adjust their projects
before the features are fully discontinued.
This section will give you an overview of these major changes and tell you how
you can proceed to get the best experience and results with the current OpenVINO
offering.
| **OpenVINO Development Tools Package**
| *New solution:* OpenVINO Runtime includes all supported components
| *Old solution:* discontinuation planned for OpenVINO 2025.0
|
| OpenVINO Development Tools used to be the OpenVINO package with tools for
advanced operations on models, such as Model conversion API, Benchmark Tool,
Accuracy Checker, Annotation Converter, Post-Training Optimization Tool,
and Open Model Zoo tools. Most of these tools have been either removed,
replaced by other solutions, or moved to the OpenVINO Runtime package.
| :doc:`See how to install Development Tools <openvino_docs_install_guides_install_dev_tools>`
| **Model Optimizer / Conversion API**
| *New solution:* Direct model support and OpenVINO Converter (OVC)
| *Old solution:* Legacy Conversion API discontinuation planned for OpenVINO 2025.0
|
| The role of Model Optimizer and later the Conversion API was largely reduced
when all major model frameworks became supported directly. For converting model
files explicitly, it has been replaced with a more lightweight and efficient
solution, the OpenVINO Converter (launched with OpenVINO 2023.1).
| :doc:`See how to use OVC <openvino_docs_model_processing_introduction>`
| :doc:`See how to transition from the legacy solution <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
| **Open Model ZOO**
| *New solution:* users are encouraged to use public model repositories
| *Old solution:* discontinuation planned for OpenVINO 2024.0
|
| Open Model ZOO provided a collection of models prepared for use with OpenVINO,
and a small set of tools enabling a level of automation for the process.
Since the tools have been mostly replaced by other solutions and several
other model repositories have recently grown in size and popularity,
Open Model ZOO will no longer be maintained. You may still use its resources
until they are fully removed.
| :doc:`See the Open Model ZOO documentation <model_zoo>`
| `Check the OMZ GitHub project <https://github.com/openvinotoolkit/open_model_zoo>`__
| **Apache MXNet, Caffe, and Kaldi model formats**
| *New solution:* conversion to ONNX via external tools
| *Old solution:* model support will be discontinued with OpenVINO 2024.0
|
| Since these three model formats proved to be far less popular among OpenVINO users
than the remaining ones, their support is being discontinued. Converting them to the
ONNX format is a possible way of retaining them in the OpenVINO-based pipeline.
| :doc:`See the previous conversion instructions <mxnet_caffe_kaldi>`
| :doc:`See the currently supported frameworks <Supported_Model_Formats>`
| **Post-training Optimization Tool (POT)**
| *New solution:* NNCF extended in OpenVINO 2023.0
| *Old solution:* POT discontinuation planned for 2024
|
| Neural Network Compression Framework (NNCF) now offers the same functionality as POT,
in addition to its original feature set. It is currently the default tool for performing
both post-training and training-time optimizations, while POT is considered deprecated.
| :doc:`See the deprecated POT documentation <pot_introduction>`
| :doc:`See how to use NNCF for model optimization <openvino_docs_model_optimization_guide>`
| `Check the NNCF GitHub project, including documentation <https://github.com/openvinotoolkit/nncf>`__
| **Old Inference API 1.0**
| *New solution:* API 2.0 launched in OpenVINO 2022.1
| *Old solution:* discontinuation planned for OpenVINO 2024.0
|
| API 1.0 (Inference Engine and nGraph) is now deprecated. It can still be
used but is not recommended. Its discontinuation is planned for 2024.
| :doc:`See how to transition to API 2.0 <openvino_2_0_transition_guide>`
| **Compile tool**
| *New solution:* the tool is no longer needed
| *Old solution:* deprecated in OpenVINO 2023.0
|
| Compile tool is now deprecated. If you need to compile a model for inference on
a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
| :doc:`see which devices support import / export <openvino_docs_OV_UG_Working_with_devices>`
| :doc:`Learn more on preprocessing steps <openvino_docs_OV_UG_Preprocessing_Overview>`
| :doc:`See how to integrate and save preprocessing steps into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`
| **DL Workbench**
| *New solution:* DevCloud version
| *Old solution:* local distribution discontinued in OpenVINO 2022.3
|
| The stand-alone version of DL Workbench, a GUI tool for previewing and benchmarking
deep learning models, has been discontinued. You can use its cloud version:
| `Intel® Developer Cloud for the Edge <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/overview.html>`__.
| **OpenVINO™ integration with TensorFlow (OVTF)**
| *New solution:* Direct model support and OpenVINO Converter (OVC)
| *Old solution:* discontinued in OpenVINO 2023.0
|
| OpenVINO™ Integration with TensorFlow is no longer supported, as OpenVINO now features
native TensorFlow support, significantly enhancing the user experience with no need for
explicit model conversion.
| :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`
@endsphinxdirective

View File

@@ -0,0 +1,31 @@
# MXNet, Caffe, and Kaldi model formats {#mxnet_caffe_kaldi}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet
openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model
The following articles present the deprecated conversion method for MXNet, Caffe,
and Kaldi model formats.
:doc:`Apache MXNet conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
:doc:`Caffe conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
:doc:`Kaldi conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`
Here are three examples of conversion for particular models.
:doc:`MXNet GluonCV conversion <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models>`
:doc:`MXNet Style Transfer Model conversion <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`
:doc:`Kaldi ASpIRE Chain TDNN Model conversion <openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model>`
@endsphinxdirective

View File

@@ -17,6 +17,7 @@
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
pytorch_2_0_torch_compile
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`

View File

@@ -0,0 +1,157 @@
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}
@sphinxdirective
The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager mode, but with the use of ``torch.compile`` it goes through the following steps:
1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
* compiled by TorchDynamo and "flattened", or
* run in eager mode, due to unsupported Python constructs (like control-flow code).
2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.
How to Use
#################
To use ``torch.compile``, you need to add an import statement and define one of the two available backends:
| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch based tracing/scripting.
| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch Torchscript, and then converted to OpenVINO representation.
.. tab-set::
.. tab-item:: openvino
:sync: backend-openvino
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino.svg
:width: 992px
:height: 720px
:scale: 60%
:align: center
.. tab-item:: openvino_ts
:sync: backend-openvino-ts
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino_ts')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino_ts.svg
:width: 1088px
:height: 720px
:scale: 60%
:align: center
Environment Variables
+++++++++++++++++++++++++++
* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
this variable to GPU.0, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
This makes them available for the following application executions, reducing the first-inference latency.
By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.
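For example, a minimal sketch of setting these variables in a Linux shell before launching a PyTorch application (``my_app.py`` is a hypothetical script):
.. code-block:: console

   export OPENVINO_TORCH_BACKEND_DEVICE=GPU.0
   export OPENVINO_TORCH_MODEL_CACHING=True
   python my_app.py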
Windows support
++++++++++++++++++++++++++
Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following
the instructions below:
1. Install the PyTorch nightly wheel file - `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__.
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``.
3. Find the function called ``check_if_dynamo_supported()``:
.. code-block:: python
def check_if_dynamo_supported():
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
4. Comment out the first two lines of this function, so it looks like this:
.. code-block:: python
def check_if_dynamo_supported():
#if sys.platform == "win32":
# raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for Stable Diffusion-based
image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging OpenVINO
``torch.compile`` capability. Detailed instructions are available in
`Stable Diffusion WebUI repository <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__.
Architecture
#################
The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:
* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
(PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
that cannot be extracted to the FX graph are executed in the native Python environment).
It maintains the eager-mode capabilities using
`Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
generated graphs are valid.
* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.
When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs, and the graph partitioning module traverses the inlined FX graph to identify
operators supported by OpenVINO.
All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If the subgraph
fails during OpenVINO conversion, the subgraph falls back to PyTorch's default inductor backend.
Additional Resources
############################
* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`_
@endsphinxdirective

View File

@@ -22,11 +22,6 @@
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
@@ -62,7 +57,7 @@ Mapping from Framework Operation
Mapping of a custom operation is implemented differently, depending on the model format used for import. You may choose one of the following:
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.

View File

@@ -301,11 +301,19 @@ This mapping also specifies the input name "X" and output name "Out".
The last step is to register this custom operation as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
.. important::
To map an operation from a specific framework, you have to link to the respective
frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:
.. code-block:: sh
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
Mapping to Multiple Operations with ConversionExtension
#######################################################

View File

@@ -94,7 +94,7 @@ Detailed Guides
API References
##############
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.1/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.1/groupie_transformation_api.html>`__
@endsphinxdirective

View File

@@ -15,7 +15,7 @@
The guides below provide extra API references needed for OpenVINO plugin development:
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.1/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.1/groupie_transformation_api.html>`__
@endsphinxdirective

View File

@@ -1,4 +1,4 @@
# Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
# Legacy Conversion API {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -14,12 +14,15 @@
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
Supported_Model_Formats_MO_DG
.. meta::
:description: Model conversion (MO) furthers the transition between training and
deployment environments, it adjusts deep learning models for
:description: Model conversion (MO) furthers the transition between training and
deployment environments, it adjusts deep learning models for
optimal execution on target devices.
.. note::
This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and the OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation <openvino_docs_model_processing_introduction>` for more details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, you can still refer to this documentation. However, consider checking the :doc:`transition guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you.
To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:
@@ -44,18 +47,21 @@ To convert a model to OpenVINO model format (``ov.Model``), you can use the foll
If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:
- model conversion API provides two parameters to override original input shapes for model conversion: ``input`` and ``input_shape``.
For more information about these parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
- ``input`` and ``input_shape`` - the model conversion API parameters used to override original input shapes for model conversion,
For more information about the parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
- ``input`` and ``output`` - the model conversion API parameters used to define new inputs and outputs of the converted model to cut off unwanted parts (such as unsupported operations and training sub-graphs),
- To cut off unwanted parts of a model (such as unsupported operations and training sub-graphs),
use the ``input`` and ``output`` parameters to define new inputs and outputs of the converted model.
For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` guide.
You can also insert additional input pre-processing sub-graphs into the converted model by using
the ``mean_values``, ``scale_values``, ``layout``, and other parameters described
in the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
- ``mean_values``, ``scale_values``, ``layout`` - the parameters used to insert additional input pre-processing sub-graphs into the converted model,
The ``compress_to_fp16`` compression parameter in ``mo`` command-line tool allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to ``FP16`` data type. For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
For more details, see the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.
- ``compress_to_fp16`` - a compression parameter in ``mo`` command-line tool, which allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to ``FP16`` data type.
For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.
To get the full list of conversion parameters, run the following command:

View File

@@ -3,7 +3,7 @@
@sphinxdirective
By default, when IR is saved all relevant floating-point weights are compressed to ``FP16`` data type during model conversion.
It results in creating a "compressed ``FP16`` model", which occupies about half of
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,
but it is negligible for most models.
If the accuracy drop is significant, you can disable compression explicitly.
@@ -29,20 +29,20 @@ To disable compression, use the ``compress_to_fp16=False`` option:
mo --input_model INPUT_MODEL --compress_to_fp16=False
For details on how plugins handle compressed ``FP16`` models, see
For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.
.. note::
``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
Refer to the :doc:`Post-training optimization <pot_introduction>` guide for more
``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
Refer to the :doc:`Post-training optimization <ptq_introduction>` guide for more
information about that.
.. note::
Some large models (larger than a few GB) when compressed to ``FP16`` may consume an overly large amount of RAM on the loading
phase of the inference. If that is the case for your model, try to convert it without compression:
phase of the inference. If that is the case for your model, try to convert it without compression:
``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)``

View File

@@ -10,30 +10,21 @@ information on using ITT and Intel® VTune™ Profiler to get performance insigh
Test performance with the benchmark_app
###########################################################
Prerequisites
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To run benchmarks, you need both OpenVINO developer tools and Runtime installed. Follow the
:doc:`Installation guide <openvino_docs_install_guides_install_dev_tools>` and make sure to install the latest
general release package with support for frameworks of the models you want to test.
To test the performance of your model, make sure you :doc:`prepare the model for use with OpenVINO <openvino_docs_model_processing_introduction>`.
For example, if you use :doc:`OpenVINO's automation tools <omz_tools_downloader>`, these two lines of code will download
resnet-50-tf and convert it to OpenVINO IR.
You can run OpenVINO benchmarks with both the C++ and Python APIs, yet the experience differs in each case.
The Python version is part of the OpenVINO Runtime installation, while the C++ version is available as a code sample.
For a detailed description, see:
* :doc:`benchmark_app for C++ <openvino_inference_engine_samples_benchmark_app_README>`
* :doc:`benchmark_app for Python <openvino_inference_engine_tools_benchmark_tool_README>`.
.. code-block:: sh
omz_downloader --name resnet-50-tf
omz_converter --name resnet-50-tf
Make sure to install the latest release package with support for frameworks of the models you want to test.
For the most reliable performance benchmarks, :doc:`prepare the model for use with OpenVINO <openvino_docs_model_processing_introduction>`.
Running the benchmark application
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For a detailed description, see the dedicated articles:
:doc:`benchmark_app for C++ <openvino_inference_engine_samples_benchmark_app_README>` and
:doc:`benchmark_app for Python <openvino_inference_engine_tools_benchmark_tool_README>`.
The benchmark_app includes a lot of device-specific options, but the primary usage is as simple as:
.. code-block:: sh
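# basic usage; model.xml is a placeholder for your model file
benchmark_app -m model.xml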

View File

@@ -4,7 +4,7 @@
Model conversion API is represented by ``convert_model()`` method in openvino.tools.mo namespace. ``convert_model()`` is compatible with types from openvino.runtime, like PartialShape, Layout, Type, etc.
``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.
.. note::
@@ -19,8 +19,8 @@ Example of converting a PyTorch model directly from memory:
:force:
import torchvision
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = convert_model(model)
The following types are supported as an input model for ``convert_model()``:
@@ -36,7 +36,7 @@ Example of using native Python classes to set ``input_shape``, ``mean_values`` a
:force:
from openvino.runtime import PartialShape, Layout
ov_model = convert_model(model, input_shape=PartialShape([1,3,100,100]), mean_values=[127, 127, 127], layout=Layout("NCHW"))
Example of using strings for setting ``input_shape``, ``mean_values`` and ``layout``:
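A minimal sketch, assuming the string forms mirror the ``mo`` command-line syntax:
.. code-block:: py
:force:
from openvino.tools.mo import convert_model
ov_model = convert_model(model, input_shape="[1,3,100,100]", mean_values="[127,127,127]", layout="NCHW")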
@@ -74,7 +74,7 @@ Example of using ``InputCutInfo`` to freeze an input with value:
:force:
from openvino.tools.mo import convert_model, InputCutInfo
ov_model = convert_model(model, input=InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4]))
To set parameters for models with multiple inputs, use ``list`` of parameters.
@@ -104,7 +104,7 @@ Example of using the ``Layout`` class to set the layout of a model input:
from openvino.runtime import Layout
from openvino.tools.mo import convert_model
ov_model = convert_model(model, source_layout=Layout("NCHW"))
To set both source and destination layouts in the ``layout`` parameter, use the ``LayoutMap`` class. ``LayoutMap`` accepts two parameters: ``source_layout`` and ``target_layout``.
@@ -117,7 +117,7 @@ Example of using the ``LayoutMap`` class to change the layout of a model input:
:force:
from openvino.tools.mo import convert_model, LayoutMap
ov_model = convert_model(model, layout=LayoutMap("NCHW", "NHWC"))
@endsphinxdirective
@endsphinxdirective

View File

@@ -6,28 +6,41 @@
:description: Learn how to convert a model from the
ONNX format to the OpenVINO Intermediate Representation.
Introduction to ONNX
####################
`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.
.. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
Converting an ONNX Model
########################
This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format. To use model conversion API, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
.. tab-set::
.. tab-item:: Python
:sync: py
To convert an ONNX model, run the ``convert_model()`` method with the path to the ``<INPUT_MODEL>.onnx`` file:
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
.. tab-item:: CLI
:sync: cli
You can use the ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
.. code-block:: sh
mo --input_model <INPUT_MODEL>.onnx
There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
Supported ONNX Layers
#####################

View File

@@ -32,60 +32,58 @@ To convert a PaddlePaddle model, use the ``mo`` script and specify the path to t
Converting PaddlePaddle Model From Memory Using Python API
##########################################################
Model conversion API supports passing the following PaddlePaddle models directly from memory:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
When you convert certain PaddlePaddle models, you may need to set the ``example_input`` or ``example_output`` parameters first. Below you will find examples that show how to convert the aforementioned model formats using these parameters.
* ``paddle.hapi.model.Model``
.. code-block:: py
:force:
import paddle
from openvino.tools.mo import convert_model
# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
y = paddle.static.InputSpec([1,1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)
# convert to OpenVINO IR format
ov_model = convert_model(model)
# optional: serialize OpenVINO IR to *.xml & *.bin
from openvino.runtime import serialize
serialize(ov_model, "ov_model.xml", "ov_model.bin")
* ``paddle.fluid.dygraph.layers.Layer``
``example_input`` is required while ``example_output`` is optional, and they accept the following formats:
``list`` with tensor(``paddle.Tensor``) or InputSpec(``paddle.static.input.InputSpec``)
.. code-block:: py
:force:
import paddle
from openvino.tools.mo import convert_model
# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1,3,224,224])
# convert to OpenVINO IR format
ov_model = convert_model(model, example_input=[x])
* ``paddle.fluid.executor.Executor``
``example_input`` and ``example_output`` are required, and they accept the following formats:
``list`` or ``tuple`` with variable(``paddle.static.data``)
@@ -94,86 +92,37 @@ Converting certain PaddlePaddle models may require setting ``example_input`` or
import paddle
from openvino.tools.mo import convert_model
paddle.enable_static()
# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1,3,224])
y = paddle.static.data(name="y", shape=[1,3,224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# convert to OpenVINO IR format
ov_model = convert_model(exe, example_input=[x], example_output=[y])
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
Supported PaddlePaddle Layers
#############################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Officially Supported PaddlePaddle Models
########################################
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
.. list-table::
:widths: 20 25 55
:header-rows: 1
* - Model Name
- Model Type
- Description
* - ppocr-det
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ppocr-rec
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ResNet-50
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v2
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v3
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - BiSeNet v2
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - DeepLab v3 plus
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Fast-SCNN
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - OCRNET
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Yolo v3
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - ppyolo
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - MobileNetv3-SSD
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.2/deploy/EXPORT_MODEL.md#>`_.
* - U-Net
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/docs/model_export.md#>`_.
* - BERT
- language representation
- Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.
Frequently Asked Questions (FAQ)
################################
The model conversion API displays explanatory messages for typographical errors, incorrectly used options, or other issues. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` to help you understand what went wrong.
Additional Resources
####################

View File

@@ -7,13 +7,17 @@
PyTorch format to the OpenVINO Intermediate Representation.
This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO IR format.
The conversion is a required step to run inference using OpenVINO API.
It is not required if you choose to work with OpenVINO under the PyTorch framework,
using its :doc:`torch.compile feature <pytorch_2_0_torch_compile>`.
Converting a PyTorch model with PyTorch Frontend
###############################################################
To convert a PyTorch model to the OpenVINO IR format, use the OVC API (superseding the previously used tool, MO). To do so, use the ``convert_model()`` method, like so:
.. code-block:: py
:force:
@@ -22,7 +26,7 @@ Example of PyTorch model conversion:
import torch
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = convert_model(model)
Following PyTorch model formats are supported:
@@ -31,12 +35,8 @@ Following PyTorch model formats are supported:
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
Converting certain PyTorch models may require model tracing, which needs the ``example_input``
parameter to be set, for example:
.. code-block:: py
:force:
@@ -45,8 +45,8 @@ Example of using ``example_input``:
import torch
from openvino.tools.mo import convert_model
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = convert_model(model, example_input=torch.randn(1, 3, 100, 100))
``example_input`` accepts the following formats:
@@ -56,13 +56,21 @@ Example of using ``example_input``:
* ``list`` or ``tuple`` with tensors (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
* ``dictionary`` where key is the input name, value is the tensor (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``)
Sometimes ``convert_model`` will produce a model with inputs of dynamic rank or dynamic type.
Such a model may not be supported by the hardware chosen for inference. To avoid this issue,
use the ``input`` argument of ``convert_model``. For more information, refer to :doc:`Convert Models Represented as Python Objects <openvino_docs_MO_DG_Python_API>`.
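A minimal sketch of pinning a static shape and element type with ``input``, assuming the ``InputCutInfo`` helper shown earlier on this page and a hypothetical input name ``x``:
.. code-block:: py
:force:
import numpy as np
import torch
import torchvision
from openvino.tools.mo import convert_model, InputCutInfo
model = torchvision.models.resnet50(weights='DEFAULT')
# Fix the input rank, shape, and element type so none of them stay dynamic.
ov_model = convert_model(model, example_input=torch.randn(1, 3, 224, 224), input=InputCutInfo("x", [1, 3, 224, 224], np.float32, None))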
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
Exporting a PyTorch Model to ONNX Format
########################################
It is also possible to export a PyTorch model to ONNX and then convert it to OpenVINO IR. To convert and deploy a PyTorch model this way, follow these steps:
1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__.
2. :doc:`Convert an ONNX model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` to produce an optimized :doc:`Intermediate Representation <openvino_docs_MO_DG_IR_and_opsets>` of the model based on the trained network topology, weights, and biases values.
PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to
evaluate or test the model is usually provided with its code and can be used for its initialization and export.
@@ -86,12 +94,6 @@ To export a PyTorch model, you need to obtain the model as an instance of ``torc
torch.onnx.export(model, (dummy_input, ), 'model.onnx')
Known Issues
####################
As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9, which is used by default.
It is recommended to export models to opset 11 or higher when export to the default opset 9 does not work. In that case, use the ``opset_version`` option of ``torch.onnx.export``. For more information about ONNX opsets, refer to the `Operator Schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`__ page.
Additional Resources
####################

View File

@@ -7,11 +7,9 @@
TensorFlow format to the OpenVINO Intermediate Representation.
This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format.
.. note:: TensorFlow models are supported via :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
The conversion instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
Converting TensorFlow 1 Models
###############################
@@ -19,7 +17,7 @@ Converting TensorFlow 1 Models
Converting Frozen Model Format
+++++++++++++++++++++++++++++++
To convert a TensorFlow model, use the ``mo`` script to simply convert a model with a path to the input model *.pb* file:
.. code-block:: sh
@@ -32,7 +30,7 @@ Converting Non-Frozen Model Formats
There are three ways to store non-frozen TensorFlow models and convert them by model conversion API:
1. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#freezing-custom-models-in-python>`__ section.
To convert the model with the inference graph in ``.pb`` format, run the `mo` script with a path to the checkpoint file:
.. code-block:: sh
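mo --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>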
@@ -141,7 +139,7 @@ It is essential to freeze the model before pruning. Use the following code snipp
Keras H5
++++++++
If you have a model in HDF5 format, load the model using TensorFlow 2 and serialize it to
SavedModel format. Here is an example of how to do it:
.. code-block:: py
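import tensorflow as tf
# Load the HDF5 model and serialize it to the SavedModel format
# ('model.h5' and 'model' are placeholder paths).
model = tf.keras.models.load_model('model.h5')
tf.saved_model.save(model, 'model')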
@@ -299,6 +297,10 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro
checkpoint.restore(save_path)
ov_model = convert_model(checkpoint)
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################

View File

@@ -13,7 +13,11 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t
mo --input_model <INPUT_MODEL>.tflite
.. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
.. important::
The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use.
Supported TensorFlow Lite Layers
###################################

View File

@@ -31,25 +31,26 @@
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT
.. meta::
:description: Get to know conversion methods for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models.
This section provides a set of tutorials that demonstrate conversion methods for specific
TensorFlow, ONNX, and PyTorch models. Note that these instructions do not cover all use
cases and may not reflect your particular needs.
Before studying the tutorials, try to convert the model out-of-the-box by specifying only the
``--input_model`` parameter in the command line.
.. note::
Apache MXNet, Caffe, and Kaldi are no longer directly supported by OpenVINO.
They will remain available for some time, so make sure to transition to other
frameworks before they are fully discontinued.
You will find a collection of :doc:`Python tutorials <tutorials>` written for running on Jupyter notebooks
that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for
optimized deep learning inference.
@endsphinxdirective

View File

The following examples describe situations in which model cutting is useful or even required:
Model conversion API parameters
###############################
Model conversion API provides ``input`` and ``output`` command-line options to specify new entry and exit nodes, while ignoring the rest of the model:
* ``input`` option accepts a list of layer names of the input model that should be treated as new entry points to the model. See the full list of accepted types for input on :doc:`Model Conversion Python API <openvino_docs_MO_DG_Python_API>` page.
* ``output`` option accepts a list of layer names of the input model that should be treated as new exit points from the model.

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a BERT-NER model
from PyTorch to the OpenVINO Intermediate Representation.
The goal of this article is to present a step-by-step guide on how to convert PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX.

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a Cascade RCNN R-101
model from PyTorch to the OpenVINO Intermediate Representation.
The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX.

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a F3Net model
from PyTorch to the OpenVINO Intermediate Representation.
`F3Net <https://github.com/weijun88/F3Net>`__ : Fusion, Feedback and Focus for Salient Object Detection

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a QuartzNet model
from Pytorch to the OpenVINO Intermediate Representation.
from PyTorch to the OpenVINO Intermediate Representation.
`NeMo project <https://github.com/NVIDIA/NeMo>`__ provides the QuartzNet model.

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a RCAN model
from Pytorch to the OpenVINO Intermediate Representation.
from PyTorch to the OpenVINO Intermediate Representation.
`RCAN <https://github.com/yulunzhang/RCAN>`__ : Image Super-Resolution Using Very Deep Residual Channel Attention Networks

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a RNN-T model
from Pytorch to the OpenVINO Intermediate Representation.
from PyTorch to the OpenVINO Intermediate Representation.
This guide covers conversion of RNN-T model from `MLCommons <https://github.com/mlcommons>`__ repository. Follow

View File

@@ -4,7 +4,7 @@
.. meta::
:description: Learn how to convert a YOLACT model
from Pytorch to the OpenVINO Intermediate Representation.
from PyTorch to the OpenVINO Intermediate Representation.
You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.

View File

@@ -1,4 +1,4 @@
# Supported Model Formats {#Supported_Model_Formats_MO_DG}
@sphinxdirective
@@ -11,37 +11,548 @@
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
openvino_docs_MO_DG_prepare_model_convert_model_tutorials
.. meta::
:description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.
**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <openvino_ir>` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
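For illustration, a minimal sketch of producing an IR with the APIs used elsewhere on this page ("model.onnx" is a placeholder file name):
.. code-block:: py
:force:
from openvino.tools.mo import convert_model
from openvino.runtime import serialize
ov_model = convert_model("model.onnx")
# serialize writes the IR pair: model.xml (topology) and model.bin (weights)
serialize(ov_model, "model.xml", "model.bin")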
**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly,
which means you do not need to save them as OpenVINO IR before including them in your application.
OpenVINO can read, compile, and convert them automatically, as part of its pipeline.
In the Python API, these options are provided as three separate methods:
``read_model()``, ``compile_model()``, and ``convert_model()``.
The ``convert_model()`` method enables you to perform additional adjustments
to the model, such as setting shapes, changing model input types or layouts,
cutting parts of the model, freezing inputs, etc. For a detailed description
of the conversion process, see the
:doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
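As a quick, hedged comparison of the three methods (file names are placeholders):
.. code-block:: py
:force:
from openvino.runtime import Core
from openvino.tools.mo import convert_model
core = Core()
# read_model: load a supported model file into ov.Model as-is
ov_model = core.read_model("model.onnx")
# compile_model: go straight from a file to a device-ready model
compiled_model = core.compile_model("model.onnx", "AUTO")
# convert_model: adjust the model (shapes, layouts, cutting) before compiling
ov_model = convert_model("model.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")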
Here are code examples of how to use these methods with different model formats:
.. tab-set::
.. tab-item:: PyTorch
:sync: torch
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
This is the only method applicable to PyTorch models.
.. dropdown:: List of supported formats:
* **Python objects**:
* ``torch.nn.Module``
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
.. code-block:: py
:force:
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
.. tab-item:: TensorFlow
:sync: tf
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
* **Python objects**:
* ``tf.keras.Model``
* ``tf.keras.layers.Layer``
* ``tf.Module``
* ``tf.compat.v1.Graph``
* ``tf.compat.v1.GraphDef``
* ``tf.function``
* ``tf.compat.v1.session``
* ``tf.train.checkpoint``
.. code-block:: py
:force:
ov_model = convert_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/101-tensorflow-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` and ``compile_model()`` methods:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: py
:force:
ov_model = read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* SavedModel - ``<SAVED_MODEL_DIRECTORY>`` or ``<INPUT_MODEL>.pb``
* Checkpoint - ``<INFERENCE_GRAPH>.pb`` or ``<INFERENCE_GRAPH>.pbtxt``
* MetaGraph - ``<INPUT_META_GRAPH>.meta``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pb
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`.
.. tab-item:: TensorFlow Lite
:sync: tflite
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/119-tflite-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.tflite``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.tflite
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`.
.. tab-item:: ONNX
:sync: onnx
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO")
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO");
For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.onnx", "AUTO", 0, &compiled_model);
For details on the conversion, refer to the :doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.onnx``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.onnx
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
.. tab-item:: PaddlePaddle
:sync: pdpd
.. tab-set::
.. tab-item:: Python
:sync: py
* The ``convert_model()`` method:
When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
* **Python objects**:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
.. code-block:: py
:force:
ov_model = convert_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/103-paddle-to-openvino-classification-with-output.html>`__
on this topic.
* The ``read_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
ov_model = read_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: py
:force:
compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C++
:sync: cpp
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: cpp
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: C
:sync: c
* The ``compile_model()`` method:
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: c
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
.. tab-item:: CLI
:sync: cli
* The ``convert_model()`` method:
You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred.
.. dropdown:: List of supported formats:
* **Files**:
* ``<INPUT_MODEL>.pdmodel``
.. code-block:: sh
mo --input_model <INPUT_MODEL>.pdmodel
For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`.
**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference.
As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**,
converting them to ONNX for use with OpenVINO should be considered the default path.
.. note::
If you want to keep working with the legacy formats the old way, refer to a previous
`OpenVINO LTS version and its documentation <https://docs.openvino.ai/2022.3/Supported_Model_Formats.html>`__.
The 2023 versions of OpenVINO are mostly compatible with the old instructions,
through the deprecated MO tool, installed with the deprecated OpenVINO Developer Tools package.
`OpenVINO 2023.0 <https://docs.openvino.ai/2023.0/Supported_Model_Formats.html>`__ is the last
release officially supporting the MO conversion process for the legacy formats.
@endsphinxdirective

View File

@@ -10,7 +10,7 @@
This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).
`Public RetinaNet model <https://github.com/fizyr/keras-retinanet>`__ does not contain pretrained TensorFlow weights.
To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2023.1/omz_models_model_retinanet_tf.html>`__.
After converting the model to TensorFlow format, run the following command:

View File

@@ -1,4 +1,4 @@
# Legacy Model Optimizer Extensibility {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer}
@sphinxdirective

View File

@@ -0,0 +1,98 @@
# Conversion Parameters {#openvino_docs_OV_Converter_UG_Conversion_Options}
@sphinxdirective
.. _deep learning model optimizer:
.. meta::
:description: Model Conversion API provides several parameters to adjust model conversion.
This document describes all available parameters for ``openvino.convert_model``, ``ovc``, and ``openvino.save_model``, without focusing on a particular framework model format. Use this information as a general reference for the conversion API capabilities. Some of the parameters may not be relevant to certain frameworks. For dedicated framework-dependent tutorials, see the :doc:`Supported Model Formats <Supported_Model_Formats>` page.
In most cases, the following simple syntax is enough to convert a model:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model('path_to_your_model')
# or, when model is a Python model object
ov_model = ov.convert_model(model)
# Optionally adjust model by embedding pre-post processing here...
ov.save_model(ov_model, 'model.xml')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_your_model
Providing just a path to the model or a model object as the ``openvino.convert_model`` argument is often enough for a successful conversion. However, depending on the model topology and the original deep learning framework, additional parameters may be required. They are described below.
- ``example_input`` parameter, available only in the Python ``openvino.convert_model``, is intended to trace the model to obtain its graph representation. This parameter is crucial for converting PyTorch models and may sometimes be required for TensorFlow models. For more details, refer to the :doc:`PyTorch Model Conversion <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch>` or :doc:`TensorFlow Model Conversion <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guides.
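A minimal sketch for a PyTorch model (the torchvision model is just an example):
.. code-block:: py
:force:
import torch
import torchvision
import openvino as ov
model = torchvision.models.resnet50(weights='DEFAULT')
# example_input lets the converter trace the model graph
ov_model = ov.convert_model(model, example_input=torch.randn(1, 3, 224, 224))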
- ``input`` parameter sets or overrides shapes for model inputs. It configures dynamic and static dimensions in model inputs, depending on your inference requirements. For more information on this parameter, refer to the :doc:`Setting Input Shapes <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model>` guide.
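A minimal sketch, assuming a single-input model file; here the input is pinned to a static shape (all accepted forms, including dynamic dimensions, are described in the linked guide):
.. code-block:: py
:force:
import openvino as ov
# Pin the single model input to a static [1, 3, 224, 224] shape.
ov_model = ov.convert_model('your_model_file.onnx', input=[1, 3, 224, 224])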
- ``output`` parameter selects one or multiple outputs from the original model. This is useful when the model has outputs that are not required for inference in a deployment scenario. By specifying only the necessary outputs, you can create a more compact model that infers faster.
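A minimal sketch, assuming a hypothetical output tensor name ``logits``:
.. code-block:: py
:force:
import openvino as ov
# Keep only the "logits" tensor as the model output; other outputs are dropped.
ov_model = ov.convert_model('your_model_file.onnx', output='logits')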
- ``compress_to_fp16`` parameter, provided by the ``ovc`` CLI tool and the ``openvino.save_model`` Python function, controls the compression of model weights to the FP16 format when saving an OpenVINO model to IR. This option is enabled by default, which means all produced IRs are saved using the FP16 data type for weights. This saves up to 2x storage space for the model file and in most cases does not sacrifice model accuracy. If it does affect accuracy, the compression can be disabled by setting this flag to ``False``:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(original_model)
ov.save_model(ov_model, 'model.xml', compress_to_fp16=False)
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_your_model --compress_to_fp16=False
For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.
.. note::
``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
Refer to the :doc:`Post-training optimization <ptq_introduction>` guide for more
information about that.
- ``extension`` parameter makes it possible to convert models containing operations that are not supported by OpenVINO out of the box. It requires implementing an OpenVINO extension first; refer to the :doc:`Frontend Extensions <openvino_docs_Extensibility_UG_Frontend_Extensions>` guide for details.
- ``share_weights`` parameter with the default value ``True`` allows reusing memory with the original weights. For models loaded in Python and then passed to ``openvino.convert_model``, it means that the OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (the Python object cannot be deallocated and the original model file cannot be deleted) for the whole lifetime of the OpenVINO model. If this is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``.
.. note:: ``ovc`` does not have the ``share_weights`` option and always uses sharing to reduce conversion time and consume less memory during the conversion.
.. note:: Weights sharing does not work equally for all supported model formats. The value of this flag is considered a hint for the conversion API; actual sharing is used only if it is implemented and possible for a particular model representation.
- ``output_model`` parameter in ``ovc`` and ``openvino.save_model`` specifies the name of the output ``.xml`` file with the resulting OpenVINO IR. The accompanying ``.bin`` file name is generated automatically by replacing the ``.xml`` extension with ``.bin``. The value of ``output_model`` must end with the ``.xml`` extension. For the ``ovc`` command-line tool, ``output_model`` can also contain the name of a directory. In this case, the resulting OpenVINO IR files will be put into that directory, with the base name of the ``.xml`` and ``.bin`` files matching the original model base name passed to ``ovc``. For example, calling ``ovc your_model.onnx --output_model directory_name`` creates the files ``directory_name/your_model.xml`` and ``directory_name/your_model.bin``. If ``output_model`` is not used, the current directory is used as the destination.
.. note:: ``openvino.save_model`` does not support a directory for the ``output_model`` parameter value because ``openvino.save_model`` gets an OpenVINO model object represented in memory, so no original model file name is available for output file name generation. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``.
- ``verbose`` parameter activates extra diagnostics printed to the standard output. Use it for debugging purposes, in case there is an issue with the conversion, and to collect information for better bug reports to the OpenVINO team.
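A minimal sketch of saving with an explicit ``output_model`` name (file names are placeholders):
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model('your_model.onnx')
# output_model must end with .xml; your_model.bin is created alongside it.
ov.save_model(ov_model, output_model='your_model.xml')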
You can always run ``ovc -h`` or ``ovc --help`` to list all supported ``ovc`` parameters.
Use ``ovc --version`` to check the version of the installed OpenVINO package.
@endsphinxdirective

View File

@@ -0,0 +1,59 @@
# Converting an ONNX Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX}
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
ONNX format to the OpenVINO Model.
Introduction to ONNX
####################
`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that enables AI developers to easily transfer models between different frameworks.
.. note:: An ONNX model file can be loaded by ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods by OpenVINO runtime API without the need to prepare an OpenVINO IR first. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``openvino.convert_model`` is still recommended if the model load latency is important for the inference application.
Converting an ONNX Model
########################
This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format.
For model conversion, you need an ONNX model either directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX model, run model conversion with the path to the input model ``.onnx`` file:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov.convert_model('your_model_file.onnx')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc your_model_file.onnx
External Data Files
###################
ONNX models may consist of multiple files when the model size exceeds the 2GB limit imposed by Protobuf. According to this `ONNX article <https://github.com/onnx/onnx/blob/main/docs/ExternalData.md>`__, instead of a single file, the model is represented as one file with the ``.onnx`` extension and multiple separate files with external data. These data files are located in the same directory as the main ``.onnx`` file or in another directory.
OpenVINO model conversion API supports ONNX models with external data representation. In this case, you only need to pass the main file with the ``.onnx`` extension as the ``ovc`` or ``openvino.convert_model`` parameter. The other files will be found and loaded automatically during model conversion. The resulting OpenVINO model, represented as an IR in the filesystem, will have the usual structure with a single ``.xml`` file and a single ``.bin`` file, where all the original model weights are copied and packed together.
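For illustration, a minimal sketch of this case, assuming a hypothetical ``large_model.onnx`` accompanied by external data files in the same directory:
.. code-block:: py
:force:
import openvino as ov
# Pass only the main file; the external data files referenced by the model
# are found and loaded automatically
ov_model = ov.convert_model('large_model.onnx')
# The saved IR packs all original weights into a single .bin file
ov.save_model(ov_model, 'large_model.xml')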
Supported ONNX Layers
#####################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Additional Resources
####################
Check out more examples of model conversion in :doc:`interactive Python tutorials <tutorials>`.
@endsphinxdirective

View File

@@ -0,0 +1,201 @@
# Converting a PaddlePaddle Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle}
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
PaddlePaddle format to the OpenVINO Model.
This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using OpenVINO model conversion API. The instructions are different depending on the PaddlePaddle model format.
.. note:: A PaddlePaddle model serialized in a file can be loaded by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API without preparing an OpenVINO IR first. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
Converting PaddlePaddle Model Files
###################################
A PaddlePaddle inference model includes ``.pdmodel`` (storing the model structure) and ``.pdiparams`` (storing the model weights) files. For details on how to export a PaddlePaddle inference model, refer to the `Exporting PaddlePaddle Inference Model <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/beginner/model_save_load_cn.html>`__ Chinese guide.
To convert a PaddlePaddle model, use ``ovc`` or ``openvino.convert_model`` and specify the path to the input ``.pdmodel`` model file:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov.convert_model('your_model_file.pdmodel')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc your_model_file.pdmodel
**For example**, this command converts a YOLOv3 PaddlePaddle model to an OpenVINO IR model:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov.convert_model('yolov3.pdmodel')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc yolov3.pdmodel
Converting PaddlePaddle Python Model
####################################
Model conversion API supports passing PaddlePaddle models directly in Python without saving them to files in the user code.
The following PaddlePaddle model object types are supported:
* ``paddle.hapi.model.Model``
* ``paddle.fluid.dygraph.layers.Layer``
* ``paddle.fluid.executor.Executor``
Some PaddlePaddle models may require setting ``example_input`` or ``output`` for conversion as shown in the examples below:
* Example of converting ``paddle.hapi.model.Model`` format model:
.. code-block:: py
:force:
import paddle
import openvino as ov
# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
y = paddle.static.InputSpec([1,1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)
# convert to OpenVINO IR format
ov_model = ov.convert_model(model)
ov.save_model(ov_model, "resnet50.xml")
* Example of converting ``paddle.fluid.dygraph.layers.Layer`` format model:
``example_input`` is required while ``output`` is optional. ``example_input`` accepts the following formats:
``list`` with tensors (``paddle.Tensor``) or InputSpec objects (``paddle.static.input.InputSpec``)
.. code-block:: py
:force:
import paddle
import openvino as ov
# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1,3,224,224])
# convert to OpenVINO IR format
ov_model = ov.convert_model(model, example_input=[x])
* Example of converting ``paddle.fluid.executor.Executor`` format model:
``example_input`` and ``output`` are required, which accept the following formats:
``list`` or ``tuple`` with variables (``paddle.static.data``)
.. code-block:: py
:force:
import paddle
import openvino as ov
paddle.enable_static()
# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1,3,224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
# convert to OpenVINO IR format
ov_model = ov.convert_model(exe, example_input=[x], output=[y])
Supported PaddlePaddle Layers
#############################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Officially Supported PaddlePaddle Models
########################################
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
.. list-table::
:widths: 20 25 55
:header-rows: 1
* - Model Name
- Model Type
- Description
* - ppocr-det
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ppocr-rec
- optical character recognition
- Models are exported from `PaddleOCR <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15>`_.
* - ResNet-50
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v2
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - MobileNet v3
- classification
- Models are exported from `PaddleClas <https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/>`_. Refer to `getting_started_en.md <https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict>`_.
* - BiSeNet v2
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - DeepLab v3 plus
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Fast-SCNN
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - OCRNET
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#>`_.
* - Yolo v3
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - ppyolo
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#>`_.
* - MobileNetv3-SSD
- detection
- Models are exported from `PaddleDetection <https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2>`_. Refer to `EXPORT_MODEL.md <https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.2/deploy/EXPORT_MODEL.md#>`_.
* - U-Net
- semantic segmentation
- Models are exported from `PaddleSeg <https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3>`_. Refer to `model_export.md <https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/docs/model_export.md#>`_.
* - BERT
- language representation
- Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.
Additional Resources
####################
Check out more examples of model conversion in :doc:`interactive Python tutorials <tutorials>`.
@endsphinxdirective

View File

@@ -0,0 +1,156 @@
# Converting a PyTorch Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch}
@sphinxdirective
.. meta::
:description: Learn how to convert a model from the
PyTorch format to the OpenVINO Model.
To convert a PyTorch model, use the ``openvino.convert_model`` function.
Here is the simplest example of PyTorch model conversion using a model from ``torchvision``:
.. code-block:: py
:force:
import torchvision
import torch
import openvino as ov
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = ov.convert_model(model)
The ``openvino.convert_model`` function supports the following PyTorch model object types:
* ``torch.nn.Module`` derived classes
* ``torch.jit.ScriptModule``
* ``torch.jit.ScriptFunction``
When using ``torch.nn.Module`` as an input model, ``openvino.convert_model`` often requires the ``example_input`` parameter to be specified. Internally, it triggers the model tracing during the model conversion process, using the capabilities of the ``torch.jit.trace`` function.
The use of ``example_input`` can lead to a better quality OpenVINO model in terms of correctness and performance compared to converting the same original model without specifying ``example_input``. While the necessity of ``example_input`` depends on the implementation details of a specific PyTorch model, it is recommended to always set the ``example_input`` parameter when it is available.
The value for the ``example_input`` parameter can be easily derived from knowing the input tensor's element type and shape. While it may not be suitable for all cases, random numbers can frequently serve this purpose effectively:
.. code-block:: py
:force:
import torchvision
import torch
import openvino as ov
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
In practice, the code to evaluate or test the PyTorch model is usually provided with the model itself and can be used to generate a proper ``example_input`` value. A modified example of using the ``resnet50`` model from ``torchvision`` is presented below. It demonstrates how to switch inference in an existing PyTorch application to OpenVINO and how to get a value for ``example_input``:
.. code-block:: py
:force:
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights
import requests, PIL, io, torch
# Get a picture of a cat from the web:
img = PIL.Image.open(io.BytesIO(requests.get("https://placekitten.com/200/300").content))
# Torchvision model and input data preparation from https://pytorch.org/vision/stable/models.html
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()
batch = preprocess(img).unsqueeze(0)
# PyTorch model inference and post-processing
prediction = model(batch).squeeze(0).softmax(0)
class_id = prediction.argmax().item()
score = prediction[class_id].item()
category_name = weights.meta["categories"][class_id]
print(f"{category_name}: {100 * score:.1f}% (with PyTorch)")
# OpenVINO model preparation and inference with the same post-processing
import openvino as ov
compiled_model = ov.compile_model(ov.convert_model(model, example_input=batch))
prediction = torch.tensor(compiled_model(batch)[0]).squeeze(0).softmax(0)
class_id = prediction.argmax().item()
score = prediction[class_id].item()
category_name = weights.meta["categories"][class_id]
print(f"{category_name}: {100 * score:.1f}% (with OpenVINO)")
Check out more examples in :doc:`interactive Python tutorials <tutorials>`.
.. note::
In the examples above, the ``openvino.save_model`` function is not used because there are no PyTorch-specific details regarding its usage. In all examples, the converted OpenVINO model can be saved to IR by calling ``ov.save_model(ov_model, 'model.xml')`` as usual.
Supported Input Parameter Types
###############################
If the model has a single input, the following input types are supported in ``example_input``:
* ``openvino.runtime.Tensor``
* ``torch.Tensor``
* ``tuple`` or any nested combination of tuples
If a model has multiple inputs, the input values are combined in a ``list``, a ``tuple``, or a ``dict``:
* values in a ``list`` or ``tuple`` should be passed in the same order as specified by the original model,
* keys of a ``dict`` must match the argument names of the original model.
Enclosing in ``list``, ``tuple`` or ``dict`` can be used for a single input as well as for multiple inputs.
If a model has a single input parameter and the type of this input is a ``tuple``, it should always be passed enclosed in an extra ``list``, ``tuple``, or ``dict``, as in the case of multiple inputs. This is required to eliminate the ambiguity between ``model((a, b))`` and ``model(a, b)``.
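For illustration, a minimal sketch of this rule, assuming a hypothetical toy module whose single input is a tuple:
.. code-block:: py
:force:
import torch
import openvino as ov
class TupleInputModel(torch.nn.Module):
    # The single model input is a tuple of two tensors
    def forward(self, xy):
        x, y = xy
        return x + y
a, b = torch.rand(2, 3), torch.rand(2, 3)
# The tuple is enclosed in an extra list, so it is treated as one tuple input
# rather than as two separate inputs
ov_model = ov.convert_model(TupleInputModel(), example_input=[(a, b)])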
Non-tensor Data Types
#####################
When a non-tensor data type, such as a ``tuple`` or ``dict``, appears in a model input or output, it is flattened. Flattening means that each element within the ``tuple`` is represented as a separate input or output. The same is true for ``dict`` values, where the keys of the ``dict`` are used to form the model input/output names. The original non-tensor input or output is replaced by one or multiple new inputs or outputs resulting from this flattening process. This flattening procedure is applied recursively to nested ``tuples`` and ``dicts`` until the most deeply nested values are tensors.
For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ... ``e`` are tensors, it means that the original model has two inputs. The first is a tensor ``a``, and the second is a tuple ``(b, c, (d, e))``, containing two tensors ``b`` and ``c`` and a nested tuple ``(d, e)``. Then the resulting OpenVINO model will have the signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model.
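A toy sketch of this flattening, assuming a hypothetical two-input module matching the example above:
.. code-block:: py
:force:
import torch
import openvino as ov
class NestedInputModel(torch.nn.Module):
    # Two model inputs: a tensor and a nested tuple of tensors
    def forward(self, a, bcde):
        b, c, (d, e) = bcde
        return a + b + c + d + e
a, b, c, d, e = (torch.rand(2, 2) for _ in range(5))
ov_model = ov.convert_model(NestedInputModel(), example_input=(a, (b, c, (d, e))))
print(len(ov_model.inputs))  # 5 inputs after flattening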
Flattening of a ``dict`` is supported for outputs only. If your model has an input of type ``dict``, you will need to decompose the ``dict`` into one or multiple tensor inputs by modifying the original model signature or by making a wrapper model on top of the original model. This approach hides the dictionary from the model signature and allows it to be processed inside the model successfully.
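A minimal sketch of such a wrapper, assuming a hypothetical model with a single ``dict`` input:
.. code-block:: py
:force:
import torch
import openvino as ov
class DictInputModel(torch.nn.Module):
    # The original model expects a single dict input
    def forward(self, inputs):
        return inputs['x'] + inputs['y']
class Wrapper(torch.nn.Module):
    # Hides the dict from the converted signature by rebuilding it inside
    def __init__(self, model):
        super().__init__()
        self.model = model
    def forward(self, x, y):
        return self.model({'x': x, 'y': y})
example = (torch.rand(2, 2), torch.rand(2, 2))
ov_model = ov.convert_model(Wrapper(DictInputModel()), example_input=example)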
.. note::
An important consequence of flattening is that only ``tuple`` and ``dict`` instances with a fixed number of elements and fixed key values are supported. The structure of such inputs should be fully described in the ``example_input`` parameter of ``convert_model``. The flattening of outputs is reproduced according to the given ``example_input`` and cannot be changed once the conversion is done.
Check out more examples of model conversion with non-tensor data types in the following tutorials:
* `Video Subtitle Generation using Whisper and OpenVINO™ <notebooks/227-whisper-subtitles-generation-with-output.html>`__
* `Visual Question Answering and Image Captioning using BLIP and OpenVINO <notebooks/233-blip-visual-language-processing-with-output.html>`__
Exporting a PyTorch Model to ONNX Format
########################################
An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with ``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to an OpenVINO Model with ``openvino.convert_model``. It can be considered a backup solution if a model cannot be converted directly from PyTorch to OpenVINO, as described in the chapters above. Converting through ONNX can be more expensive in terms of code, conversion time, and allocated memory.
1. Refer to the `Exporting PyTorch models to ONNX format <https://pytorch.org/docs/stable/onnx.html>`__ guide to learn how to export models from PyTorch to ONNX.
2. Follow the :doc:`Convert an ONNX model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX>` chapter to produce an OpenVINO model.
Here is an illustration of using these two steps together:
.. code-block:: py
:force:
import torchvision
import torch
import openvino as ov
model = torchvision.models.resnet50(weights='DEFAULT')
# 1. Export to ONNX
torch.onnx.export(model, (torch.rand(1, 3, 224, 224), ), 'model.onnx')
# 2. Convert to OpenVINO
ov_model = ov.convert_model('model.onnx')
.. note::
As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9, which is used by default.
It is recommended to export models to opset 11 or higher when export to the default opset 9 does not work. In that case, use the ``opset_version`` option of ``torch.onnx.export``. For more information about ONNX opsets, refer to the `Operator Schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`__ page.
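For illustration, a minimal sketch of exporting with an explicit opset, assuming the ``torchvision`` model from the example above:
.. code-block:: py
:force:
import torch
import torchvision
model = torchvision.models.resnet50(weights='DEFAULT')
# Use a newer opset when export to the default opset 9 does not work
torch.onnx.export(model, (torch.rand(1, 3, 224, 224),), 'model.onnx', opset_version=11)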
@endsphinxdirective

View File

@@ -0,0 +1,331 @@
# Converting a TensorFlow Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow}
@sphinxdirective
.. meta::
:description: Learn how to convert a model from a
TensorFlow format to the OpenVINO Model.
This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
.. note:: TensorFlow models can be loaded by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API without preparing an OpenVINO IR first. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
.. note:: The examples below that convert TensorFlow models from a file do not require any version of TensorFlow to be installed on the system, except when the ``tensorflow`` module is imported explicitly.
Converting TensorFlow 2 Models
##############################
TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5).
Below are the instructions on how to convert each of them.
SavedModel Format
+++++++++++++++++
A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets``.
To convert a model, run conversion with the directory as the model argument:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model('path_to_saved_model_dir')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_saved_model_dir
Keras H5 Format
+++++++++++++++
If you have a model in HDF5 format, load the model using TensorFlow 2 and serialize it to
SavedModel format. Here is an example of how to do it:
.. code-block:: py
:force:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5')
tf.saved_model.save(model,'model')
Converting a Keras H5 model with a custom layer to the SavedModel format requires special considerations.
For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.py`` is converted as follows:
.. code-block:: py
:force:
import tensorflow as tf
from custom_layer import CustomLayer
model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})
tf.saved_model.save(model,'model')
Then follow the above instructions for the SavedModel format.
.. note::
Avoid using any workarounds or hacks to resave TensorFlow 2 models into TensorFlow 1 formats.
Converting TensorFlow 1 Models
###############################
Converting Frozen Model Format
+++++++++++++++++++++++++++++++
To convert a TensorFlow model, run model conversion with the path to the input model ``.pb`` file:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov_model = ov.convert_model('your_model_file.pb')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc your_model_file.pb
Converting Non-Frozen Model Formats
+++++++++++++++++++++++++++++++++++
There are three ways to store non-frozen TensorFlow models.
1. **SavedModel format**. In this case, a model consists of a special directory with a ``.pb`` file
and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model#components>`__ file in the TensorFlow repository.
To convert such a TensorFlow model, run the conversion as for other model formats and pass the path to the directory as the model argument:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov_model = ov.convert_model('path_to_saved_model_dir')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_saved_model_dir
2. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section.
To convert the model with the inference graph in ``.pb`` format, provide paths to both files as an argument for ``ovc`` or ``openvino.convert_model``:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov_model = ov.convert_model(['path_to_inference_graph.pb', 'path_to_checkpoint_file.ckpt'])
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_inference_graph.pb path_to_checkpoint_file.ckpt
To convert the model with the inference graph in the ``.pbtxt`` format, specify the path to the ``.pbtxt`` file instead of the ``.pb`` file. The conversion API automatically detects the format of the provided file; there is no need to specify the model file format explicitly when calling ``ovc`` or ``openvino.convert_model`` in any of the examples in this document.
3. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: ``model_name.meta``, ``model_name.index``,
``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional).
To convert such a TensorFlow model, run the conversion providing the path to the ``.meta`` file as an argument:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov_model = ov.convert_model('path_to_meta_graph.meta')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc path_to_meta_graph.meta
Freezing Custom Models in Python
++++++++++++++++++++++++++++++++
When a model is defined in Python code, you must create an inference graph file. Graphs are usually built in a form
that allows model training, which means all trainable parameters are represented as variables in the graph.
To use such a graph with the model conversion API, it should be frozen before being passed to the ``openvino.convert_model`` function:
.. code-block:: py
:force:
import tensorflow as tf
from tensorflow.python.framework import graph_io
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
import openvino as ov
ov_model = ov.convert_model(frozen)
Where:
* ``sess`` is the instance of the TensorFlow Session object where the network topology is defined.
* ``["name_of_the_output_node"]`` is the list of output node names in the graph; ``frozen`` graph will include only those nodes from the original ``sess.graph_def`` that are directly or indirectly used to compute given output nodes. The ``'name_of_the_output_node'`` is an example of a possible output node name. You should derive the names based on your own graph.
Converting TensorFlow Models from Memory Using Python API
############################################################
Model conversion API supports passing TensorFlow 1 and TensorFlow 2 models directly from memory.
* ``tf.keras.Model``
.. code-block:: py
:force:
import openvino as ov
model = tf.keras.applications.ResNet50(weights="imagenet")
ov_model = ov.convert_model(model)
* ``tf.keras.layers.Layer``. Requires saving the model to the TensorFlow SavedModel format first and then loading it with ``openvino.convert_model``. The round trip through a file is required due to a known bug in ``openvino.convert_model`` that ignores the model signature.
.. code-block:: py
:force:
import tensorflow_hub as hub
import openvino as ov
model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
model.build([None, 224, 224, 3])
model.save('mobilenet_v1_100_224') # use a temporary directory
ov_model = ov.convert_model('mobilenet_v1_100_224')
* ``tf.Module``. Requires setting shapes in the ``input`` parameter.
.. code-block:: py
:force:
import tensorflow as tf
import openvino as ov
class MyModule(tf.Module):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.constant1 = tf.constant(5.0, name="var1")
        self.constant2 = tf.constant(1.0, name="var2")
    def __call__(self, x):
        return self.constant1 * x + self.constant2
model = MyModule(name="simple_module")
ov_model = ov.convert_model(model, input=[-1])
.. note:: There is a known bug in ``openvino.convert_model`` when using ``tf.Variable`` nodes in the model graph. The results of converting such models are unpredictable. It is recommended to save a model with ``tf.Variable`` into the TensorFlow SavedModel format and load it with ``openvino.convert_model``, as sketched below.
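A minimal sketch of this workaround, assuming a hypothetical module holding a ``tf.Variable``:
.. code-block:: py
:force:
import tensorflow as tf
import openvino as ov
class ModuleWithVariable(tf.Module):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.scale = tf.Variable(3.0)
    @tf.function(input_signature=[tf.TensorSpec([1], tf.float32)])
    def __call__(self, x):
        return self.scale * x
module = ModuleWithVariable()
# Round-trip through the SavedModel format (use a temporary directory)
tf.saved_model.save(module, 'module_with_variable')
ov_model = ov.convert_model('module_with_variable')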
* ``tf.compat.v1.Graph``
.. code-block:: py
:force:
with tf.compat.v1.Session() as sess:
    inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
    inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
    output = tf.nn.relu(inp1 + inp2, name='Relu')
    tf.compat.v1.global_variables_initializer()
    model = sess.graph
import openvino as ov
ov_model = ov.convert_model(model)
* ``tf.compat.v1.GraphDef``
.. code-block:: py
:force:
with tf.compat.v1.Session() as sess:
    inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
    inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
    output = tf.nn.relu(inp1 + inp2, name='Relu')
    tf.compat.v1.global_variables_initializer()
    model = sess.graph_def
import openvino as ov
ov_model = ov.convert_model(model)
* ``tf.function``
.. code-block:: py
:force:
@tf.function(
    input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32),
                     tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)])
def func(x, y):
    return tf.nn.sigmoid(tf.nn.relu(x + y))
import openvino as ov
ov_model = ov.convert_model(func)
* ``tf.compat.v1.Session``
.. code-block:: py
:force:
with tf.compat.v1.Session() as sess:
    inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
    inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
    output = tf.nn.relu(inp1 + inp2, name='Relu')
    tf.compat.v1.global_variables_initializer()
import openvino as ov
ov_model = ov.convert_model(sess)
* ``tf.train.Checkpoint``
.. code-block:: py
:force:
model = tf.keras.Model(...)
checkpoint = tf.train.Checkpoint(model)
save_path = checkpoint.save(save_directory)
# ...
checkpoint.restore(save_path)
import openvino as ov
ov_model = ov.convert_model(checkpoint)
Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Summary
#######
In this document, you learned:
* Basic information about how the model conversion API works with TensorFlow models.
* Which TensorFlow models are supported.
* How to freeze a TensorFlow model.
@endsphinxdirective

View File

@@ -0,0 +1,42 @@
# Converting a TensorFlow Lite Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}
@sphinxdirective
.. meta::
:description: Learn how to convert a model from a
TensorFlow Lite format to the OpenVINO Model.
To convert a TensorFlow Lite model, run model conversion with the path to the ``.tflite`` model file:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
import openvino as ov
ov.convert_model('your_model_file.tflite')
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc your_model_file.tflite
.. note:: A TensorFlow Lite model file can be loaded by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API without preparing an OpenVINO IR first. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
Supported TensorFlow Lite Layers
###################################
For the list of supported standard layers, refer to the :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` page.
Supported TensorFlow Lite Models
###################################
More than eighty percent of public TensorFlow Lite models from open sources such as `TensorFlow Hub <https://tfhub.dev/s?deployment-format=lite&subtype=module,placeholder>`__ and `MediaPipe <https://developers.google.com/mediapipe>`__ are supported.
Unsupported models usually have custom TensorFlow Lite operations.
@endsphinxdirective

View File

@@ -0,0 +1,141 @@
# Setting Input Shapes {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model}
With model conversion API, you can increase your model's efficiency by providing an additional shape definition using the ``input`` parameter.
@sphinxdirective
.. meta::
:description: Learn how to increase the efficiency of a model by providing an additional shape definition with the ``input`` parameter of ``openvino.convert_model`` and ``ovc``.
.. _when_to_specify_input_shapes:
Specifying Shapes in the ``input`` Parameter
#####################################################
``openvino.convert_model`` supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of data is not going to change from one inference request to another,
it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs.
Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption.
To set up static shapes, model conversion API provides the ``input`` parameter.
For more information on changing input shapes in runtime, refer to the :doc:`Changing input shapes <openvino_docs_OV_UG_ShapeInference>` guide.
To learn more about dynamic shapes in runtime, refer to the :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>` guide.
The OpenVINO Runtime API may present certain limitations in inferring models with undefined dimensions on some hardware. See the :doc:`Features support matrix <openvino_docs_OV_UG_Working_with_devices>` for reference.
In this case, the ``input`` parameter and the :doc:`reshape method <openvino_docs_OV_UG_ShapeInference>` can help to resolve undefined dimensions.
For example, run model conversion for the TensorFlow MobileNet model with a single input
and specify the input shape of ``[2,300,300,3]``:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model("MobileNet.pb", input=[2, 300, 300, 3])
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc MobileNet.pb --input [2,300,300,3]
If a model has multiple inputs, the input shapes should be specified in the ``input`` parameter as a list. In ``ovc``, this is a comma-separated list; in ``openvino.convert_model``, it is a Python list or tuple with the number of elements matching the number of inputs in the model. Use input names from the original model to define the mapping between the inputs and the specified shapes.
The following example demonstrates the conversion of the ONNX OCR model with a pair of inputs ``data`` and ``seq_len``
and specifies shapes ``[3,150,200,1]`` and ``[3]`` for them respectively:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model("ocr.onnx", input=[("data", [3,150,200,1]), ("seq_len", [3])])
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc ocr.onnx --input data[3,150,200,1],seq_len[3]
If the order of inputs is defined in the input model and known to the user, the names can be omitted. In this case, it is important to specify the shapes in the same order as the model inputs:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model("ocr.onnx", input=([3,150,200,1], [3]))
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc ocr.onnx --input [3,150,200,1],[3]
Whether the model has a specified order of inputs depends on the original framework. Usually, it is convenient to set shapes without specifying parameter names when converting a PyTorch model, because a PyTorch model is considered a callable that usually accepts positional parameters. On the other hand, input names are convenient when converting models from files, because naming inputs is a good practice for many frameworks that serialize models to files.
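For illustration, a sketch of the positional case, assuming a ``torchvision`` model:
.. code-block:: py
:force:
import torchvision
import openvino as ov
model = torchvision.models.resnet50(weights='DEFAULT')
# PyTorch inputs are positional, so the shape can be set without a name
ov_model = ov.convert_model(model, input=[1, 3, 224, 224])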
The ``input`` parameter allows overriding original input shapes if it is supported by the model topology.
Shapes with dynamic dimensions in the original model can be replaced with static shapes for the converted model, and vice versa.
A dynamic dimension can be marked as ``-1`` in the model conversion API parameter, or as ``?`` when using ``ovc``.
For example, launch model conversion for the ONNX OCR model and specify a dynamic batch dimension for the inputs:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model("ocr.onnx", input=[("data", [-1, 150, 200, 1]), ("seq_len", [-1])])
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc ocr.onnx --input "data[?,150,200,1],seq_len[?]"
To optimize memory consumption for models with undefined dimensions at runtime, model conversion API provides the capability to define boundaries of dimensions.
The boundaries of an undefined dimension can be specified with an ellipsis in the command line or with the ``openvino.Dimension`` class in Python.
For example, launch model conversion for the ONNX OCR model and specify the boundary ``1..3`` for the batch dimension, which means that the input tensor will have a batch dimension of at least 1 and at most 3 during inference:
.. tab-set::
.. tab-item:: Python
:sync: py
.. code-block:: py
:force:
import openvino as ov
batch_dim = ov.Dimension(1, 3)
ov_model = ov.convert_model("ocr.onnx", input=[("data", [batch_dim, 150, 200, 1]), ("seq_len", [batch_dim])])
.. tab-item:: CLI
:sync: cli
.. code-block:: sh
ovc ocr.onnx --input data[1..3,150,200,1],seq_len[1..3]
In practice, not every model is designed in a way that allows changing input shapes. An attempt to change the shape of such models may lead to an exception during model conversion or later in model inference, or even to wrong inference results without an explicit exception raised. Knowledge of the model topology is required to set shapes appropriately.
For more information about shapes, follow the :doc:`inference troubleshooting <troubleshooting_reshape_errors>`
and :ref:`ways to relax shape inference flow <how-to-fix-non-reshape-able-model>` guides.
@endsphinxdirective

View File

@@ -0,0 +1,638 @@
# Transition from Legacy Conversion API {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition}
@sphinxdirective
.. meta::
:description: Transition guide from MO / mo.convert_model() to OVC / ov.convert_model().
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
In the 2023.1 OpenVINO release, OpenVINO Model Converter was introduced with the corresponding
Python API: the ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent
a lightweight alternative to ``mo`` and ``openvino.tools.mo.convert_model``, which are now considered
legacy APIs. This article summarizes all the differences between ``mo`` and ``ovc``
and provides a transition guide from the legacy API to the new API.
Parameters Comparison
#####################
The following table compares the parameters of ov.convert_model() / OVC with those of mo.convert_model() / MO.
.. list-table::
:widths: 20 25 55
:header-rows: 1
* - mo.convert_model() / MO
- ov.convert_model() / OVC
- Differences description
* - input_model
- input_model
- Along with a model object or a path to the input model, ov.convert_model() accepts a list of model parts, for example, the path to TensorFlow weights plus the path to a TensorFlow checkpoint. The OVC tool accepts an unnamed input model.
* - output_dir
- output_model
- output_model in the OVC tool sets both the output model name and the output directory.
* - model_name
- output_model
- output_model in the OVC tool sets both the output model name and the output directory.
* - input
- input
- ov.convert_model() accepts tuples for setting multiple parameters. The OVC tool's 'input' does not provide type setting or freezing functionality. ov.convert_model() does not allow input cut.
* - output
- output
- ov.convert_model() does not allow output cut.
* - input_shape
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by ``input`` parameter.
* - example_input
- example_input
- No differences.
* - batch
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by model reshape functionality. See details below.
* - mean_values
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - scale_values
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - scale
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - reverse_input_channels
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - source_layout
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - target_layout
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - layout
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - compress_to_fp16
- compress_to_fp16
- OVC provides 'compress_to_fp16' in the command line tool only, as compression is performed while saving a model to IR (Intermediate Representation).
* - extensions
- extension
- No differences.
* - transform
- N/A
- Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
* - transformations_config
- N/A
- Not available in ov.convert_model() / OVC.
* - static_shape
- N/A
- Not available in ov.convert_model() / OVC.
* - freeze_placeholder_with_value
- N/A
- Not available in ov.convert_model() / OVC.
* - use_legacy_frontend
- N/A
- Not available in ov.convert_model() / OVC.
* - silent
- verbose
- OVC / ov.convert_model provides the 'verbose' parameter instead of 'silent'. Detailed conversion information is printed if 'verbose' is set to True.
* - log_level
- N/A
- Not available in ov.convert_model() / OVC.
* - version
- version
- N/A
* - progress
- N/A
- Not available in ov.convert_model() / OVC.
* - stream_output
- N/A
- Not available in ov.convert_model() / OVC.
* - share_weights
- share_weights
- No differences.
* - framework
- N/A
- Not available in ov.convert_model() / OVC.
* - help / -h
- help / -h
- OVC provides help parameter only in command line tool.
* - example_output
- output
- OVC / ov.convert_model 'output' parameter includes capabilities of MO 'example_output' parameter.
* - input_model_is_text
- N/A
- Not available in ov.convert_model() / OVC.
* - input_checkpoint
- input_model
- All supported model formats can be passed to 'input_model'.
* - input_meta_graph
- input_model
- All supported model formats can be passed to 'input_model'.
* - saved_model_dir
- input_model
- All supported model formats can be passed to 'input_model'.
* - saved_model_tags
- N/A
- Not available in ov.convert_model() / OVC.
* - tensorflow_custom_operations_config_update
- N/A
- Not available in ov.convert_model() / OVC.
* - tensorflow_object_detection_api_pipeline_config
- N/A
- Not available in ov.convert_model() / OVC.
* - tensorboard_logdir
- N/A
- Not available in ov.convert_model() / OVC.
* - tensorflow_custom_layer_libraries
- N/A
- Not available in ov.convert_model() / OVC.
* - input_symbol
- N/A
- Not available in ov.convert_model() / OVC.
* - nd_prefix_name
- N/A
- Not available in ov.convert_model() / OVC.
* - pretrained_model_name
- N/A
- Not available in ov.convert_model() / OVC.
* - save_params_from_nd
- N/A
- Not available in ov.convert_model() / OVC.
* - legacy_mxnet_model
- N/A
- Not available in ov.convert_model() / OVC.
* - enable_ssd_gluoncv
- N/A
- Not available in ov.convert_model() / OVC.
* - input_proto
- N/A
- Not available in ov.convert_model() / OVC.
* - caffe_parser_path
- N/A
- Not available in ov.convert_model() / OVC.
* - k
- N/A
- Not available in ov.convert_model() / OVC.
* - disable_omitting_optional
- N/A
- Not available in ov.convert_model() / OVC.
* - enable_flattening_nested_params
- N/A
- Not available in ov.convert_model() / OVC.
* - counts
- N/A
- Not available in ov.convert_model() / OVC.
* - remove_output_softmax
- N/A
- Not available in ov.convert_model() / OVC.
* - remove_memory
- N/A
- Not available in ov.convert_model() / OVC.
Transition from Legacy API to New API
############################################################################
mo.convert_model() provides a wide range of preprocessing parameters. Most of these parameters have analogs in OVC or can be replaced with functionality from the ``ov.PrePostProcessor`` class.
Here is a guide for transitioning from the legacy model preprocessing to the new API preprocessing.
``input_shape``
################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, input_shape=[[1, 3, 100, 100],[1]])
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model, input=[[1, 3, 100, 100],[1]])
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --input_shape [1,3,100,100],[1] --output_dir OUTPUT_DIR
- .. code-block:: sh
:force:
ovc MODEL_NAME --input [1,3,100,100],[1] --output_model OUTPUT_MODEL
``batch``
##########
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, batch=2)
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
input_shape = ov_model.inputs[0].partial_shape
input_shape[0] = 2 # batch size
ov_model.reshape(input_shape)
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --batch 2 --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``mean_values``
################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, mean_values=[0.5, 0.5, 0.5])
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
prep.input(input_name).preprocess().mean([0.5, 0.5, 0.5])
ov_model = prep.build()
There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`.
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --mean_values [0.5,0.5,0.5] --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``scale_values``
#################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, scale_values=[255., 255., 255.])
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
prep.input(input_name).preprocess().scale([255., 255., 255.])
ov_model = prep.build()
There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`.
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --scale_values [255,255,255] --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``reverse_input_channels``
###########################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, reverse_input_channels=True)
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
prep.input(input_name).preprocess().reverse_channels()
ov_model = prep.build()
There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`.
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --reverse_input_channels --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``source_layout``
##################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
import openvino as ov
from openvino.tools import mo
ov_model = mo.convert_model(model, source_layout={input_name: ov.Layout("NHWC")})
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).model().set_layout(ov.Layout("NHWC"))
ov_model = prep.build()
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --source_layout input_name(NHWC) --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``target_layout``
##################
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
import openvino as ov
from openvino.tools import mo
ov_model = mo.convert_model(model, target_layout={input_name: ov.Layout("NHWC")})
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
ov_model = prep.build()
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --target_layout input_name(NHWC) --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``layout``
###########
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, layout={input_name: mo.LayoutMap("NCHW", "NHWC")})
- .. code-block:: py
:force:
import openvino as ov
ov_model = ov.convert_model(model)
prep = ov.preprocess.PrePostProcessor(ov_model)
prep.input(input_name).model().set_layout(ov.Layout("NCHW"))
prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
ov_model = prep.build()
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --layout "input_name(NCHW->NHWC)" --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
``transform``
##############
.. tab-set::
.. tab-item:: Python
:sync: py
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: py
:force:
from openvino.tools import mo
ov_model = mo.convert_model(model, transform=[('LowLatency2', {'use_const_initializer': False}), 'Pruning', ('MakeStateful', {'param_res_names': {'input_name': 'output_name'}})])
- .. code-block:: py
:force:
import openvino as ov
from openvino._offline_transformations import apply_low_latency_transformation, apply_pruning_transformation, apply_make_stateful_transformation
ov_model = ov.convert_model(model)
apply_low_latency_transformation(ov_model, use_const_initializer=False)
apply_pruning_transformation(ov_model)
apply_make_stateful_transformation(ov_model, param_res_names={'input_name': 'output_name'})
.. tab-item:: CLI
:sync: cli
.. list-table::
:header-rows: 1
* - Legacy API
- New API
* - .. code-block:: sh
:force:
mo --input_model MODEL_NAME --transform LowLatency2[use_const_initializer=False],Pruning,MakeStateful[param_res_names={'input_name':'output_name'}] --output_dir OUTPUT_DIR
- Not available in OVC tool. Please check Python API.
Supported Frameworks in MO vs OVC
#################################
ov.convert_model() and the OVC tool support conversion from PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle.
The following frameworks are supported only in MO and mo.convert_model(): Caffe, MXNet, and Kaldi.
@endsphinxdirective

View File

@@ -0,0 +1,33 @@
# Supported Model Formats {#Supported_Model_Formats}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features. Running the ``ovc`` CLI tool or ``openvino.save_model`` produces OpenVINO IR. All other supported formats can be converted to the IR; refer to the following articles for details on conversion (a short end-to-end sketch follows the list):
* :doc:`How to convert PyTorch <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
* :doc:`How to convert ONNX <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX>`
* :doc:`How to convert TensorFlow <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
* :doc:`How to convert TensorFlow Lite <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`
* :doc:`How to convert PaddlePaddle <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle>`
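For illustration, a minimal sketch of the common convert-and-save workflow, assuming a hypothetical ``your_model.onnx`` file:
.. code-block:: py
:force:
import openvino as ov
# Convert any supported format to an in-memory OpenVINO model
ov_model = ov.convert_model('your_model.onnx')
# Serialize to OpenVINO IR: model.xml plus the accompanying model.bin
ov.save_model(ov_model, 'model.xml')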
To choose the best workflow for your application, read the :doc:`Model Preparation section <openvino_docs_model_processing_introduction>`.
Refer to the list of all supported conversion options in :doc:`Conversion Parameters <openvino_docs_OV_Converter_UG_Conversion_Options>`.
Additional Resources
####################
* :doc:`Transition guide from the legacy to new conversion API <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
@endsphinxdirective

View File

@@ -51,7 +51,7 @@ If you installed OpenVINO via PyPI, download `the OpenVINO repository <https://g
The applications include:
- **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- **Speech Sample** - ``[DEPRECATED]`` Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- :doc:`Automatic Speech Recognition C++ Sample <openvino_inference_engine_samples_speech_sample_README>`
- :doc:`Automatic Speech Recognition Python Sample <openvino_inference_engine_ie_bridges_python_sample_speech_sample_README>`
@@ -98,13 +98,15 @@ The applications include:
- **Benchmark Application** Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
- :doc:`Benchmark C++ Tool <openvino_inference_engine_samples_benchmark_app_README>`
Note that the Python version of the benchmark tool is currently available only through the :doc:`OpenVINO Development Tools installation <openvino_docs_install_guides_install_dev_tools>`. It is not created in the samples directory but can be launched with the following command: ``benchmark_app -m <model> -i <input> -d <device>``. For more information, check the :doc:`Benchmark Python Tool <openvino_inference_engine_tools_benchmark_tool_README>` documentation.
Note that the Python version of the benchmark tool is a core component of the OpenVINO installation package and
may be executed with the following command: ``benchmark_app -m <model> -i <input> -d <device>``.
For more information, check the :doc:`Benchmark Python Tool <openvino_inference_engine_tools_benchmark_tool_README>`.
.. note::
All C++ samples support input paths containing only ASCII characters, except for the Hello Classification Sample, that supports Unicode.
All C++ samples support input paths containing only ASCII characters, except for the Hello Classification Sample, which supports Unicode.
Media Files Available for Samples
#################################
@@ -119,7 +121,7 @@ To run the sample, you can use :doc:`public <omz_models_group_public>` or :doc:`
Build the Sample Applications
#############################
.. _build-samples-linux:
Build the Sample Applications on Linux
++++++++++++++++++++++++++++++++++++++

View File

@@ -26,10 +26,9 @@ Local Deployment Options
- using Debian / RPM packages - a recommended way for Linux operating systems;
- using the PIP package manager on PyPI - the default approach for Python-based applications;
- using Docker images - if the application should be deployed as a Docker image, use a pre-built OpenVINO™ Runtime Docker image as a base image in the Dockerfile for the application container image. For more information about OpenVINO Docker images, refer to :doc:`Installing OpenVINO on Linux from Docker <openvino_docs_install_guides_installing_openvino_docker_linux>`
Furthermore, to customize your OpenVINO Docker image, use the `Docker CI Framework <https://github.com/openvinotoolkit/docker_ci>`__ to generate a Dockerfile and built the image.
- using Docker images - if the application should be deployed as a Docker image, use a pre-built OpenVINO™ Runtime Docker image as a base image in the Dockerfile for the application container image. For more information about OpenVINO Docker images, refer to :doc:`Installing OpenVINO from Docker <openvino_docs_install_guides_installing_openvino_docker>`
- Furthermore, to customize your OpenVINO Docker image, use the `Docker CI Framework <https://github.com/openvinotoolkit/docker_ci>`__ to generate a Dockerfile and build the image.
- Bundle only the necessary OpenVINO functionality with your application, an approach also called a "local distribution":
- using :doc:`OpenVINO Deployment Manager <openvino_docs_install_guides_deployment_manager_tool>` - providing a convenient way for creating a distribution package;
@@ -45,7 +44,7 @@ The table below shows which distribution type can be used for what target operat
- Operating systems
* - Debian packages
- Ubuntu 18.04 long-term support (LTS), 64-bit; Ubuntu 20.04 long-term support (LTS), 64-bit
* - RMP packages
* - RPM packages
- Red Hat Enterprise Linux 8, 64-bit
* - Docker images
- Ubuntu 22.04 long-term support (LTS), 64-bit; Ubuntu 20.04 long-term support (LTS), 64-bit; Red Hat Enterprise Linux 8, 64-bit

View File

@@ -137,6 +137,7 @@ OpenVINO Runtime uses frontend libraries dynamically to read models in different
- ``openvino_tensorflow_lite_frontend`` is used to read the TensorFlow Lite file format.
- ``openvino_onnx_frontend`` is used to read the ONNX file format.
- ``openvino_paddle_frontend`` is used to read the Paddle file format.
- ``openvino_pytorch_frontend`` is used to convert PyTorch models via the ``openvino.convert_model`` API.
Depending on the model format types that are used in the application in `ov::Core::read_model <classov_1_1Core.html#doxid-classov-1-1-core-1ae0576a95f841c3a6f5e46e4802716981>`__, select the appropriate libraries.
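As a minimal sketch of how this looks from the API side (file names are placeholders), ``read_model`` dispatches to the matching frontend automatically:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   # Each call loads the matching frontend library under the hood:
   ir_model = core.read_model("model.xml")      # OpenVINO IR
   onnx_model = core.read_model("model.onnx")   # uses openvino_onnx_frontend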

View File

@@ -437,9 +437,9 @@ To build your project using CMake with the default build tools currently availab
Additional Resources
####################
* See the :doc:`OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>` page or the `Open Model Zoo Demos <https://docs.openvino.ai/2023.0/omz_demos.html>`__ page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
* See the :doc:`OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>` page or the `Open Model Zoo Demos <https://docs.openvino.ai/2023.1/omz_demos.html>`__ page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
* :doc:`OpenVINO™ Runtime Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`
* :doc:`Using Encrypted Models with OpenVINO <openvino_docs_OV_UG_protecting_model_guide>`
* `Open Model Zoo Demos <https://docs.openvino.ai/2023.0/omz_demos.html>`__
* `Open Model Zoo Demos <https://docs.openvino.ai/2023.1/omz_demos.html>`__
@endsphinxdirective

View File

@@ -16,7 +16,19 @@ One of the main concepts for OpenVINO™ API 2.0 is being "easy to use", which i
* Development and deployment of OpenVINO-based applications.
To accomplish that, the 2022.1 release OpenVINO introduced significant changes to the installation and deployment processes. This guide will walk you through these changes.
To accomplish that, the 2022.1 release of OpenVINO introduced significant changes to the installation
and deployment processes. Further changes were implemented in 2023.1, aimed at making the installation
process even simpler.
.. tip::
These instructions are largely deprecated and should be used for versions prior to 2023.1.
The OpenVINO Development Tools package is being deprecated and will be discontinued entirely in 2025.
With this change, the OpenVINO Runtime package has become the default choice for installing the
software. It now includes all components necessary to utilize OpenVINO's functionality.
The Installer Package Contains OpenVINO™ Runtime Only
#####################################################
@@ -47,8 +59,8 @@ In previous versions, OpenVINO Development Tools was a part of the main package.
$ mo.py -h
For 2022.1 and After
++++++++++++++++++++
For 2022.1 and After (prior to 2023.1)
++++++++++++++++++++++++++++++++++++++++++
In OpenVINO 2022.1 and later, you can install the development tools only from a `PyPI <https://pypi.org/project/openvino-dev/>`__ repository, using the following command (taking TensorFlow as an example):
@@ -67,7 +79,8 @@ Then, the tools can be used by commands like:
$ pot -h
Installation of any other dependencies is not required. For more details on the installation steps, see the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>`.
Installation of any other dependencies is not required. For more details on the installation steps, see the
`Install OpenVINO Development Tools <https://docs.openvino.ai/2023.0/openvino_docs_install_guides_install_dev_tools.html>`__ article for versions prior to OpenVINO 2023.1.
Interface Changes for Building C/C++ Applications
#################################################

View File

@@ -20,7 +20,7 @@ nGraph API
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ngraph.py
.. doxygensnippet:: docs/snippets/ngraph_snippet.py
:language: Python
:fragment: ngraph:graph

View File

@@ -163,9 +163,21 @@ Example of Creating Model OpenVINO API
In the following example, the ``SinkVector`` is used to create the ``ov::Model``. For a model with states, the ``Assign`` nodes should, in addition to inputs and outputs, also point to the ``Model`` to avoid their deletion during graph transformations. You can do this with the constructor, as shown in the example, or with the special ``add_sinks(const SinkVector& sinks)`` method. After the node is deleted from the graph, the sink can also be removed from the ``ov::Model`` with the ``delete_sink()`` method. A rough Python sketch follows the snippets below.
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [model_create]
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [model_create]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.py
:language: python
:fragment: ov:model_create
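For orientation, here is a rough Python sketch of the same idea (not the referenced snippet file; it assumes the ``opset6`` factory functions for ``ReadValue``/``Assign`` and an arbitrary ``state_var`` variable name):

.. code-block:: python

   import numpy as np
   import openvino as ov
   from openvino.runtime import opset6 as ops

   # A minimal stateful graph: state(t) = state(t-1) + input
   data = ops.parameter([1, 1], np.float32, name="data")
   init = ops.constant(np.zeros((1, 1), dtype=np.float32))
   read = ops.read_value(init, "state_var")    # reads the variable, initialized from init
   summed = ops.add(read, data)
   assign = ops.assign(summed, "state_var")    # sink node that writes the state back
   result = ops.result(summed)

   # Passing the Assign node as a sink keeps it alive during transformations.
   model = ov.Model([result], [assign], [data], "stateful_sum")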
.. _openvino-state-api:
@@ -189,10 +201,22 @@ Based on the IR from the previous section, the example below demonstrates infere
One infer request and one thread will be used in this example. Using several threads is possible if there are several independent sequences. Then, each sequence can be processed in its own infer request. Inference of one sequence in several infer requests is not recommended. In one infer request, a state will be saved automatically between inferences, but if the first step is done in one infer request and the second in another, the state should be set in the new infer request manually (using the ``ov::IVariableState::set_state`` method). A hedged Python sketch of this flow follows the snippets below.
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [part1]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.py
:language: python
:fragment: ov:part1
.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [part1]
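The sketch below reuses the ``model`` built in the earlier stateful-model sketch; the running sum makes the effect of the preserved state visible:

.. code-block:: python

   import numpy as np
   import openvino as ov

   infer_request = ov.Core().compile_model(model, "CPU").create_infer_request()

   # The state is carried automatically between inferences of one infer request.
   for value in (1.0, 2.0, 3.0):
       infer_request.infer({"data": np.array([[value]], dtype=np.float32)})

   # Inspect or reset the state explicitly through the state API.
   for state in infer_request.query_state():
       print(state.name, state.state.data)   # state_var [[6.]]
   infer_request.reset_state()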
For more elaborate examples demonstrating how to work with models with states,

View File

@@ -22,7 +22,18 @@
on different platforms.
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle models and execute them on preferred devices. OpenVINO gives you the option to use these models directly or convert them to the OpenVINO IR (Intermediate Representation) format explicitly, for maximum performance.
.. note::
For more detailed information on how to convert, read, and compile supported model formats,
see the :doc:`Supported Formats article <Supported_Model_Formats_MO_DG>`.
Note that PyTorch models can be run using the
:doc:`torch.compile feature <pytorch_2_0_torch_compile>`, as well as the standard ways of
:doc:`converting PyTorch <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
or reading them directly.
OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices, or for API interoperability between OpenVINO Runtime and the underlying plugin backend.
@@ -32,17 +43,4 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
.. image:: _static/images/BASIC_FLOW_IE_C.svg
Video
####################
.. list-table::
* - .. raw:: html
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
src="https://www.youtube.com/embed/e6R13V8nbak">
</iframe>
* - **OpenVINO Runtime Concept**. Duration: 3:43
@endsphinxdirective

View File

@@ -62,7 +62,7 @@ Model input dimensions can be specified as dynamic using the model.reshape metho
Some models may already have dynamic shapes out of the box and do not require additional configuration. This can happen either because the model was generated with dynamic shapes in the source framework, or because it was converted with the Model Conversion API to use dynamic shapes. For more information, see the Dynamic Dimensions "Out of the Box" section.
The examples below show how to set dynamic dimensions with a model that has a static ``[1, 3, 224, 224]`` input shape (such as `mobilenet-v2 <https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v2.html>`__). The first example shows how to change the first dimension (batch size) to be dynamic. In the second example, the third and fourth dimensions (height and width) are set as dynamic.
The examples below show how to set dynamic dimensions with a model that has a static ``[1, 3, 224, 224]`` input shape (such as `mobilenet-v2 <https://docs.openvino.ai/2023.1/omz_models_model_mobilenet_v2.html>`__). The first example shows how to change the first dimension (batch size) to be dynamic. In the second example, the third and fourth dimensions (height and width) are set as dynamic.
.. tab-set::
@@ -175,7 +175,7 @@ The lower and/or upper bounds of a dynamic dimension can also be specified. They
.. tab-item:: C
:sync: c
The dimension bounds can be coded as arguments for `ov_dimension <https://docs.openvino.ai/2023.0/structov_dimension.html#doxid-structov-dimension>`__, as shown in these examples:
The dimension bounds can be coded as arguments for `ov_dimension <https://docs.openvino.ai/2023.1/structov_dimension.html#doxid-structov-dimension>`__, as shown in these examples:
.. doxygensnippet:: docs/snippets/ov_dynamic_shapes.c
:language: cpp
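For comparison, a small Python sketch of bounded dynamic dimensions (the model path and the 112..448 bounds are placeholders):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("mobilenet-v2.xml")   # placeholder path

   # Batch is fully dynamic; height and width are bounded to [112, 448].
   model.reshape(ov.PartialShape(
       [ov.Dimension(), 3, ov.Dimension(112, 448), ov.Dimension(112, 448)]))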

View File

@@ -68,14 +68,14 @@ in the model preparation script for such a case.
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
:language: Python
:fragment: ov:preprocess:save
:fragment: ov:preprocess:save_model
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
:language: cpp
:fragment: ov:preprocess:save
:fragment: ov:preprocess:save_model
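As a rough illustration of what such a fragment typically does (a sketch under assumed input properties, not the actual ``ov_preprocessing`` snippet content), preprocessing is baked into the model and the result is saved once:

.. code-block:: python

   import openvino as ov
   from openvino.preprocess import PrePostProcessor

   core = ov.Core()
   model = core.read_model("model.xml")   # placeholder path

   # Bake u8/NHWC input handling into the model itself...
   ppp = PrePostProcessor(model)
   ppp.input().tensor().set_element_type(ov.Type.u8).set_layout(ov.Layout("NHWC"))
   ppp.input().model().set_layout(ov.Layout("NCHW"))
   model = ppp.build()

   # ...and save it, so the application only loads and runs the prepared IR.
   ov.save_model(model, "model_with_preprocessing.xml")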
Application Code - Load Model to Target Device
@@ -110,8 +110,8 @@ Additional Resources
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
* :doc:`Model Optimizer - Optimize Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`
* :doc:`Model Caching Overview <openvino_docs_OV_UG_Model_caching_overview>`
* The `ov::preprocess::PrePostProcessor <https://docs.openvino.ai/2023.0/classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
* The `ov::pass::Serialize <https://docs.openvino.ai/2023.0/classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize.html>`__ - pass to serialize model to XML/BIN
* The `ov::set_batch <https://docs.openvino.ai/2023.0/namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b.html>`__ - update batch dimension for a given model
* The `ov::preprocess::PrePostProcessor <https://docs.openvino.ai/2023.1/classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
* The `ov::pass::Serialize <https://docs.openvino.ai/2023.1/classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize.html>`__ - pass to serialize model to XML/BIN
* The `ov::set_batch <https://docs.openvino.ai/2023.1/namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b.html>`__ - update batch dimension for a given model
@endsphinxdirective

View File

@@ -52,7 +52,7 @@ CPU plugin supports the following data types as inference precision of internal
- Floating-point data types:
- ``f32`` (Intel® x86-64, Arm®)
- ``bf16``(Intel® x86-64)
- ``bf16`` (Intel® x86-64)
- Integer data types:
- ``i32`` (Intel® x86-64, Arm®)
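As a side note, the inference precision can be selected explicitly; a hedged sketch using the string form of the ``ov::hint::inference_precision`` property:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   # Ask the CPU plugin to use bf16 internally; honored only on
   # Intel® x86-64 hardware with native bf16 support.
   core.set_property("CPU", {"INFERENCE_PRECISION_HINT": "bf16"})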

View File

@@ -2,6 +2,12 @@
@sphinxdirective
.. meta::
:description: The list of types of devices and corresponding plugins which
are compatible with OpenVINO Runtime and support inference
of deep learning models.
.. toctree::
:maxdepth: 1
:hidden:
@@ -9,19 +15,16 @@
openvino_docs_OV_UG_query_api
openvino_docs_OV_UG_supported_plugins_CPU
openvino_docs_OV_UG_supported_plugins_GPU
openvino_docs_OV_UG_supported_plugins_NPU
openvino_docs_OV_UG_supported_plugins_GNA
.. meta::
:description: The list of types of devices and corresponding plugins which
are compatible with OpenVINO Runtime and support inference
of deep learning models.
OpenVINO™ Runtime can infer deep learning models using the following device types:
* :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>`
* :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>`
* :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>`
* :doc:`Arm® CPU <openvino_docs_OV_UG_supported_plugins_CPU>`
For a more detailed list of hardware, see :doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`.
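To check which of these devices are available on a given machine, the runtime can be queried directly (output is machine-dependent):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   print(core.available_devices)   # e.g. ['CPU', 'GPU', 'GNA']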

View File

@@ -1,6 +1,8 @@
# GNA Device {#openvino_docs_OV_UG_supported_plugins_GNA}
@sphinxdirective
.. meta::
@@ -18,10 +20,14 @@ For more details on how to configure a system to use GNA, see the :doc:`GNA conf
.. note::
Intel's GNA is being discontinued and Intel® Core™ Ultra (formerly known as Meteor Lake) will be the last generation of hardware to include it.
Consider Intel's new Visual Processing Unit as a low-power solution for offloading neural network computation, for processors offering the technology.
Intel's GNA is being discontinued and Intel® Core™ Ultra (formerly known as Meteor Lake)
will be the last generation of hardware to include it.
For this reason, the GNA plugin will soon be discontinued.
Consider Intel's new Neural Processing Unit as a low-power solution for offloading
neural network computation, for processors offering the technology.
Intel® GNA Generational Differences
###########################################################

View File

@@ -0,0 +1,28 @@
# NPU Device {#openvino_docs_OV_UG_supported_plugins_NPU}
@sphinxdirective
.. meta::
:description: The NPU plugin in the Intel® Distribution of OpenVINO™ toolkit
aims at high-performance inference of neural
networks on the low-power NPU processing device.
NPU is a new generation of low-power processing unit dedicated to neural network computation.
The NPU plugin is a core part of the OpenVINO™ toolkit. For its in-depth description, see:
..
- `NPU plugin developer documentation < cmake_options_for_custom_compilation.md ??? >`__.
- `NPU plugin source files < ??? >`__.
@endsphinxdirective

Binary file not shown.

File diff suppressed because it is too large.

View File

@@ -1,121 +1,121 @@
Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32
begin_rec,,,,,,,
bert-base-cased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,559.87,591.49,182.12,188.52
bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,504.08,520.95,157.58,162.80
bert-base-cased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,137.25,145.76,38.54,40.85
bert-base-cased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,139.53,154.46,40.65,44.50
bert-base-cased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,28.38,30.17,17.53,18.44
bert-base-cased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.43,27.18,15.59,15.76
end_rec,,,,,,,
begin_rec,,,,,,,
bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,32.58,37.81,15.39,16.79
bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,28.06,33.33,13.70,14.54
bert-large-uncased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,9.04,3.16,3.27
bert-large-uncased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,8.15,8.23,3.23,3.28
bert-large-uncased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.56,2.58,1.52,1.55
bert-large-uncased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.12,2.29,1.29,1.30
end_rec,,,,,,,
begin_rec,,,,,,,
DeeplabV3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,343.44,369.62,114.69,120.87
DeeplabV3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,279.60,305.89,90.33,96.68
DeeplabV3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,89.82,108.12,25.46,25.74
DeeplabV3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,77.58,92.22,21.85,23.09
DeeplabV3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,16.77,29.67,16.77,17.29
DeeplabV3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.99,28.39,15.71,16.00
end_rec,,,,,,,
begin_rec,,,,,,,
Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,380.32,419.32,236.50,243.76
Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,324.60,353.57,207.87,227.27
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,121.90,137.75,53.07,57.23
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,111.73,127.45,44.97,46.59
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,40.04,41.11,25.72,28.38
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,41.21,42.72,25.10,25.95
end_rec,,,,,,,
begin_rec,,,,,,,
faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,65.01,70.39,17.65,18.89
faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,60.49,62.73,15.71,16.46
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,13.89,14.16,3.83,4.04
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,14.91,15.29,4.06,4.28
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.58,3.62,1.88,1.89
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,3.22,3.23,1.70,1.71
end_rec,,,,,,,
begin_rec,,,,,,,
Inception-V4,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,741.48,791.73,185.33,190.14
Inception-V4,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,647.30,693.21,161.35,165.76
Inception-V4,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,158.69,162.07,37.16,39.11
Inception-V4,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,174.96,183.19,40.53,42.20
Inception-V4,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,36.16,37.61,18.49,19.05
Inception-V4,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,32.06,33.23,16.39,16.76
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-SSD ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,4118.70,5664.02,1312.52,1488.89
Mobilenet-SSD ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,3740.30,4877.12,1156.02,1255.77
Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,918.95,1250.22,291.99,335.81
Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,945.19,1429.69,273.07,329.38
Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,236.18,280.08,152.84,173.51
Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,235.71,263.74,138.06,150.19
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-V2 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,7580.68,13077.02,3108.32,3891.42
Mobilenet-V2 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,7310.77,11049.48,2661.25,3172.62
Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,1979.69,3041.21,709.27,904.80
Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,1911.48,3538.02,619.12,804.96
Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,520.37,680.97,411.23,536.68
Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,549.38,665.54,400.97,503.12
end_rec,,,,,,,
begin_rec,,,,,,,
GPT-2,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,,,9.06,11.37
GPT-2,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,,,7.68,8.78
GPT-2,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,,,2.07,2.44
GPT-2,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,,,1.85,2.24
GPT-2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,,,1.11,1.21
GPT-2,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,,,1.02,1.06
end_rec,,,,,,,
begin_rec,,,,,,,
Resnet-50,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,2328.67,2558.36,624.81,635.89
Resnet-50,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,2062.84,2261.19,558.42,570.31
Resnet-50,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,500.96,527.47,122.28,133.20
Resnet-50,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,549.24,594.07,126.20,144.62
Resnet-50,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,114.70,123.79,60.22,66.07
Resnet-50,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,106.60,113.22,55.48,57.32
end_rec,,,,,,,
begin_rec,,,,,,,
SSD-Resnet34-1200 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,43.61,47.57,11.84,12.22
SSD-Resnet34-1200 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,38.68,40.91,10.19,10.61
SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,8.83,2.18,2.27
SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,9.84,10.10,2.34,2.58
SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.08,2.12,1.21,1.23
SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,1.83,1.84,1.03,1.07
end_rec,,,,,,,
begin_rec,,,,,,,
Unet-Camvid--0001 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,71.34,77.83,17.45,18.10
Unet-Camvid--0001 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,61.58,67.16,15.23,15.47
Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,14.39,14.76,3.52,3.71
Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,15.73,16.27,4.03,4.10
Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.99,3.05,1.90,1.95
Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.79,2.80,1.73,1.74
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,240.57,263.21,69.57,72.63
Yolo_V3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,208.97,227.96,59.95,61.34
Yolo_V3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,51.50,53.96,14.09,14.52
Yolo_V3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,55.30,60.81,14.27,15.84
Yolo_V3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,12.91,13.40,7.01,7.31
Yolo_V3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,11.42,11.73,6.29,6.39
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,1785.36,2420.73,690.57,754.41
Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,1614.61,2097.37,584.59,632.46
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,441.22,574.61,158.91,166.02
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,418.34,639.79,154.96,174.78
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,121.45,141.31,72.11,77.17
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,114.38,127.16,67.41,71.53
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,588.99,779.76,308.36,385.15
Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,574.93,655.56,273.21,322.97
Yolo_V8n,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,161.00,229.64,71.65,83.89
Yolo_V8n,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,148.22,233.11,68.23,84.40
Yolo_V8n,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,55.71,66.98,36.15,41.24
Yolo_V8n,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,53.23,62.99,34.77,37.34
end_rec,,,,,,,
Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32,UOM_T
begin_rec,,,,,,,,
bert-base-cased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,559.87,591.49,182.12,188.52,FPS
bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,504.08,520.95,157.58,162.80,FPS
bert-base-cased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,137.25,145.76,38.54,40.85,FPS
bert-base-cased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,139.53,154.46,40.65,44.50,FPS
bert-base-cased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,28.38,30.17,17.53,18.44,FPS
bert-base-cased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.43,27.18,15.59,15.76,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,32.58,37.81,15.39,16.79,FPS
bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,28.06,33.33,13.70,14.54,FPS
bert-large-uncased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,9.04,3.16,3.27,FPS
bert-large-uncased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,8.15,8.23,3.23,3.28,FPS
bert-large-uncased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.56,2.58,1.52,1.55,FPS
bert-large-uncased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.12,2.29,1.29,1.30,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
DeeplabV3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,343.44,369.62,114.69,120.87,FPS
DeeplabV3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,279.60,305.89,90.33,96.68,FPS
DeeplabV3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,89.82,108.12,25.46,25.74,FPS
DeeplabV3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,77.58,92.22,21.85,23.09,FPS
DeeplabV3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,16.77,29.67,16.77,17.29,FPS
DeeplabV3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.99,28.39,15.71,16.00,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,380.32,419.32,236.50,243.76,FPS
Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,324.60,353.57,207.87,227.27,FPS
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,121.90,137.75,53.07,57.23,FPS
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,111.73,127.45,44.97,46.59,FPS
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,40.04,41.11,25.72,28.38,FPS
Efficientdet-D0,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,41.21,42.72,25.10,25.95,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,65.01,70.39,17.65,18.89,FPS
faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,60.49,62.73,15.71,16.46,FPS
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,13.89,14.16,3.83,4.04,FPS
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,14.91,15.29,4.06,4.28,FPS
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.58,3.62,1.88,1.89,FPS
faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,3.22,3.23,1.70,1.71,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Inception-V4,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,741.48,791.73,185.33,190.14,FPS
Inception-V4,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,647.30,693.21,161.35,165.76,FPS
Inception-V4,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,158.69,162.07,37.16,39.11,FPS
Inception-V4,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,174.96,183.19,40.53,42.20,FPS
Inception-V4,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,36.16,37.61,18.49,19.05,FPS
Inception-V4,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,32.06,33.23,16.39,16.76,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
"Mobilenet-SSD ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,4118.70,5664.02,1312.52,1488.89,FPS
"Mobilenet-SSD ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,3740.30,4877.12,1156.02,1255.77,FPS
"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,918.95,1250.22,291.99,335.81,FPS
"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,945.19,1429.69,273.07,329.38,FPS
"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,236.18,280.08,152.84,173.51,FPS
"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,235.71,263.74,138.06,150.19,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
"Mobilenet-V2 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,7580.68,13077.02,3108.32,3891.42,FPS
"Mobilenet-V2 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,7310.77,11049.48,2661.25,3172.62,FPS
"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,1979.69,3041.21,709.27,904.80,FPS
"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,1911.48,3538.02,619.12,804.96,FPS
"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,520.37,680.97,411.23,536.68,FPS
"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,549.38,665.54,400.97,503.12,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
GPT-2,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,,,9.06,11.37,FPS
GPT-2,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,,,7.68,8.78,FPS
GPT-2,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,,,2.07,2.44,FPS
GPT-2,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,,,1.85,2.24,FPS
GPT-2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,,,1.11,1.21,FPS
GPT-2,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,,,1.02,1.06,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Resnet-50,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,2328.67,2558.36,624.81,635.89,FPS
Resnet-50,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,2062.84,2261.19,558.42,570.31,FPS
Resnet-50,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,500.96,527.47,122.28,133.20,FPS
Resnet-50,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,549.24,594.07,126.20,144.62,FPS
Resnet-50,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,114.70,123.79,60.22,66.07,FPS
Resnet-50,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,106.60,113.22,55.48,57.32,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
"SSD-Resnet34-1200 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,43.61,47.57,11.84,12.22,FPS
"SSD-Resnet34-1200 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,38.68,40.91,10.19,10.61,FPS
"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,8.83,2.18,2.27,FPS
"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,9.84,10.10,2.34,2.58,FPS
"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.08,2.12,1.21,1.23,FPS
"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,1.83,1.84,1.03,1.07,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
"Unet-Camvid--0001 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,71.34,77.83,17.45,18.10,FPS
"Unet-Camvid--0001 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,61.58,67.16,15.23,15.47,FPS
"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,14.39,14.76,3.52,3.71,FPS
"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,15.73,16.27,4.03,4.10,FPS
"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.99,3.05,1.90,1.95,FPS
"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.79,2.80,1.73,1.74,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Yolo_V3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,240.57,263.21,69.57,72.63,FPS
Yolo_V3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,208.97,227.96,59.95,61.34,FPS
Yolo_V3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,51.50,53.96,14.09,14.52,FPS
Yolo_V3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,55.30,60.81,14.27,15.84,FPS
Yolo_V3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,12.91,13.40,7.01,7.31,FPS
Yolo_V3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,11.42,11.73,6.29,6.39,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,1785.36,2420.73,690.57,754.41,FPS
Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,1614.61,2097.37,584.59,632.46,FPS
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,441.22,574.61,158.91,166.02,FPS
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,418.34,639.79,154.96,174.78,FPS
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,121.45,141.31,72.11,77.17,FPS
Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,114.38,127.16,67.41,71.53,FPS
end_rec,,,,,,,,
begin_rec,,,,,,,,
Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,588.99,779.76,308.36,385.15,FPS
Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,574.93,655.56,273.21,322.97,FPS
Yolo_V8n,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,161.00,229.64,71.65,83.89,FPS
Yolo_V8n,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,148.22,233.11,68.23,84.40,FPS
Yolo_V8n,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,55.71,66.98,36.15,41.24,FPS
Yolo_V8n,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,53.23,62.99,34.77,37.34,FPS
end_rec,,,,,,,,

docs/_static/css/coveo_custom.css vendored Normal file
View File

@@ -0,0 +1,45 @@
:root {
--atomic-primary: rgb(var(--ost-color-navbar-background));
--atomic-primary-light: rgb(var(--ost-color-sst-dropdown-background-active));
--atomic-border-radius-md: 0.1rem;
--atomic-border-radius-lg: 0.2rem;
--atomic-border-radius-xl: 0.3rem;
}
::part(result-list-grid-clickable-container) {
border: 1px solid lightgray;
border-radius: var(--atomic-border-radius-md);
}
.view-selector-container {
grid-area: atomic-section-facets;
display: flex;
align-items: center;
column-gap: 0.5rem;
}
.view-selector-container .view-selector,
.view-selector-container .view-selector:hover,
.view-selector-container .view-selector:active,
.view-selector-container .view-selector:focus {
border: none;
background-color: transparent;
background: none;
outline: none;
padding: 4px 12px;
font-size: 14px;
display: flex;
grid-gap: 8px;
align-items: center;
justify-content: center;
}
.view-selector-container .view-selector i {
margin: 0;
}
.view-selector-container .view-selector.selected {
border-bottom: 2px solid rgb(var(--ost-color-navbar-background));
font-weight: 700;
color: rgb(var(--ost-color-navbar-background));
}

View File

@@ -95,6 +95,38 @@ ul#navbar-main-elements > li:hover {
}
/* Move sidebar menu arrows to the left */
.bd-sidebar label {
left: 0px;
height: 20px;
width: 20px;
top: 5px;
}
/* Moving dropdown arrows to the left */
details.sd-dropdown .sd-summary-up,
details.sd-dropdown .sd-summary-down {
left: 10px;
}
/* Title is at the same place for both open and closed states */
details.sd-dropdown:not([open]).sd-card {
padding: 0px;
}
/* Title is at the same place for both open and closed states */
details.sd-dropdown[open].sd-card {
padding: 0px;
}
/* Move title 40px away from the arrow */
details.sd-dropdown .sd-summary-title {
padding-left: 40px;
}
/* Second level items */
#bd-docs-nav > div > ul > li > ul {
padding-left: 0.3rem;

View File

@@ -167,3 +167,7 @@ h1 {
max-width: 100%;
}
}
.sd-row {
--sd-gutter-x: 0rem!important;
}

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18bc08f90f844c09594cfa538f4ba2205ea2e67c849927490c01923e394ed11a
size 71578
oid sha256:63301a7c31b6660fbdb55fb733e20af6a172c0512455f5de8c6be5e1a5b3ed0b
size 71728

Some files were not shown because too many files have changed in this diff.